hello folks,
Do you remember my mail explaining that I was working on an evolution of
class/object management, supporting class property deletion and class
inheritance with backward compatibility?
Here is the status of this study:
My idea around class versioning seems to be working. I can now delete class
properties while still managing both older and newer class instances. It
also doesn't seem to change the existing behavior for classes/docs/objects
already embedded in XWiki Enterprise.
I have implemented a draft based on a two-week-old core. Basically, I had to
modify the core code, but I always tried not to change any existing logic,
to touch as little code as possible, and to reuse existing features as much
as possible.
Naturally, I still have to write and run some more tests to make sure there
is no hidden critical problem.
I also still have to study the custom mapping issue in detail...
Finally, I'm currently implementing the inheritance mechanism...
As it would take too long to explain everything here, and as I prefer that
you see it with your own eyes, I will try to set up a demo environment as
soon as possible, with an explanation of why/what/how I did this...
Best regards
Pascal
Hi,
Not really an XWiki problem, but anyway. I'm using XWiki with the
Toucan skin (style-red.css). If I make a page with a very big table in
it (2500 rows, 3 columns of 150 characters), most of the page can be
viewed, but at around 75% of the way down, the whole page becomes
black. The areas where the modification data, comments, attachments
and license should be rendered are also black. Only the last pixels
are normal again, and I see one last bg-RED.png. Has anyone else seen
this behaviour? I get the same on multiple versions of Firefox, but
Internet Explorer and Safari render it fine.
Regards,
Leen
Never mind. I figured out that the error was showing up because I was logged
in as a normal user.
I logged in as superadmin and it worked!
I have one question though: once I configure my wiki to authenticate against
LDAP, should it still allow the "Admin" username (password: admin) to log
in? Because that does not happen in my case.
Thanks
"You are not allowed to view this document or perform this action."
I get this error when I try to create a new Class in my standalone
installation of XWiki.
I have another member of the team accessing this installation on my system.
Is that a problem?
Also, we were able to create a class before I made a small change to the
core (I just added a print statement to UploadAction.java),
rebuilt the core, and copied the new jar to my WEB-INF/lib folder.
Was I supposed to build the Enterprise also?
Please advise.
Thanks
Hi devs,
Since I started working on XWiki, I have tried to make it as
standards-compliant as possible, but I haven't seen the same tendency from
the rest of the team.
The question is, do we want XWiki to be an "almost web 2.0 wannabe"
project, or "the most 2.0" web application ever?
At the dawn of the web, people cared only about the end result, how the
page looked viewed in Netscape or Internet Explorer (sometimes "and"
instead of "or"). But as the web gathered more attention, and people
started to see it as the next big thing, a platform with a lot of
potential, other concerns were raised, such as how the inner and
generated code looks, how efficient is the page, how "zen" is the
interface and the code... All these are part of Web 2.0. Most people
tend to think of it as "the social web", but others also think of it as
"the semantic web" and "the web of trust", and even as "the web of
accessibility" (ubiquitous web). This, however, comes at a price, that
of "high quality".
Just as there are software design patterns that make the difference
between ugly code and good code, there are also Web design patterns and
Web bad practices. While most sites are still ugly, even though they
provide ultra-cool features and are very popular (like MySpace), there
is an increasing number of sites that are very appreciated by the
developers, like CSSZenGarden. I never use a poorly written site, no
matter how "cool" it is, unless I have to. Sites like AListApart, PPK,
WASP and others keep propagating good practices that each site should
use, like separating the content from style and behavior, using semantic
markup for the content, and not meaningless divs and spans (or worse,
tables), dropping deprecated JS practices like browser detection,
document.all, document.write, trying to keep css hacks to a minimum,
having a site that looks good and works well without CSS or javascript,
and many others. The final goal is to have sites that are efficient,
clean, accessible, browser-independent, but in a full-featured browsing
mode they provide the same (or better) browsing experience as with any
"only looks matter" page.
I can give many examples of sites which are so bad that I get angry
whenever I have to use them (most of them are sites which people must
use, like SNCF, online banking, e-government). They are all colorful and
flashy and shiny and rounded, and I guess mediocre IE users are really
happy with them, but if I don't use IE and I keep my hand on the
keyboard rather than on the mouse, they stop working as expected. The
web is a large platform, with different browsers (agents, to be more
generic) and different users, and each combination is different. This is
why a site must work in any environment, and it must not make
assumptions about the user ("the user is doing what I am doing, so if it
works for me, it works for anybody"). Doing otherwise is discrimination,
as such a site denies access to many categories of users.
In platform/XE we're following some of these guidelines, but that's not
the case with other products.
Goals: an XWiki product should do its best to ensure that it:
- works without javascript
- works without css
- works without images
- works with any input method (I don't usually click the submit button,
but press enter, so doing something only on "onclick" is bad)
- changing the text size doesn't mess the layout
- a crawler does not change the content (GET does not alter the
database, except maybe statistics)
- works from a text browser (links)
- looks good enough when read by a voice browser or displayed with braille
- looks good on different displays, ranging from old 640x480 desktops to
large 4000x3000 displays, 600x800 handhelds and smaller phone displays,
1024x1280 portrait displays, and also when printed
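To illustrate the crawler-safety goal above: any link a crawler follows is a GET, so destructive actions must go through POST. A minimal sketch (the URL and labels here are made up for the example):

```html
<!-- Bad: a crawler or link prefetcher following this link deletes the page. -->
<a href="/xwiki/bin/delete/Main/SomePage">Delete this page</a>

<!-- Better: the state-changing action requires a POST, which crawlers
     and prefetchers do not issue. -->
<form action="/xwiki/bin/delete/Main/SomePage" method="post">
  <input type="submit" value="Delete this page"/>
</form>
```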
An old and false assumption is that a site that respects these goals
looks like a plain HTML file good enough to display some text, without
any interactivity and no images... No, the idea is to ensure that the
site looks good enough without any browser feature, but when such a
feature is available (like support for javascript or images) these
features enhance the content already available with a new look, or a new
behavior. Of course, it is impossible to ensure that everything that is
available with javascript/flash is also available without it, but this
is limited to very graphical/interactive features, like drawing, games,
or animated content, which is usually done using flash.
BAD practices: each developer should try to avoid doing any of these:
- using style attributes
- putting CSS code inside a page
- putting javascript inside a page (both in a <script> tag and in
onclick attributes)
- putting a lot of divs and spans just for formatting
- using meaningless classnames or ids on HTML elements
- using invalid markup or css
- copy/pasting some snippet from a web page without checking/cleaning
the code; most of these snippets were made in the previous millennium,
and use really bad and deprecated practices, and they don't work (good
enough) with modern browsers or when combined with other snippets and
new frameworks
- writing convoluted css selectors
While it is acceptable to do some of these things locally, in the first
phase of writing a feature, that is just for the PoC, and the code should
be cleaned up before committing.
GOOD practices: we should try to have this kind of code in the future:
- putting the behavior in an external javascript, and attaching it to
the target elements on onload or using XBL
- using "alt" for images
- keeping the content markup simple and enhancing the content with
needed divs/spans from javascript or using XBL
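As a small sketch of what the first good practice looks like (the ids and function names here are hypothetical; `Event.observe` and `$()` are from the Prototype library mentioned elsewhere in this mail):

```html
<!-- Bad: behavior mixed into the markup. -->
<form action="/search" method="get" onsubmit="return doSearch();">...</form>

<!-- Better: clean markup... -->
<form id="search-form" action="/search" method="get">
  <input type="text" name="q"/>
  <input type="submit" value="Search"/>
</form>

<!-- ...with the behavior attached from an external .js file on load:
     Event.observe(window, 'load', function() {
       Event.observe($('search-form'), 'submit', doSearch);
     });
     Without javascript, the form still submits normally. -->
```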
So, what does this have to do with the title of this message? Well, I have
already started simplifying the HTML generated by the wiki markup, as
proposed in another mail.
I don't like any of the current skins, as they were all done quickly, on
top of the previous one, either preserving the flaws or covering them by
adding more rules and making the skin heavier. Unfortunately, the last
one, Toucan, although it looks very nice and classy, is a nightmare to
modify (and sometimes even to view, in Firefox for Ubuntu users, for
instance, where it freezes the browser). I think everybody would be
happier with a cleaner skin, not a more convoluted one. As it is now, it
is very fragile; many bugs have been discovered since the first version,
most of them caused by improper CSS rules. Fixing something
sometimes breaks something else, like it happens in the current WYSIWYG
editor, and we all agree that the editor is bad. And instead of reducing
the size of the stylesheets, so that it would be easier for other people
to change the skin, we now have a huge file where users easily get lost.
Also, we had to disable the CSS validation because of some invalid CSS
rules that cannot be removed because they are at the core of the skin IE
compatibility. That could have been ensured in a different way, as
Albatross shows. Relying on browser hacks is very bad.
So, what can I say about XE:
- The markup is mostly OK, with some minor problems
- The JS needs to be cleaned, as we're using some old snippets that are
very poorly written; all the code should be upgraded to use prototype
- The CSS used to be of medium quality in Albatross, but now is of poor
quality in Toucan
- It works pretty well without javascript, css or images (some problems
with white text on a white background when images are off) and works well
from a text browser
I know that it is time consuming to ensure the quality of the features,
and clients need features more than quality. Still, are we willing to
have products of poor quality? Vincent has been trying to improve the
quality of the java code, and it annoys him when people do nothing to
improve it: not committing tests, breaking the checkstyle, or introducing
regressions. Well, I feel the same when
people commit invalid markup, forms with "onclick" submission, invalid
CSS, deprecated javascript... Unfortunately we don't have a checkstyle
tool for the interface, that would automatically break the build when
people do bad commits. We have the validation checks (and one had to be
disabled), and that's the best we can do for the moment. Maybe later
we'll be able to write a test that checks if there is any inline
javascript or css, or checks the complexity of the HTML markup (like a
maximum depth, forbidding too many nested divs).
I'm sorry if this message hurts somebody; I'm not trying to blame
people, just sounding the alarm because we're going in the wrong
direction, deeper and deeper. We should hire people to work on QA for the
web part, but these kinds of skills are hard to find, and with so much
manpower needed for the core and for adding new features, we might not
have the resources to invest in Web QA.
It is hard to introduce this quality requirement in the existing large
projects, but it would be better to start new projects with a clean web
part in mind.
So, what else can we do to improve the quality of the web part? Any ideas?
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi,
I have a question about sorting search results.
I think XWiki uses XWiki.Results to display search results. Furthermore,
XWiki.Results uses the javascript
/xwiki/skins/albatross/scripts/table/tablefilterNsort.js to sort and
filter the rows. However, the Date column in the search results may be
sorted incorrectly.
As defined in XWiki.Results, the Date column format is "yyyy MMM dd
at HH:mm". Thus a date will show as "2008 Mar 23 at 22:25" or
"2008 Apr 09 at 16:21". If I sort the Date column in the search results
in decreasing order, I expect a result like this:
2008 Apr 09 at 16:21
2008 Mar 23 at 22:25
since Apr > Mar.
Nevertheless, the actual result is the opposite.
After checking the code of tablefilterNsort.js, I found there is no
special sort function in the script. Thus, the date "yyyy MMM dd at
HH:mm" is treated as a plain string.
I think there are two ways to solve this problem.
1. Change the date format: don't use Apr, use 04 instead.
To do this, you only need to change one word in XWiki.Results, from
$xwiki.formatDate($bentrydoc.date,"yyyy MMM dd") at
$xwiki.formatDate($bentrydoc.date,"HH:mm")
to $xwiki.formatDate($bentrydoc.date,"yyyy MM dd") at
$xwiki.formatDate($bentrydoc.date,"HH:mm")
2. Change script tablefilterNsort.js. Add a special sort function for
the "yyyy MMM dd at HH:mm" format.
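For the second option, here is a minimal sketch of such a sort function (the names are hypothetical, not taken from tablefilterNsort.js): it turns the formatted date into a key that sorts correctly as a plain string.

```javascript
// Map month abbreviations to sortable two-digit numbers.
var MONTHS = { Jan: '01', Feb: '02', Mar: '03', Apr: '04', May: '05',
               Jun: '06', Jul: '07', Aug: '08', Sep: '09', Oct: '10',
               Nov: '11', Dec: '12' };

// Turn "2008 Apr 09 at 16:21" into the key "2008-04-09 16:21".
function dateKey(text) {
  var m = text.match(/^(\d{4}) (\w{3}) (\d{2}) at (\d{2}:\d{2})$/);
  if (!m) return text; // not a date cell: fall back to string comparison
  return m[1] + '-' + MONTHS[m[2]] + '-' + m[3] + ' ' + m[4];
}

// Comparator the table sorter could use for the Date column.
function compareDates(a, b) {
  var ka = dateKey(a), kb = dateKey(b);
  return ka < kb ? -1 : (ka > kb ? 1 : 0);
}
```

With this comparator, "2008 Mar 23 at 22:25" sorts before "2008 Apr 09 at 16:21", as expected.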
I think it's a very small problem, but I thought I should tell you.
WDYT?
Sincerely,
Wang Ning
I think my Group and User rights for Spaces are somehow corrupted. Is there
some way to verify a problem in the hibernate tables? I was running XWiki
on an Oracle 9 database, which I didn't realize was unsupported until
reading through the XWiki dev mailing list. I had many exceptions about
clob mapping in the xwiki log, even after upgrading to the Oracle 10 JDBC
driver, which should have fixed it. I tried to delete and re-add the
Groups, but when I re-added them they seemed to already have the rights to
the Spaces they had before I deleted them. It seems like the Group deletion
failed.
I want to export xwiki data from Oracle 9 and import it into a new Oracle 10
database. The import would allow me to select specific items to import.
So, I tried to use the export Admin function, which can successfully export
everything, but then the import function cannot read anything very large.
Even if you increase the maximum upload size and the heap size, you will
still run out of memory on import (my export data was about 45MB). I
found jira bugs and mail posts related to this problem. I also encountered
XWIKI-1809 during import.
I can use Oracle tools to extract and reload the data but it will reload the
same corrupt data that I was trying to fix.
I found the xwiki-packager-plugin, but it seems to have the same issues as
the import/export Admin function, and it seems to be limited to HSQLDB. At
least there is a method called shutdownHSQLDB and a comment about needing
to figure something out for other databases.
Is there some way to validate and fix XWiki Group/User rights before I
extract using Oracle? Is there some way to extract a portion of the data
stored in the hibernate-managed tables? I would like to extract the data
for each space without any users and groups. I could help on the import/
export if it can be fixed, but I'm not sure that requiring all xwiki data
in memory at once can be fixed. Has another approach been discussed?
Any suggestions on any of the above issues are welcome. I'd really like to
use Xwiki!
Thanks
Glenn Everitt
--
Hi,
I was wondering why my top menu does not have a Watch menu, and found this
comment in menuview.vm:
## We're disabling the Watchlist menu for now since the Watchlist
doesn't work yet in multiwiki
## mode. Remove when http://jira.xwiki.org/jira/browse/XPWATCHLIST-4 is
fixed.
As far as I can see in Jira
(http://jira.xwiki.org/jira/browse/XPWATCHLIST-4), this issue was fixed
by Jean-Vincent in r7914.
Please, is there any reason to keep the entry filtered when working in
virtual mode, or can it be unfiltered now?
Thanks!
--
Ricardo Rodríguez
Your EPEC Network ICT Team