Never mind. I figured out that the error was showing up because I was logged in
as a normal user.
I logged in as superadmin and it worked!
I have one question though: once I configure my wiki to authenticate against
LDAP, should it still allow the "Admin" username (Pwd: admin) to log in?
That does not happen in my case.
Thanks
"You are not allowed to view this document or perform this action."
I get this error when I try to create a new Class in my standalone
installation of XWiki.
I have another member in the team accessing this installation on my system.
Is that a problem?
Also, we were able to create a class before I made a small change to the
core (I just added a print statement to UploadAction.java), rebuilt the
core, and copied the new jar to my WEB-INF/lib folder.
Was I supposed to build the Enterprise also?
Please advise.
Thanks
Hi devs,
Since I started working on XWiki, I have tried to make it as
standards-compliant as possible, but I haven't seen the same
tendency from the rest of the team.
The question is, do we want XWiki to be an "almost web 2.0 wannabe"
project, or "the most 2.0" web application ever?
At the dawn of the web, people cared only about the end result, how the
page looked viewed in Netscape or Internet Explorer (sometimes "and"
instead of "or"). But as the web gathered more attention, and people
started to see it as the next big thing, a platform with a lot of
potential, other concerns were raised, such as how the inner and
generated code looks, how efficient the page is, how "zen" the
interface and the code are... All these are part of Web 2.0. Most people
tend to think of it as "the social web", but others also think of it as
"the semantic web" and "the web of trust", and even as "the web of
accessibility" (ubiquitous web). This, however, comes at a price, that
of "high quality".
Just as there are software design patterns that make the difference
between ugly code and good code, there are also Web design patterns and
Web bad practices. While most sites are still ugly, although they
provide ultra-cool features and are very popular (like myspace), there
is an increasing number of sites that are very appreciated by the
developers, like CSSZenGarden. I never use a poorly written site, no
matter how "cool" it is, unless I have to. Sites like AListApart, PPK,
WASP and others keep propagating good practices that each site should
use, like separating the content from style and behavior, using semantic
markup for the content, and not meaningless divs and spans (or worse,
tables), dropping deprecated JS practices like browser detection,
document.all, document.write, trying to keep css hacks to a minimum,
having a site that looks good and works well without CSS or javascript,
and many others. The final goal is to have sites that are efficient,
clean, accessible, browser-independent, but in a full-featured browsing
mode they provide the same (or better) browsing experience as with any
"only looks matter" page.
I can give many examples of sites which are so bad that I get angry
whenever I have to use them (most of them are sites which people must
use, like SNCF, online banking, e-government). They are all colorful and
flashy and shiny and rounded, and I guess mediocre IE users are really
happy with them, but if I don't use IE and I keep my hand on the
keyboard rather than on the mouse, they stop working as expected. The
web is a large platform, with different browsers (agents, to be more
generic) and different users, and each combination is different. This is
why a site must work in any environment, and it must not make
assumptions about the user ("the user is doing what I am doing, so if it
works for me, it works for anybody"). Doing otherwise is
discrimination, as such a site denies access to many categories of users.
In platform/XE we're following some of these guidelines, but that's not
the case with other products.
Goals: an XWiki product should do its best to ensure that it:
- works without javascript
- works without css
- works without images
- works with any input method (I don't usually click the submit button,
but press enter, so doing something only on "onclick" is bad)
- changing the text size doesn't break the layout
- a crawler does not change the content (GET does not alter the
database, except maybe statistics)
- works from a text browser (links)
- looks good enough when read by a voice browser or displayed with braille
- looks good on different displays, ranging from old 640x480 desktops to
large 4000x3000 displays, 600x800 handhelds and smaller phone displays,
1024x1280 portrait displays, and also when printed
An old and false assumption is that a site that respects these goals
looks like a plain HTML file good enough to display some text, without
any interactivity or images... No, the idea is to ensure that the
site looks good enough without any browser feature, but when such a
feature is available (like support for javascript or images) these
features enhance the content already available with a new look, or a new
behavior. Of course, it is impossible to ensure that everything that is
available with javascript/flash is also available without it, but this
is limited to very graphical/interactive features, like drawing, games,
or animated content, which is usually done using flash.
BAD practices: each developer should try to avoid doing any of these:
- using style attributes
- putting CSS code inside a page
- putting javascript inside a page (both in a <script> tag and in
onclick attributes)
- putting a lot of divs and spans just for formatting
- using meaningless classnames or ids on HTML elements
- using invalid markup or css
- copy/pasting some snippet from a web page without checking/cleaning
the code; most of these snippets were made in the previous millennium,
and use really bad and deprecated practices, and they don't work (good
enough) with modern browsers or when combined with other snippets and
new frameworks
- writing convoluted css selectors
While it is acceptable to do some of these locally, in the first
phase of writing a feature, that is just for the PoC; the code should
be cleaned up before committing.
GOOD practices: we should try to have this kind of code in the future:
- putting the behavior in an external javascript, and attaching it to
the target elements on onload or using XBL
- using "alt" for images
- keeping the content markup simple and enhancing the content with
needed divs/spans from javascript or using XBL
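As a minimal sketch of the first good practice, the behavior can live in an external script and be attached on load, with no onclick/onsubmit attributes in the markup. The form id "search-form" and the field name "query" below are hypothetical, not actual XWiki identifiers, and in-project code would likely use Prototype's Event.observe; plain DOM calls just keep the sketch self-contained:

```javascript
// Hypothetical enhancement kept in an external .js file; the markup
// itself carries no onclick/onsubmit attributes.
function enhanceSearchForm(form) {
  // Listen for 'submit' instead of a button 'click': this fires for the
  // Enter key, mouse clicks, and keyboard activation alike.
  form.addEventListener('submit', function (event) {
    // Example enhancement: block submissions with an empty query field.
    if (form.elements.query && form.elements.query.value === '') {
      event.preventDefault();
    }
  });
}

// Attach on load only when a DOM is present (progressive enhancement:
// without javascript the form still submits normally).
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    var form = document.getElementById('search-form');
    if (form) enhanceSearchForm(form);
  });
}
```

Because the script only enhances elements it finds, the page keeps working when javascript is disabled, which is exactly the degradation path described above.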
So, what does this have to do with the title of this message? Well, I
have already started simplifying the HTML generated by the wiki
markup, as proposed in another mail.
I don't like any of the current skins, as they were all done quickly, on
top of the previous one, either preserving the flaws or covering them by
adding more rules and making the skin heavier. Unfortunately, the last
one, Toucan, although it looks very nice and classy, is a nightmare to
modify (and sometimes even to view, in Firefox for Ubuntu users, for
instance, where it freezes the browser). I think everybody would be
happier with a cleaner skin, not a more convoluted one. The way it is
now, it is very fragile: many bugs have been discovered since the first
version, most of them caused by improper CSS rules. Fixing something
sometimes breaks something else, as happens in the current WYSIWYG
editor, and we all agree that the editor is bad. And instead of reducing
the size of the stylesheets, so that it would be easier for other people
to change the skin, we now have a huge file where users easily get lost.
Also, we had to disable the CSS validation because of some invalid CSS
rules that cannot be removed, since they are at the core of the skin's
IE compatibility. That could have been ensured differently, as
Albatross shows. Relying on browser hacks is very bad.
So, what can I say about XE:
- The markup is mostly OK, with some minor problems
- The JS needs to be cleaned, as we're using some old snippets that are
very poorly written; all the code should be upgraded to use Prototype
- The CSS used to be of medium quality in Albatross, but now is of poor
quality in Toucan
- It works pretty well without javascript, css or images (some problems
with white text on a white background when images are off) and works
well from a text browser
I know that it is time consuming to ensure the quality of the features,
and clients need features more than quality. Still, are we willing to
have products of poor quality? Vincent has been trying to improve the
quality of the java code, and it annoys him when people do nothing to
improve it: not committing tests, breaking the checkstyle, or
introducing regressions. Well, I feel the same when
people commit invalid markup, forms with "onclick" submission, invalid
CSS, deprecated javascript... Unfortunately we don't have a checkstyle
tool for the interface, that would automatically break the build when
people do bad commits. We have the validation checks (and one had to be
disabled), and that's the best we can do for the moment. Maybe later
we'll be able to write a test that checks if there is any inline
javascript or css, or the complexity of the HTML markup (like a maximum
depth and forbidding too many nested divs).
I'm sorry if this message hurts anybody; I'm not trying to blame
people, just sounding the alarm because we're going in the wrong
direction, deeper and deeper. We should hire people to work on QA
for the web part, but these skills are hard to find, and with so much
manpower needed for the core and for adding new features, we might not
have the resources to invest in Web QA.
It is hard to introduce this quality requirement in the existing large
projects, but it would be better to start new projects with a clean web
part in mind.
So, what else can we do to improve the quality of the web part? Any ideas?
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi,
I have a question about sorting search results.
I think XWiki uses XWiki.Results to display search results, and
XWiki.Results uses the javascript
/xwiki/skins/albatross/scripts/table/tablefilterNsort.js to sort and
filter the rows. However, the date column in the search results may be
sorted incorrectly.
As defined in XWiki.Results, the date format is "yyyy MMM dd
at HH:mm", so a date shows up like "2008 Mar 23 at 22:25" or
"2008 Apr 09 at 16:21". If I sort the date column in the search result
in decreasing order, I expect a result like this:
2008 Apr 09 at 16:21
2008 Mar 23 at 22:25
as Apr > Mar.
Nevertheless, the actual result is the opposite.
After checking the code of tablefilterNsort.js, I found that there are
no special sort functions in the script, so the date "yyyy MMM dd at
HH:mm" is treated as a plain string.
I think there are two ways to solve this problem.
1. Change the date format: don't use "Apr", use "04" instead.
To do this, you only need to change one word in XWiki.Results:
change $xwiki.formatDate($bentrydoc.date,"yyyy MMM dd") at
$xwiki.formatDate($bentrydoc.date,"HH:mm")
to $xwiki.formatDate($bentrydoc.date,"yyyy MM dd") at
$xwiki.formatDate($bentrydoc.date,"HH:mm")
2. Change the script tablefilterNsort.js and add a special sort
function for the "yyyy MMM dd at HH:mm" format.
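The second option could look like the sketch below: a comparator that parses the "yyyy MMM dd at HH:mm" strings into timestamps before comparing them. The function names are hypothetical, and how tablefilterNsort.js actually hooks in custom comparators would need to be checked against the real script; the month table also assumes the English abbreviations produced by the "MMM" pattern.

```javascript
// Month abbreviations as produced by the "MMM" pattern in an English locale.
var MONTHS = { Jan: 0, Feb: 1, Mar: 2, Apr: 3, May: 4, Jun: 5,
               Jul: 6, Aug: 7, Sep: 8, Oct: 9, Nov: 10, Dec: 11 };

// Turn a cell like "2008 Apr 09 at 16:21" into a numeric timestamp.
function parseResultsDate(text) {
  var m = text.match(/(\d{4}) (\w{3}) (\d{2}) at (\d{2}):(\d{2})/);
  if (!m) return 0; // unparseable cells sort first
  return new Date(+m[1], MONTHS[m[2]], +m[3], +m[4], +m[5]).getTime();
}

// Comparator suitable for an ascending sort; swap the arguments
// (or negate the result) for a descending sort.
function compareResultsDates(a, b) {
  return parseResultsDate(a) - parseResultsDate(b);
}
```

With such a comparator plugged into the table sorter, "2008 Apr 09 at 16:21" correctly sorts after "2008 Mar 23 at 22:25" instead of being compared character by character.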
I think that's a very small problem. However, I think I should tell you.
WDYT?
Sincerely,
Wang Ning
I think somehow my Group and User rights for Spaces are corrupted. Is there
some way to verify a problem in the hibernate tables? I was running XWiki
on an Oracle 9 database, which I didn't realize was not supported until
reading through the Xwiki Dev maillist. I had many exceptions about clob
mapping in the xwiki log even after upgrading to the Oracle 10 JDBC driver, which
should have fixed it. I tried to delete and re-add the Groups but when I
re-added the Groups they seemed to already have the rights to the Spaces
they had before I deleted them. It seems like the delete Group failed.
I want to export xwiki data from Oracle 9 and import it into a new Oracle 10
database. The import would allow me to select specific items to import.
So, I tried to use the export Admin function which can successfully export
everything but then the import function cannot read anything very large.
Even if you increase the maximum upload size and the heap size, you
will still run out of memory on import (my export data was about 45MB). I
found jira bugs and mail posts related to this problem. I also encountered
XWIKI-1809 during import.
I can use Oracle tools to extract and reload the data but it will reload the
same corrupt data that I was trying to fix.
I found the xwiki-packager-plugin, but it seems to have the same issues as
the import/export Admin function and seems to be limited to HSQLDB. At
least there is a method called shutdownHSQLDB and a comment about needing to
figure something out for other databases.
Is there some way to validate and fix Xwiki Group / User rights before I
extract using oracle? Is there some way to extract a portion of the data
stored in the hibernate managed tables? I would like to extract data for
each space without any users and groups. I could help on the
import/export if it can be fixed, but I'm not sure the need to hold all
the xwiki data in memory at once can be removed. Has another approach
been discussed?
Any suggestions on any of the above issues are welcome. I'd really like to
use Xwiki!
Thanks
Glenn Everitt
--
View this message in context: http://www.nabble.com/Import---Export-Options---Problems-with-rights-tp1657…
Sent from the XWiki- Dev mailing list archive at Nabble.com.
Hi,
I was wondering why my top menu doesn't have a Watch menu and found
this comment in menuview.vm:
## We're disabling the Watchlist menu for now since the Watchlist
doesn't work yet in multiwiki
## mode. Remove when http://jira.xwiki.org/jira/browse/XPWATCHLIST-4 is
fixed.
As far as I can see in Jira
(http://jira.xwiki.org/jira/browse/XPWATCHLIST-4), this issue was fixed
by Jean-Vincent in r7914.
Please, is there any reason to keep the entry filtered when working in
virtual mode, or could it be unfiltered already?
Thanks!
--
Ricardo Rodríguez
Your EPEC Network ICT Team
Hi everyone,
I don't know if you're like me but I have the strong feeling we should
be better at not introducing regressions. One of the goals of XE 1.3,
and now again of XE 1.4, was more stability and more automated tests.
Several potential problems are occurring now:
1) People using XWiki are expecting more stability when they upgrade.
This is due to the fact that XWiki is improving in general and more
people use it. It's expected that it'll just work, and that's a
reasonable expectation.
2) We've introduced several important regressions for the past *4*
releases (login/logout/RMUI/Escapes and more). We're following a bad
trend.
3) We've recently introduced several storage changes (Sergiu and
Artem) and I haven't seen tests that would prove what was working
before is still working. I'm not saying it isn't but last time we made
a change to the storage area it took us several months to stabilize it
and we cannot do that again.
4) We're committing more code than tests meaning the overall quality
of XWiki is degrading :(
Thus I'd like to propose that:
A) We become very very careful when committing things and we only
commit when we can *guarantee* that what we've done is working (with a
given level of confidence of course). This can only be achieved
through tests being committed at the same time as the code is committed
B) We stop putting things that are NOT critical in point releases. For
example, dangerous changes were made in the WYSIWYG editor for 1.3.1,
and I'm not confident this was a good thing, certainly not with so few
tests and verifications, since we know that whenever we touch that
editor we introduce problems elsewhere.
C) In general we reduce the number of changes that we commit and
instead we focus on tests and stability. This is indeed one of the
general goals for 1.4.
WDYT?
Thanks
-Vincent
Hello,
I have been looking for the code/script that handles the UI and its
functionality for the attachments section of an XWiki page.
These are the places I have looked at:
1) com.xpn.xwiki.web.AttachAction.java
2) com.xpn.xwiki.web.UploadAction.java
3) Fileupload plugin
4) XWiki Enterprise\webapps\xwiki\wiki_editor\plugins\attachments.js
5) HTML source code of any XWiki page !
The reason I am looking for it is to see how I can modify the code
to allow *selection of multiple files for upload at one time*!
In our application, users might be required to upload more than 10 files
at a time, and we do not want them to browse the file system for every file!
If someone can point out anything that might help, it would be great.
Thanks
Yes, it would be better to add any API you need to the current
CalendarPlugin.
There is already a function that generates an ics file, but from a
CalendarData object:
public String toICal(CalendarData data)
Ludovic
Evelina Slatineanu wrote:
> Hi,
>
>
>
> I've taken a closer look at the Calendar plugin in xwiki-platform-core and I
> believe that maybe it is not necessary to create a whole new plugin for
> generating ics files, but rather to include in the existing plugin a
> method for saving a calendar in .ics format. I await your opinions.
>
>
>
> Thanks.
>
> Evelina
>
>
>
> From: Evelina Slatineanu [mailto:evelina.slatineanu@xwiki.com]
> Sent: Tuesday, April 08, 2008 12:32 PM
> To: 'devs(a)xwiki.org'
> Subject: Request for iCal4j API
>
>
>
> Hello all,
>
>
>
> I have a situation in Chronopolys where I would like to generate an iCal for
> some deadline widgets I made. I spoke to JV and Sergiu and found out that
> there are some methods for parsing an iCal file in the Calendar plugin, but
> there is nothing to create a new iCal file. So, I would like to propose a
> generic xwiki plugin to both create and parse iCals. I can do this using
> http://ical4j.sourceforge.net/introduction.html
>
>
>
> Please tell me your opinions.
>
> Thanks,
>
>
>
> Evelina
>
>
>
> __________ Information from ESET NOD32 Antivirus, version of virus signature
> database 3008 (20080408) __________
>
> The message was checked by ESET NOD32 Antivirus.
>
> http://www.eset.com
>
>
>
> _______________________________________________
> devs mailing list
> devs(a)xwiki.org
> http://lists.xwiki.org/mailman/listinfo/devs
>
>
--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost