Hi Devs,
xwiki-webdav-plugin was developed to provide a means of calculating the WebDAV
URLs of XWiki documents and attachments. But as it turns out, for a given
document or attachment, calculating the WebDAV URL is simply a
string-manipulation operation. It doesn't require any support from the WebDAV
component (and thus the WebDAV plugin). So I'm thinking we should remove
the WebDAV plugin completely.
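To illustrate the claim that this is pure string manipulation, here is a minimal sketch; the base path and naming scheme below are assumptions for illustration, not the actual plugin's output:

```java
// Hypothetical sketch: building WebDAV URLs by plain string concatenation.
// The base path and layout are assumed for illustration only.
public class WebdavUrlSketch {
    private static final String BASE = "http://localhost:8080/xwiki/webdav/spaces";

    // URL of a wiki document, given its space and page name.
    public static String documentUrl(String space, String page) {
        return BASE + "/" + space + "/" + page;
    }

    // URL of an attachment is just the document URL plus the file name.
    public static String attachmentUrl(String space, String page, String fileName) {
        return documentUrl(space, page) + "/" + fileName;
    }

    public static void main(String[] args) {
        System.out.println(documentUrl("Main", "WebHome"));
        System.out.println(attachmentUrl("Main", "WebHome", "logo.png"));
    }
}
```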
WDYT ?
Thanks.
- Asiri
Hi,
There is a major discrepancy between the way programming rights are
evaluated for access to the privileged API from Velocity and the way the
right to execute Groovy code is evaluated.
In the API, the right to access a privileged API is evaluated against
the sdoc in the current context, which is the document that contains
the currently executed Velocity code. Here is the code, extracted from
the current RightService implementation, that does so:
public boolean hasProgrammingRights(XWikiContext context)
{
    XWikiDocument sdoc = (XWikiDocument) context.get("sdoc");
    if (sdoc == null)
        sdoc = context.getDoc();
    return hasProgrammingRights(sdoc, context);
}
In the XWikiGroovyRenderer, the right to render Groovy code is
evaluated against the content doc, which is the top-level document
that has directly or indirectly included the document containing
the evaluated Groovy code. Here is the excerpt from the code that makes
this check:
if (!context.getWiki().getRightService().hasProgrammingRights(contentdoc, context)) {
    return content;
}
These are two opposite approaches.
As a direct consequence, a sheet document (e.g. XWikiUserSheet) may be
developed in Velocity and use the privileged API, but it is
impossible to put a single line of Groovy in it, since the "instances"
of the template documents that include the sheet are usually created
by users without programming rights.
The debate is now: what is the original or current intent? Do you want:
1) to have a very secure situation, and therefore change the
hasProgrammingRights() implementation (but then why do we have an sdoc?), or
2) to have a flexible, more responsibility-based solution, where users with
programming rights should take care not to write risky code, and
therefore change the Groovy renderer to accept Groovy code based on
the contentdoc (the sdoc in that place), so Groovy and Velocity get the same
access level?
The current situation is not acceptable to me, since it is a mix of
1) and 2) depending on the language, and there is no reason the
language should influence the rights you have.
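To make the discrepancy concrete, here is a simplified stand-alone model of the two policies described above; all names are hypothetical, this is not the actual XWiki code:

```java
// Simplified, hypothetical model contrasting the two policies: the Velocity
// API checks the sdoc (the document containing the script), while the
// Groovy renderer checks the contentdoc (the top-level including document).
public class RightsPolicySketch {
    static class Doc {
        final String name;
        final boolean authorHasProgramming;
        Doc(String name, boolean pr) { this.name = name; this.authorHasProgramming = pr; }
    }

    // Policy used for the privileged Velocity API: look at the script's own doc.
    static boolean velocityCheck(Doc sdoc, Doc contentdoc) {
        return sdoc.authorHasProgramming;
    }

    // Policy used by the Groovy renderer: look at the top-level content doc.
    static boolean groovyCheck(Doc sdoc, Doc contentdoc) {
        return contentdoc.authorHasProgramming;
    }

    public static void main(String[] args) {
        Doc sheet = new Doc("XWiki.XWikiUserSheet", true);  // written by an admin
        Doc instance = new Doc("XWiki.JohnDoe", false);     // created by a plain user

        // Same sheet, same inclusion chain, opposite answers:
        System.out.println(velocityCheck(sheet, instance)); // true
        System.out.println(groovyCheck(sheet, instance));   // false
    }
}
```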
Thanks for your comments...
Denis
--
Denis Gervalle
SOFTEC sa
http://www.softec.st
Hi all,
I have a question about XWiki table usage. When I have a table and try
to split one cell into several cells, or merge several cells into one, I find
that this doesn't work: when I click the save button, the saved table is
not what I want. Does anybody know how to solve this issue, or has anyone
had the same problem? Does XWiki support this syntax? If so, who has the
XWiki script for the table?
Thanks
Steven
Hi !
I recently installed the 1.6 version of XWiki and found that a new
functionality allows adding logical groups to a group's member list:
great! This recursive inclusion is properly handled in global rights (like
allowing a global group to access a space), but it does not seem to be
covered by the Velocity XWiki API. For example, I have one client that
belongs to its company group XWiki.MyClientCompanyGroup, which I added to
the group XWiki.Clients. I have a dedicated part of my menu for clients,
and I test whether the current user is a client with
`$xwiki.user.isUserInGroup("XWiki.Clients")`, which always returns false :(
Maybe a special macro could help... anyone has a clue ?
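For what it's worth, the kind of transitive membership walk the API seems to be missing can be sketched as follows; this is a plain-Java stand-in with hypothetical names, not the actual XWiki API:

```java
import java.util.*;

// Hypothetical sketch of a recursive group-membership check: walk group
// membership transitively instead of only looking at direct members.
public class RecursiveGroupSketch {
    // groupName -> direct members (users or sub-groups); stand-in for wiki data.
    static Map<String, Set<String>> members = new HashMap<>();

    static boolean isUserInGroup(String user, String group, Set<String> seen) {
        if (!seen.add(group)) return false;  // guard against group cycles
        Set<String> direct = members.getOrDefault(group, Collections.emptySet());
        if (direct.contains(user)) return true;
        for (String m : direct) {
            // Any member that is itself a group is searched recursively.
            if (members.containsKey(m) && isUserInGroup(user, m, seen)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        members.put("XWiki.Clients",
            new HashSet<>(Arrays.asList("XWiki.MyClientCompanyGroup")));
        members.put("XWiki.MyClientCompanyGroup",
            new HashSet<>(Arrays.asList("XWiki.SomeClient")));
        // A direct-members-only lookup would miss this; the transitive walk finds it:
        System.out.println(isUserInGroup("XWiki.SomeClient", "XWiki.Clients", new HashSet<>()));
    }
}
```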
--
Hoani CROSS
Globotraders Tahiti Founder [http://globotraders-tahiti.com]
Dear all,
I would like to release XEclipse soon. During the last days I've fixed
many of the outstanding bugs, thanks also to the effective collaboration
of Eduard Moraru, who provided a lot of patches, and I think that
XEclipse is now pretty stable, usable and functional.
This release incorporates a lot of features that have been implemented
during the last months: in particular, better integration with the
Eclipse ecosystem, an object editor, translation management, page history,
and syntax highlighting and completion (thanks to the work done by Malaka
during the last GSoC, and by Venkatesh), plus many other improvements.
There are still many things to do, but I think it's a priority to
have a new version out asap in order to get more feedback from users
and to build upon it.
Due to some previous engagements, I think I will be able to release it on
Friday or during the weekend.
WDYT?
-Fabio
Hello everybody,
Right now with Skin eXtensions we can hit "jsx" and "ssx" action URLs to
retrieve JavaScript and CSS files that come either from XObjects (the
original SX idea) or from JAR resources (since SX 1.3). I'm playing with
the idea of adding a new action that would return text/html, possibly
available as a JAR resource extension _only_. I explain below how I came
to this need.
This morning I gave some thought to the behavior the {{map}} macro
should have when the Internet is not reachable but the wiki is (any
resemblance to actual facts is... not coincidental ;)). The only
alternative I found that doesn't block the whole page load until the
HTTP request gives up trying to get the library from the service
provider (Google, Yahoo, etc.) is to have the library loaded in
an iframe (and then display the map in that iframe, too). For this to
happen, however, I need to pass to that iframe a URL that serves the
needed HTML (a <script> tag for the library, a <script> to consume the
library, and probably a <div> container for the map itself, etc.). Since
I work under the constraint of being able to publish the macro as a JAR only,
I would like this HTML file to be a resource file of my macro
(otherwise, a simple wiki page called with "xpage=plain" would obviously
do the trick).
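As a sketch, the payload served to that iframe would look something like the following; the URLs and ids here are purely illustrative assumptions:

```java
// Hypothetical sketch of the HTML payload the proposed action would serve
// to the iframe: a <script> tag for the external library, a consumer
// <script>, and a <div> container for the map. All names are illustrative.
public class MapIframeSketch {
    static String iframeHtml(String libraryUrl) {
        return "<html><head>"
            + "<script src=\"" + libraryUrl + "\"></script>"
            + "<script>/* initialize the map once the library has loaded */</script>"
            + "</head><body><div id=\"map\"></div></body></html>";
    }

    public static void main(String[] args) {
        System.out.println(iframeHtml("http://maps.example.com/api.js"));
    }
}
```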
Since ssx and jsx return a content type matching the resource they
provide, we cannot use them for this purpose (and it would be a hackish
use anyway). Thus I propose we add a new action, "hsx" for example, that
returns "text/html" content. This extension does not make much sense as a
document extension (since we don't know where to inject the result,
unless the new XClass has an "extension point" field, but then it somehow
becomes an IX class), so it would be available only as a JAR resource
extension.
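In other words, each SX action is tied to a single content type, which is why HTML needs its own entry point. A toy dispatch table (hypothetical, not the actual SX code) makes the point:

```java
// Toy illustration of one-action-one-content-type: jsx and ssx are bound to
// JavaScript and CSS respectively, so HTML needs a new action ("hsx" here).
// This is a hypothetical sketch, not the real Skin eXtensions code.
public class SxContentTypeSketch {
    static String contentTypeFor(String action) {
        switch (action) {
            case "jsx": return "text/javascript";
            case "ssx": return "text/css";
            case "hsx": return "text/html";  // the proposed new action
            default: throw new IllegalArgumentException("unknown action: " + action);
        }
    }

    public static void main(String[] args) {
        System.out.println(contentTypeFor("hsx"));
    }
}
```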
Did I miss something there ?
Sergiu what do you think ? Others ?
Jerome.
Hi,
One refactoring idea: we could probably replace our Block
implementation with the JCR API.
This would allow us to benefit from already existing code (Jackrabbit,
for example), such as the query manager, navigation in the XDOM tree, etc.
Pros:
- less code to write
- more powerful API
- standard-based
Cons:
- need to rewrite our Block implementation
- more external dependencies (Jackrabbit and whatever it pulls in)
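As a rough illustration of what JCR-style, path-based navigation over the XDOM could look like, here is a heavily simplified, hypothetical stand-in; the real javax.jcr API adds sessions, workspaces, properties, queries, and more:

```java
import java.util.*;

// Heavily simplified, hypothetical sketch of JCR-style navigation over an
// XDOM-like tree: nodes addressed by slash-separated paths, as with
// javax.jcr.Node.getNode(String relPath). Not the real JCR API.
public class JcrStyleBlockSketch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }

        Node addNode(String childName) {
            Node c = new Node(childName);
            children.add(c);
            return c;
        }

        // Path navigation, e.g. root.getNode("paragraph/word").
        Node getNode(String path) {
            Node current = this;
            for (String part : path.split("/")) {
                Node next = null;
                for (Node c : current.children) {
                    if (c.name.equals(part)) { next = c; break; }
                }
                if (next == null) throw new NoSuchElementException(part);
                current = next;
            }
            return current;
        }
    }

    public static void main(String[] args) {
        Node xdom = new Node("xdom");
        xdom.addNode("paragraph").addNode("word");
        System.out.println(xdom.getNode("paragraph/word").name);
    }
}
```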
WDYT?
Thanks
-Vincent
Hello Devs,
In the WebDAV view of XWiki, a wiki page (document) is represented by a WebDAV
collection resource (somewhat like a directory). This collection resource is
comprised of wiki.txt, wiki.xml, attachments, and child pages. As you can see,
there is a mismatch between these two concepts: without wiki.txt and wiki.xml
(the children), a wiki page (the parent) cannot exist, but it is the other way
around in a filesystem-like structure.
So a question arises: in WebDAV space, what does it mean to delete a wiki
page?
1. We can treat deleting wiki.txt and/or wiki.xml as deleting the whole
page. But deleting the parent when only the children were asked to be
deleted is wrong, and most WebDAV clients go insane.
2. We can prohibit wiki.txt and wiki.xml from being deleted (i.e. throw an
exception). But this would prevent users from deleting pages on some
clients, because they use a recursive delete procedure (i.e. delete the
children first, then delete the parent). When deleting the children fails,
the whole operation fails.
3. We can trick the clients into believing that the delete was a success
(report OK) and continue as if nothing went wrong. This way a recursive
delete will succeed. But if the client re-requests the resource just to make
sure the delete was a success, the deleted wiki.xml and wiki.txt will appear
again. Some clients indeed behave like this, and they fail. This is the
approach we have currently taken.
4. We can put wiki.txt and wiki.xml inside the HttpSession object the first
time we initialize a particular resource. This way, deletes will succeed,
and even if the client re-validates the delete, those child resources will
not appear again. This approach seems to prevent all of the above-mentioned
problems, but again, we're using a hack. Also, it implies that a page (DAV
collection) can exist without wiki.txt and wiki.xml.
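The bookkeeping for option 4 can be sketched as follows; this uses a plain-Java stand-in for the HttpSession, and all names are hypothetical:

```java
import java.util.*;

// Hypothetical sketch of option 4: remember per-session which virtual child
// resources (wiki.txt / wiki.xml) were "deleted", and filter them out of
// subsequent listings, while the page (the DAV collection) itself survives.
public class SessionDeleteSketch {
    // Stand-in for attributes that would live in the HttpSession.
    static Set<String> deletedInSession = new HashSet<>();

    static boolean delete(String resourcePath) {
        // Report success to the client without touching the actual wiki page.
        deletedInSession.add(resourcePath);
        return true;
    }

    static List<String> listChildren(String pagePath) {
        List<String> children = new ArrayList<>(Arrays.asList(
            pagePath + "/wiki.txt", pagePath + "/wiki.xml"));
        // Hide children this session has "deleted", so re-validation succeeds.
        children.removeIf(deletedInSession::contains);
        return children;
    }

    public static void main(String[] args) {
        delete("Main/WebHome/wiki.txt");
        // A re-listing after the delete no longer shows wiki.txt:
        System.out.println(listChildren("Main/WebHome"));
    }
}
```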
I'm voting in favour of the 4th option. Do you see any other way to get
around this mismatch?
Thanks.
- Asiri