Hi folks,
I decided to do an analysis of exactly how permissions are checked so we can brainstorm about where the low-hanging fruit is in terms of optimization.
0. Check that the user is authenticated
1. If user is attempting write operation on read-only wiki, return false.
2. If user is a guest, get XWikiPreferences for the wiki and do 4 checks on the XWikiPreferences object to see if the action requires authentication.
3. If action is "delete", get current document and check if current user created that document.
4. Determine if user is superadmin
5. Load DocumentReferenceResolver from ComponentManager and normalize username then compare to superadmin name.
6. Check for admin status on main wiki
7. Get XWikiPreferences document for main wiki
8. scan for rights objects which allow the user "admin" permission, return if found
9. get list of all groups which user is a member of, repeat check for each, returning if found
10. If checking permission for "programming" then repeat from step 7 with "programming" as permission.
11. If the permission being checked is programming, return the result even if false.
12. If in a subwiki, get the XWikiServer<subwiki> document and return if the current user is the owner.
13. If checking for register permission then get the XWikiPreferences document for the wiki.
14. scan for rights objects which allow the user, return if found
15. get list of all groups which user is a member of, repeat check for each, returning if found
16. if no match is found, check for deny rights on the user, return if found
17. get list of all groups which user is a member of, repeat check for each, returning if found
18. Get XWikiPreferences from wiki
19. scan for rights objects which allow the user "admin" permission, return if found.
20. get list of all groups which user is a member of, repeat check for each, returning if found
21. Get WebPreferences for space where document to check permissions resides.
22. scan for rights objects which allow the user "admin" permission, return if found.
23. get list of all groups which user is a member of, repeat check for each, returning if found
24. if the WebPreferences document specifies another Space as its "parent" then change to that space and return to step 21.
25. Get the document to check permission on
26. scan for rights objects which deny the user, return if found
27. get list of all groups which user is a member of, repeat check for each, returning if found
28. scan for rights objects which allow the user, return if found
29. get list of all groups which user is a member of, repeat check for each, returning if found
30. Get the WebPreferences document for the space where document to check permission on resides.
31. scan for rights objects which deny the user, return if found
32. get list of all groups which user is a member of, repeat check for each, returning if found.
33. scan for rights objects which allow the user, return if found
34. get list of all groups which user is a member of, repeat check for each, returning if found.
35. Check if the WebPreferences document specifies a "parent space" if so, change to that space and go to step 30.
36. Get the XWikiPreferences document for the wiki
37. scan for rights objects which deny the user, return if found
38. get list of all groups which user is a member of, repeat check for each, returning if found
39. scan for rights objects which allow the user, return if found
40. get list of all groups which user is a member of, repeat check for each, returning if found
Getting the list of groups which a user is a member of is relatively quick after the first run since it is cached.
After being loaded it is stored in the context but the implementation doesn't trust the version in the context and gets it again. It could be a little bit faster by just assuming that anything in the context is still valid.
If a group is a member of another group then the checks will be repeated twice each time.
One thought that comes to mind is that we could check permissions on a user and all of the groups of which he is a member, simultaneously. This would reduce the number of scans over the list of objects in a given document and since this is such a hotspot, I think optimizations like this might pay off.
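The single-pass idea can be sketched roughly as follows; all class and method names here are illustrative stand-ins, not the actual XWikiRightService API:

```java
import java.util.List;
import java.util.Set;

// Sketch of checking a user and all of their groups in one pass over a
// document's rights objects, instead of re-scanning the object list once per
// group. Every type here is an illustrative stand-in, not the XWiki API.
public class SinglePassRightsCheck {
    // Minimal stand-in for an XWiki rights object.
    public static class RightsObject {
        final Set<String> subjects; // users and groups named by this object
        final String level;         // e.g. "view", "edit", "admin"
        final boolean allow;        // allow or deny

        public RightsObject(Set<String> subjects, String level, boolean allow) {
            this.subjects = subjects;
            this.level = level;
            this.allow = allow;
        }
    }

    // Returns TRUE on an allow match, FALSE on a deny match, or null when no
    // rights object mentions the user or any of the user's groups.
    public static Boolean check(List<RightsObject> objects, String user,
            Set<String> groups, String level) {
        Boolean result = null;
        for (RightsObject o : objects) {
            if (!o.level.equals(level)) {
                continue;
            }
            boolean matches = o.subjects.contains(user);
            for (String group : groups) {
                if (matches) {
                    break;
                }
                matches = o.subjects.contains(group);
            }
            if (matches) {
                if (!o.allow) {
                    return Boolean.FALSE; // deny wins immediately
                }
                result = Boolean.TRUE;
            }
        }
        return result;
    }
}
```

The point is that each rights object is inspected exactly once per document, whatever the number of groups, rather than once for the user plus once per group.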
Documents which need to be loaded include:
The XWikiPreferences for the main wiki
XWikiPreferences for the subwiki (if applicable)
WebPreferences for the space (and any space listed as a parent)
The document which we are checking
If in a subwiki, the XWikiServer<subwiki> document
This means that, for any reasonable level of performance, all documents other than the document we are checking need to remain in the cache perpetually.
Since the vast majority of checks are for things like view and edit with users who don't have programming or admin permission, it makes sense to rearrange the checks so that the most common path executes the fastest. This would be a major project though.
Any other thoughts and ideas about it?
Caleb
Hi Guys,
I need a fresh install of XWiki Enterprise Manager.
Drop me a line if you're interested.
David Henry
Modus
18 Exchange Street Upper, Temple Bar, Dublin 2, Ireland.
e: david(a)modus.ie | p: +353 1 677 9745 | m: +353 86 8300943
Hi devs,
I'm asking for your vote to include Workspaces [1] as an extension that
would be part of the platform for the 3.2 release. It would not be bundled
by default just yet, but it will be available to install as an extension
using the Extension Manager.
This vote is related to the previous mail [2] I've sent about adding some
hooks in the platform specifically for the workspaces feature. The
difference now is that the feature would no longer be an "external"
extension but it would be supported and maintained by XWiki devs (me). This
means accepting the technical debt introduced by the proposed hooks [2] and
making it my responsibility to maintain the application and ensure the hooks
are gradually removed from the platform when cleaner alternatives are
introduced.
An overview of the workspaces feature is available in the project preview
mail [3] (or even [4]) I've sent some time ago that included the live demo
[5].
Thanks,
Eduard
References:
-----------------
[1] https://github.com/xwiki-contrib/wiki30/tree/master/wiki30-social
[2] http://xwiki.markmail.org/thread/qgjcehgmqpkbwbvw
[3] http://xwiki.markmail.org/thread/223ensce4tikygmz
[4] http://xwiki.markmail.org/thread/3cbms22ctbqi4yqp
On Fri, Sep 16, 2011 at 18:04, Sergiu Dumitriu <sergiu(a)xwiki.com> wrote:
> On 09/16/2011 10:04 AM, Denis Gervalle wrote:
> > Hi Devs,
> >
> > The last database migrator is very old now; it was on revision 15428 of
> > our SVN repository.
> > The rule at that time was to use the revision number of the SVN commit
> > for the database version.
> > So our database is currently at version 15428.
> >
> > Since we no longer have revision numbers in Git, and the database
> > version should be an integer, we need to vote on a new convention:
> >
> > A) continue from where we are, incrementing by one, so the next version
> > will be 15429
> >
> > B) use 16000, or another round number, for the next revision and
> > increment by one for the next version
> >
> > C) use a mix with the current XWiki version, so the next will be 32000,
> > and we have room for 1000 versions during the 3.2 releases.
>
> D) Count the number of git commits on the trunk, with:
>
> git log --oneline | wc -l
>
> This would give a number equivalent to the SVN revision number.
>
Well, not really, it only follows a single branch of commits, while SVN
revisions are global to a repository.
> > Personally, since database changes are really rare, since we were
> > already jumping, and since there is plenty of room for numbers, I prefer
> > meaningful numbers and I prefer C. The major advantage is that the
> > number is in the database, so if you have a DB dump, you may quickly
> > know the oldest version this dump is compatible with without needing
> > some reference list.
> >
> > So my +1 for C.
>
> I prefer to use something more stable, and C) looks like the better
> option for me as well.
>
So we agree on C), with four +1, no 0, no -1.
>
> --
> Sergiu Dumitriu
> http://purl.org/net/sergiu/
> _______________________________________________
> devs mailing list
> devs(a)xwiki.org
> http://lists.xwiki.org/mailman/listinfo/devs
>
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi devs,
After some analysis of XWiki startup, there are many issues in the way the DB
schema is updated and migrators are executed (see XWIKI-1859, XWIKI-2075
and XWIKI-2066). There is no consistency between them, which really seems
inappropriate.
The schema is updated in many different places (unless disabled by config):
- when UpdateDatabase is called for the first time; this one being called
anytime we access a sub-wiki in a farm
- when CheckHibernate is called and no session factory exists, this one
being called in many places and before any transaction based requests
- before a migration of a sub-wiki db
The update schema function itself has some issues as well:
- it is a synchronized function and the config check is done after
synchronization
- apart from the schema update done by the generated Hibernate script, it
does some updates on data in the database like a migrator, but it does so
even when there is no need for an update, as opposed to a migrator
- these updates are written in plain SQL, not HQL, and are partly MySQL
oriented
- it uses a different parameter than the migrators in xwiki.cfg to be
disabled, so it is possible to have migration without having the correct schema
On the other hand, migrators are launched only on initial startup, as opposed
to the lazy updating of the schema. Since we use Struts, it was not easy to
have them launched before the first request, but I do not see this as an
issue since any admin will at least do such a request, and a request would
not be accepted earlier anyway. The issues with migrators are:
- migrators are executed on new XWiki databases. In particular, when you
create a new wiki in a farm, no version is set in the DB, and the next time
the servlet container is restarted, all migrators are executed on this new
wiki database.
- migrators may fail and the wiki starts anyway (I already have a patch for
that)
- migrators can be disabled, but there is no guarantee that a database is
at the correct version for the running core. This is particularly annoying
for my migration, since old ids could be kept, and document loading will
fail for mandatory documents, so these will be recreated, producing many
duplicated documents in the database, and therefore corrupting all accessed
wikis that have not been migrated.
So, I propose:
1) to join together schema update and migration, using xwiki.store.migration
(default: 0) && xwiki.store.hibernate.updateschema (deprecated, default:
true already) && xwiki.store.migration.databases (default: main wiki only)
for enabling them
2) to convert the migration-like requests done in the schema update into a
very old migrator (before the first one)
3) while still allowing migrators (and therefore schema updates) to be
disabled, to keep a lazy check on the database version required by the
running core:
a) fail to return a given wiki if its database is outdated
b) OR do a lazy update if migration is enabled
4) to store the current DB version of the core in the database when creating
new wikis
5) to consider a DB without a version and without an xwikidoc table to be
empty and at the current version
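Point 3) could look roughly like this; every name here is an illustrative placeholder, not the actual migration manager API:

```java
// Sketch of the lazy version check in point 3): before returning a wiki,
// compare the version stored in its database with the version required by
// the running core. Names are illustrative, not the real migration API.
public class DbVersionGuard {
    public enum Outcome { UP_TO_DATE, MIGRATE_LAZILY, REFUSE }

    private final int coreDbVersion; // DB version required by the core

    public DbVersionGuard(int coreDbVersion) {
        this.coreDbVersion = coreDbVersion;
    }

    // Decide what to do with a wiki whose database reports dbVersion.
    public Outcome check(int dbVersion, boolean migrationEnabled) {
        if (dbVersion >= coreDbVersion) {
            return Outcome.UP_TO_DATE;
        }
        // 3)b): lazily migrate when migration is enabled...
        // 3)a): ...otherwise fail to return the outdated wiki.
        return migrationEnabled ? Outcome.MIGRATE_LAZILY : Outcome.REFUSE;
    }
}
```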
For this to be implemented, I need to know the current core DB version at
startup. I see some ways to do that:
A) store it in a constant in a static in the XWiki class, putting
the responsibility on migration developers to update it
B) compute this value by instantiating all migrators to know their
resulting DB version
a) and store it in the XWiki singleton
b) and cache it in the migration manager, keeping it available from
the current XWiki singleton, for potential lazy migration
I am +1 for most of these, except 3)b), for which I am not sure it is fully
safe, and which implies B)b). I am undecided regarding A) or B)a), with a
preference for the second, but wondering whether it should
consider xwiki.store.migration.ignored or not (probably not).
WDYT ?
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi Devs,
Since I ran into this well-known XWiki caveat recently, I would like to
improve this.
Currently, XWikiDocument.getId() is almost equivalent to
String.hashCode(fullName:lang). Since this is a really poor hashing method
for small changes, the risk that two documents with similar names of the same
length collide is really high. This id is used by the Hibernate store
for document unicity and really needs to be unique, and at most a
64-bit numeric at the same time. Currently we use only 32 bits since Java
hash codes are limited to 32-bit integers.
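To illustrate how fragile String.hashCode() is here: "Aa" and "BB" hash to the same value, and since the hash is positional, any two document names that differ only by such a pair collide as well (the document names below are made up for the demo):

```java
public class HashCollisionDemo {
    public static void main(String[] args) {
        // The classic String.hashCode() collision pair.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true
        // The hash is linear in the characters, so the collision survives
        // any shared prefix, e.g. two similarly named documents:
        System.out.println("Main.PageAa".hashCode() == "Main.PageBB".hashCode()); // true
    }
}
```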
The ideal would be not to have this link between ids and documents'
fullName:lang, but converting the current implementation is not really easy.
This is probably why XWIKI-4396 has never been fixed. Therefore, my current
goal is to reduce the likelihood of a collision by choosing a better hashing
method, taking into account the fact that document full names are short
strings and that the number of unique ids required is very limited (since
unicity is only expected within a given XWiki database) compared to the
64-bit integer range.
So we need to choose a better algorithm, and here are IMHO the potential
options:
A) use a simple but more efficient non-cryptographic hashing function that
runs on 64 bits. I was thinking about using the algorithm produced by
Professor Daniel J. Bernstein (DJB), since it is a well-known, widely used,
easy-to-implement algorithm with a good distribution on small strings.
Pros: no dependency; fast; 64 bits, better than hashCode
Cons: probably more risk of collision compared to MD5 or SHA, but far less
than now; requires a DB migration of all document keys
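For reference, a 64-bit variant of the DJB hash is tiny; this is a generic sketch, not the actual getId() change:

```java
// Generic sketch of option A): the DJB2 hash computed on 64 bits.
// Illustrative only; not the actual XWikiDocument.getId() implementation.
public class Djb2Hash {
    public static long hash(String key) {
        long h = 5381;
        for (int i = 0; i < key.length(); i++) {
            h = (h << 5) + h + key.charAt(i); // h * 33 + c
        }
        return h;
    }
}
```

Note that it separates the classic String.hashCode() collision pairs ("Aa" vs "BB"), although being non-cryptographic it still only reduces, not eliminates, the collision risk.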
B) use MD5 or an even stronger SHA-1 or SHA-256 algorithm from the JCA,
truncating to the lower 64 bits. Note that oldcore already uses MD5 for
hashing a whole XWikiDocument to provide the API with
getVersionHashcode(), and for the validation hash used by the persistent
login manager. The first uses Object.hashCode() as a fallback, which is
really bad and defeats the purpose. The second does not provide any
fallback and may fail unexpectedly. For our case, if we really want a
fallback, we need to store the hashing algorithm used in the database at
creation time, and anyway fail when it is not available.
Pros: already used in oldcore; probably fewer collisions; with a fallback,
really flexible, since it would be possible to choose the algorithm at
creation time, and it does not require a full migration of existing databases.
Cons: requires at least a DB schema change to add the hashing algorithm,
probably as a column of xwikidbversion; if this config value is broken, the
whole DB is broken
C) use our own MD5 implementation when the JCA provider is missing it. I was
thinking about integrating something like
http://twmacinta.com/myjava/fast_md5.php (non-native version), which is LGPL.
This will ensure the availability of the hashing algorithm while having a
rather strong one.
Pros: no dependency; could also provide MD5 to getVersionHashcode() and the
persistent login manager
Cons: requires a DB migration of all document keys
A) is really quick to implement, simple, and the least risky, but some may
find it insufficient. Caleb?
Obviously, B) with a fallback is really nice, but I wonder if it is not
overkill.
I am worried about B) without a fallback, but maybe I want it too flawless.
C) is rather solid, while staying simple, but maybe overkill too.
I am really undecided.
WDYT ?
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi devs,
As part for the 3.2 Roadmap, the plan for the workspaces feature was to add
some hooks into the platform that could accept a workspaces extension if an
admin decided to install it.
Without adding these hooks, there currently isn't any mechanism (like
Interface Extensions, but not limited to that) that allows a simple
application to modify whatever it wishes (like user profile sections,
administration sections, top menu, etc.) so I went ahead and added some code
into the platform that executes only when the workspaces extension (wiki
pages and component/service) is installed.
I've created http://jira.xwiki.org/browse/XWIKI-6991 with some details about
what I have done and made a pull request at
https://github.com/xwiki/xwiki-platform/pull/24 since I did not want to rush
at applying the changes without running them by you guys.
I've broken the issue down into subtasks with separate commits to make the
review easier.
There currently is a demo server for the workspaces feature at
http://wiki30-demo.xwiki.com but I will have to update it tomorrow with the
latest version. Not much changed, you can see the visible changes in the
specific jira subtasks (screenshots).
The goal would be for this to make it into 3.2 so that people could then
install (the soon to be released) workspaces extension and try it out.
Please take some time, if possible, to look over the proposed changes and
spot any problems.
Thanks,
Eduard
Hello,
Using XEM 3.1, in only one of the wikis (the others are fine) I can't get
the Activity Stream working. The log shows this error every time I go to the
WebHome page:
ERROR o.x.v.i.DefaultVelocityEngine - Left side ($events.size()) of '>'
operation has null value at unknown namespace[line 820, column 22]
If I remove the Activity Stream gadget it stops showing this error. I tried
reinstalling Activity and the XE 3.1.xar, but got the same result.
Best Regards,
Ivan
--
View this message in context: http://xwiki.475771.n2.nabble.com/Activity-Stream-not-display-activity-tp68…
Sent from the XWiki- Dev mailing list archive at Nabble.com.
3 +1 and one 0, done.
On Tue, Sep 20, 2011 at 2:32 PM, Thomas Mortagne
<thomas.mortagne(a)xwiki.com> wrote:
> Hi devs,
>
> We never use it AFAIK so I propose to remove it from default XE distribution.
>
> XE is pretty big right now so would be cool to reduce its size when possible.
>
> WDYT ?
>
> --
> Thomas Mortagne
>
--
Thomas Mortagne
Hi devs,
We never use it AFAIK so I propose to remove it from default XE distribution.
XE is pretty big right now so would be cool to reduce its size when possible.
WDYT ?
--
Thomas Mortagne
Hi devs,
We're doing badly with our release schedule for 3.2 ATM. We're already late by one week and we're still lagging. Outstanding blockers:
- recent discussions with new Sheet module strategy for page naming conventions. (Owner: Marius)
- default permanent storage directory location to finish (Owner: Thomas?)
- lucene improvements not committed yet (Owner: Sergiu)
- failing functional tests to fix (Owner: everyone)
Thus I propose that:
- we don't add new stuff to master except bug fixes and commits related to the issues above
- we push the 3.2M3 release to Monday 26 Sep. This means being ready to release this Thursday 22nd so that we can be sure we'll release on Monday 26.
- we push the 3.2RC1 release to Monday 3rd of October (one week after M3 only)
- 3.2Final stays on the 10th of October.
WDYT?
Thanks
-Vincent
Hi devs,
Thomas just told me that he's made a change for Extension Manager (apparently there was a vote for it and I missed it - I can't find it so if anyone has the link please point me to it) and that by default now the Extension Manager uses the temporary directory to store installed extensions (Before it was using ~/.xwiki).
I thus want to throw my -1 to releasing 3.2 final with this (I'd also much prefer that 3.2M3 not have it, as much as possible). The reason is that the tmp/ directory can get wiped at any time, so users can suddenly lose all their installed extensions. I believe we need a permanent location for that.
We have 2 general options IMO:
1) Don't start xwiki if the work directory is not explicitly configured
2) Make the default EM work directory be the same as before (ie ~/.xwiki), when the work dir config property is not defined
I also want to propose that for the standalone distribution of XE (the jetty/hsqldb package) we use work/ as the work directory. We already create this directory and we should use it (it's already used by our lucene indexing BTW).
WDYT?
Thanks
-Vincent
Hi devs,
Here's the sheet resolution algorithm I have implemented in
https://github.com/xwiki/xwiki-platform/blob/cbad868094b9cc9d811621670e3205…
.
getSheets(document, action) : List<DocumentReference>
(1) If there is a "sheet" request parameter that points to a sheet
that supports the passed action, return a reference to that sheet.
(2) For each object of type XWiki.DocumentSheetBinding attached to the
passed document check if the "sheet" property points to a sheet that
supports the passed action. Return the computed list, if not empty.
(3) For each type of object attached to the passed document:
(3.1) For each object of type XWiki.ClassSheetBinding attached to
the document defining the class check if the "sheet" property points
to a sheet that supports the passed action.
(3.2) If the class doesn't have any XWiki.ClassSheetBinding object
then check if <ClassName><ActionName>Sheet exists
(3.3) If not, then check if <ClassName>Sheet exists and supports
the passed action
(3.4) If not, then check if "sheet.defaultClassSheetBinding"
configuration property specifies a sheet for the class we're looking
at
Return the computed list if not empty.
(4) If the passed document is a class (holds a class definition) then
return a reference to the default class sheet (XWiki.ClassSheet) if it
supports the passed action.
Otherwise return an empty list.
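For readability, here is the same resolution order as a condensed, self-contained sketch; all types and helpers are simplified stand-ins for the real sheet module, and steps 3.1-3.4 (the binding plus naming-convention fallbacks) are collapsed into a single lookup:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

// Condensed sketch of the resolution order above; every type and helper is a
// simplified stand-in, not the real sheet module API.
public class SheetResolverSketch {
    // A document reduced to what the algorithm needs.
    public static class Doc {
        List<String> sheetBindings = new ArrayList<>(); // step (2)
        List<String> objectClasses = new ArrayList<>(); // step (3)
        boolean isClass;                                // step (4)
    }

    public static List<String> getSheets(Doc doc, String action,
            String requestSheet, Map<String, List<String>> classSheetBindings,
            BiPredicate<String, String> supports) {
        // (1) An explicit "sheet" request parameter wins.
        if (requestSheet != null && supports.test(requestSheet, action)) {
            return List.of(requestSheet);
        }
        // (2) Sheets bound directly to the document.
        List<String> sheets = new ArrayList<>();
        for (String sheet : doc.sheetBindings) {
            if (supports.test(sheet, action)) {
                sheets.add(sheet);
            }
        }
        if (!sheets.isEmpty()) {
            return sheets;
        }
        // (3) Sheets bound to the classes of the document's objects
        // (steps 3.1-3.4 collapsed into one binding lookup here).
        for (String xclass : doc.objectClasses) {
            for (String sheet : classSheetBindings.getOrDefault(xclass, List.of())) {
                if (supports.test(sheet, action)) {
                    sheets.add(sheet);
                }
            }
        }
        if (!sheets.isEmpty()) {
            return sheets;
        }
        // (4) Class documents fall back to the default class sheet.
        if (doc.isClass && supports.test("XWiki.ClassSheet", action)) {
            return List.of("XWiki.ClassSheet");
        }
        return List.of();
    }
}
```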
I'd like you to vote on each of the steps of the algorithm. Are there
steps you think I should drop?
I'm obviously +1 for each of the steps.
Thanks,
Marius
Hi devs,
One of the features of the new sheet system is that it can
automatically detect and apply sheets. For instance, if you have:
Space.YourClass (no objects)
Space.YourClassSheet (no objects)
Space.YourDocument (with an object of type YourClass and no include
macro call to YourSheet in the content)
YourDocument will be automatically displayed with YourSheet. This
introduces a backward compatibility problem because the new sheet
system will be triggered for existing XWiki applications that haven't
been updated to use the new system. One way to prevent this is to
trigger the new system only if XWikiDocument#getDefaultEditMode()
returns "edit" (it should return "inline" for current XWiki
applications). XWikiDocument#getDefaultEditMode() uses two classes,
EditModeClass and SheetClass, to determine the default edit mode.
EditModeClass was recently introduced so most of the current XWiki
applications are still using the deprecated SheetClass.
In the new sheet system I needed a class to describe a sheet and I
opted for reusing the name of an existing class, SheetClass, because
IMO it is the best name for a class describing a sheet. As a
consequence the current implementation of
XWikiDocument#getDefaultEditMode() will return "inline" if you try to
include a sheet in the content of a document. This limits the new
sheet system because you should be able to do anything with the
document content, even including another sheet.
I talked with Vincent about this and we found three solutions:
(1) Change XWikiDocument#getDefaultEditMode() to use only the
EditModeClass. This breaks backward compatibility because current
applications have to update their sheets to use EditModeClass instead
of SheetClass (if they want to keep the old behavior)
(2) Use XWikiDocument#getDefaultEditMode() only if a backward
compatibility configuration flag is set. This means that the
limitation I mentioned above (can't include a sheet in the document
content) will be true only if you want backward compatibility.
(3) Use a different name for the class describing the sheet, e.g.
SheetDescriptorClass. This way XWikiDocument#getDefaultEditMode() will
keep returning "inline" for applications that are still using the
deprecated SheetClass.
I'm going to implement (3) if no one is against it.
Thanks,
Marius
10 +1 and nobody against it, the vote passes.
Welcome aboard, Eduard!
On Wed, Sep 14, 2011 at 4:44 PM, Thomas Mortagne
<thomas.mortagne(a)xwiki.com> wrote:
> Hi committers,
>
> I would like to propose a new committer on https://github.com/xwiki/
>
> Eduard first officially contacted the XWiki community through GSoC in
> 2008 on an XEclipse (https://github.com/xwiki/xwiki-eclipse) project.
> After having done a great job on XEClipse he became an employee while
> he was doing the 1st year of his master's studies.
>
> He has been an SAS employee for almost 3 years now and has worked in the
> Research Team developing 3 open source research projects focused on
> XWiki: Concerto (https://github.com/xwiki-contrib/concerto), Scribo
> (https://bitbucket.org/scribo/scribo) and now Wiki3.0
> (https://github.com/xwiki-contrib/wiki30).
>
> He was a GSoC co-mentor in 2010 and 2011 for XEclipse and Android
> client projects.
>
> - Eduard is very active on the mailing list and answers efficiently
> - Eduard worked and will work a lot on wiki manager and XEM which
> really need some love
> - Eduard has sent a lot of good pull requests these days and I'm tired of
> applying them ;)
>
> Some stats:
>
> - Mails sent by Eduard (121):
> http://xwiki.markmail.org/search/?q=from%3AEduard%20Moraru
>
> - Eduard's github profile
> https://github.com/Enygma2002
>
> - Eduard's Masterbranch profile
> https://www.masterbranch.com/developer/Enygma
>
> - Eduard's ohloh profile
> https://www.ohloh.net/accounts/Enygma
>
> I will add that it's very pleasant to work with Edy.
>
> I'm +1 to make Edy a committer.
>
> --
> Thomas Mortagne
>
--
Thomas Mortagne
5 +1 and no other votes, done.
On Fri, Sep 16, 2011 at 5:47 PM, Thomas Mortagne
<thomas.mortagne(a)xwiki.com> wrote:
> Hi devs,
>
> Right now it's pretty dangerous to manipulate script services from Velocity.
>
> Even if you just try to get a service that does not exist, that will
> produce an exception which will kill your script for good. It's a pain to
> support use cases like using a service only if it's present, falling back
> on something else otherwise.
>
> For example, even if you have disabled CSRF, XWiki will fail in many
> places if you even think of removing the csrf jar. That seems pretty bad
> to me.
>
> The exception does not bring much value to scripts here (even for
> scripts supporting it), so I propose to return null instead of
> throwing an exception.
>
> WDYT ?
>
> Here is my +1.
>
> --
> Thomas Mortagne
>
--
Thomas Mortagne
Hi all,
Some time ago we discussed [1] a proposal
http://incubator.myxwiki.org/xwiki/bin/view/Improvements/XWikiOrgProposal2
about changing the way our community website (www.xwiki.org) looks
(improved homepage, improved navigation, new logo [2], new skin, community
wiki [3], etc.).
Since then there have been some small collaborative efforts to make this new
site come true, and I want to thank Sergiu Dumitriu, Marta Girdea,
Jean-Vincent Drean, Raluca Stavro, Vincent Massol, Silvia Rusu, Raluca
Moisa, Stefan Orzu and all the people that gave feedback for their help.
In order to speed up the process we also created a development wiki
http://newxwiki.xwiki.org/
where you can log in with your xwiki.org credentials and work on the
improvements you want to make. After the work is finished it will be ported
to xwiki.org. This way you can experiment with the way your code looks and
behaves without interfering with the live site.
Silvia and I also created a plan for the development of the new site
http://incubator.myxwiki.org/xwiki/bin/view/Improvements/XWikiOrgPlanning
and we have split it into 4 stages. We are currently in Stage 1 of the
development.
Each entry has its own link with more information/mockups/code about the
feature. Features that have been started also have a JIRA issue
attached to them.
We would be very happy if the community could get involved in helping us
make this happen. We still need to work on deciding the content for some
sections, we need better design proposals for some elements and we need lots
of implementation work to make everything a reality.
If you want to participate you should pick something from the planning and
announce it on this thread so that we know who is taking care of which feature.
Thank you,
Caty
References:
[1] [Proposal] XWiki.org horizontal navigation + home page
http://markmail.org/thread/tfmrludhw2yh5tcn
[2] [Proposal] XWiki.org Logo Challenge - Round 2
http://xwiki.markmail.org/thread/pkdd5kijpt2yqeph
[3] [Proposal] XWiki.org Community Page
http://markmail.org/thread/b3pctp2kepcprfaf
Hi devs,
Right now it's pretty dangerous to manipulate script services from Velocity.
Even if you just try to get a service that does not exist, that will
produce an exception which will kill your script for good. It's a pain to
support use cases like using a service only if it's present, falling back
on something else otherwise.
For example, even if you have disabled CSRF, XWiki will fail in many
places if you even think of removing the csrf jar. That seems pretty bad
to me.
The exception does not bring much value to scripts here (even for
scripts supporting it), so I propose to return null instead of
throwing an exception.
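The change boils down to catching the lookup failure inside the script service manager; a generic sketch with illustrative names, not the actual component manager API:

```java
import java.util.Map;

// Generic sketch of the proposal: a script service lookup that returns null
// for a missing service instead of letting the lookup exception kill the
// Velocity script. Names are illustrative, not the actual XWiki API.
public class SafeScriptServiceManager {
    private final Map<String, Object> services;

    public SafeScriptServiceManager(Map<String, Object> services) {
        this.services = services;
    }

    // Old behavior: throw when the service is not registered.
    public Object getOrThrow(String name) {
        Object service = services.get(name);
        if (service == null) {
            throw new IllegalArgumentException("No such service: " + name);
        }
        return service;
    }

    // Proposed behavior: return null so scripts can test and fall back.
    public Object get(String name) {
        try {
            return getOrThrow(name);
        } catch (RuntimeException e) {
            return null;
        }
    }
}
```

With this, a Velocity script can simply test the service (e.g. an #if on it) and fall back when the jar is absent, instead of dying with an exception.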
WDYT ?
Here is my +1.
--
Thomas Mortagne
Hi Devs,
The last database migrator is very old now; it was on revision 15428 of our
SVN repository.
The rule at that time was to use the revision number of the SVN commit for
the database version.
So our database is currently at version 15428.
Since we no longer have revision numbers in Git, and the database version
should be an integer, we need to vote on a new convention:
A) continue from where we are, incrementing by one, so the next version will
be 15429
B) use 16000, or another round number, for the next revision and increment by
one for the next version
C) use a mix with the current XWiki version, so the next will be 32000, and we
have room for 1000 versions during the 3.2 releases.
Personally, since database changes are really rare, since we were already
jumping, and since there is plenty of room for numbers, I prefer meaningful
numbers and I prefer C. The major advantage is that the number is in the
database, so if you have a DB dump, you may quickly know the oldest version
this dump is compatible with without needing some reference list.
So my +1 for C.
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi,
I'm proposing to move the following modules from xwiki-platform-core to separate git repos in a xwiki-contrib organization on GitHub:
xwiki-platform-calendar
xwiki-platform-exo
xwiki-platform-adwords
xwiki-platform-alexa
xwiki-platform-photoalbum
xwiki-platform-s5
xwiki-platform-workstream
Rationale:
* They're no longer working or supported
* We can move them back if the xwiki dev team wants to support them again in the future
* It's cleaner than having a retired module in the xwiki organization since a) it's not "polluting" the list of repos supported by the xwiki dev team and b) it allows them to be separated in repos
Future:
* Also move modules currently in svn contrib to xwiki-contrib org. Note that we need to verify if the svn app works with the GitHub svn integration too since several users of svn contrib are using it.
Here's my +1
Thanks
-Vincent
Hi (especially Sergiu),
I think you worked on removing the dependency on Albatross, right?
Is it finished? Can we move the Albatross skin to contrib/retired?
See http://xwiki.markmail.org/thread/jx5vyqtrwwaidfka
Thanks
-Vincent
Hi devs,
I propose to expose XWikiContext.getUserReference() to the web API
(api.Context), specifically:
/**
 * Returns the document reference for the profile page of the current user
 * which made the request. If there's no currently logged in user in XWiki
 * then the returned reference is for <i>XWiki.XWikiGuest</i> which
 * represents any anonymous user. The returned reference can always be
 * considered an absolute document reference, meaning that
 * <code>getUserReference().getWikiReference().getName()</code> will always
 * return the name of the user's wiki.
 *
 * @return The document reference for the current user which made the
 *         request.
 */
public DocumentReference getUserReference()
{
    return getXWikiContext().getUserReference();
}
Some more details are available at http://jira.xwiki.org/browse/XWIKI-6964
A pull request is available at
https://github.com/xwiki/xwiki-platform/pull/23
Here's my +1
Thanks,
Eduard
Hi devs,
Just seen this in Jenkins:
https://skitch.com/e-vmassol/f3rhr/update-center-jenkins
Basically each page has a "Localize this page" button at the bottom and when clicked you have a dialog box to provide translations in all languages for the strings to translate on the page.
I found this nice and easy for users since:
- they have the context (they can see the rendered page) when performing the translation
- it's easy and quick to use (no need to leave their application)
I haven't thought about it but an extension to do this in XE could be great.
I don't know how we could autodiscover the strings to translate though but it's probably possible.
Just an idea.
-Vincent