On Fri, Sep 16, 2011 at 18:04, Sergiu Dumitriu <sergiu(a)xwiki.com> wrote:
> On 09/16/2011 10:04 AM, Denis Gervalle wrote:
> > Hi Devs,
> >
> > The last database migrator is very old now; it was on revision 15428 of
> > our SVN repository.
> > The rule at that time was to use the revision number of the SVN commit
> > for the database version.
> > So our database is currently at version 15428.
> >
> > Since we no longer have revision numbers in Git, and since the database
> > version should be an integer, we need to vote on a new convention:
> >
> > A) continue from where we are, incrementing by one, so the next version
> > will be 15429
> >
> > B) use 16000, or another round number, for the next version, and
> > increment by one afterwards
> >
> > C) use a mix with the current XWiki version, so next will be 32000, and
> we
> > have room for 1000 versions during the 3.2 releases.
>
> D) Count the number of git commits on the trunk, with:
>
> git log --oneline | wc -l
>
> This would give a number equivalent to the SVN revision number.
>
Well, not really: it only follows a single branch of commits, while SVN
revision numbers are global to the whole repository.
> > Personally, since database changes are really rare, since we were already
> > jumping, and since there is plenty of room for numbers, I prefer
> > meaningful numbers, so I prefer C. The major advantage is that the number
> > is in the database, so if you have a db dump, you may quickly know the
> > oldest version this dump is compatible with, without needing some
> > reference list.
> >
> > So my +1 for C.
>
> I prefer to use something more stable, and C) looks like the better
> option for me as well.
>
So we agree on C), with four +1, no 0, no -1.
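The agreed convention C can be sketched as a small helper. This is a minimal illustration of my reading of the thread (the class and method names are made up, not actual XWiki code): the database version base is derived from the XWiki major.minor version, leaving room for 1000 schema versions per minor release.

```java
// Hypothetical sketch of convention C: derive the database version base
// from the XWiki version. For XWiki 3.2 this yields 32000, matching the
// number agreed in the thread, with room for 1000 versions per release.
public class DbVersion {
    public static int baseVersion(int major, int minor) {
        // e.g. (3 * 10 + 2) * 1000 = 32000
        return (major * 10 + minor) * 1000;
    }
}
```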
>
> --
> Sergiu Dumitriu
> http://purl.org/net/sergiu/
> _______________________________________________
> devs mailing list
> devs(a)xwiki.org
> http://lists.xwiki.org/mailman/listinfo/devs
>
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi devs,
After some analysis of XWiki startup, there are many issues in the way the DB
schema is updated and migrators are executed (see XWIKI-1859, XWIKI-2075
and XWIKI-2066). There is no consistency between them, which really does not
seem appropriate.
The schema is updated in many different places (unless disabled by config):
- when UpdateDatabase is called for the first time; this one is called any
time we access a sub-wiki in a farm
- when CheckHibernate is called and no session factory exists; this one is
called in many places, before any transaction-based request
- before a migration of a sub-wiki db
The schema update function itself has some issues as well:
- it is a synchronized function, and the config check is done after
synchronization
- apart from the schema update done by the generated Hibernate script, it
does some updates on the data in the database, like a migrator, but it does
so even when there is no need for an update, as opposed to a migrator
- these updates are written in plain SQL, not HQL, and are partly MySQL
oriented
- it uses a different xwiki.cfg parameter than the migrators to be disabled,
so it is possible to have a migration without having the correct schema
On the other hand, migrators are launched only at initial startup, as opposed
to the lazy updating of the schema. Since we use Struts, it was not easy to
have them launched before the first request, but I do not see this as an
issue, since any admin will at least do such a request, and a request would
not be accepted earlier anyway. The issues with migrators are:
- migrators are executed on new XWiki databases. In particular, when you
create a new wiki in a farm, no version is set in the DB, and the next time
the servlet container is restarted, all migrators are executed on this new
wiki database.
- migrators may fail and the wiki starts anyway (I already have a patch for
that)
- migrators can be disabled, but then there is no guarantee that a database
is at the correct version for the running core. This is particularly
annoying for my migration, since old ids could be kept: document loading
will fail for mandatory documents, so these will be recreated, producing
many duplicated documents in the database, and therefore corrupting all
accessed wikis that have not been migrated.
So, I propose:
1) to join together schema update and migration, using xwiki.store.migration
(default to 0) && xwiki.store.hibernate.updateschema (deprecated, default to
true already) && xwiki.store.migration.databases (default to main wiki only)
for enabling them
2) to convert the migration-like requests done in the schema update into a
very old migrator (before the first one)
3) while still allowing migrators (and therefore the schema update) to be
disabled, to keep a lazy check on the database version required by the
running core, and either:
a) fail to return a given wiki if its database is outdated
b) OR do a lazy update if migration is enabled
4) to store the current DB version of the core in the database when creating
new wikis
5) to consider a DB without a version and without an xwikidoc table to be
empty and at the current version
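Proposals 4) and 5) together amount to a classification of a wiki database at startup. The sketch below is only an illustration of that decision logic; the class, method, and parameter names are made up and do not match the actual oldcore migration code.

```java
// Hypothetical sketch of the startup check implied by proposals 4) and 5):
// classify a wiki database from its stored version and its xwikidoc table.
public class DbStartupCheck {
    public enum State { UP_TO_DATE, NEEDS_MIGRATION, EMPTY }

    /**
     * @param storedVersion the version found in the DB, or null if absent
     * @param hasXwikidocTable whether the xwikidoc table exists
     * @param coreVersion the DB version the running core expects
     */
    public static State classify(Integer storedVersion,
            boolean hasXwikidocTable, int coreVersion) {
        if (storedVersion == null && !hasXwikidocTable) {
            // Proposal 5): no version and no xwikidoc table means a fresh,
            // empty database: treat it as already at the current version.
            return State.EMPTY;
        }
        int version = storedVersion == null ? 0 : storedVersion;
        return version < coreVersion ? State.NEEDS_MIGRATION : State.UP_TO_DATE;
    }
}
```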
For this to be implemented, I need to know the current core DB version at
startup. I see several ways to do that:
A) store it in a static constant in the XWiki class, putting the
responsibility on migration developers to update it
B) compute this value by instantiating all migrators to know their
resulting DB version,
a) and store it in the XWiki singleton
b) and cache it in the migration manager, keeping it available from the
current XWiki singleton, for potential lazy migration
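Option B) boils down to taking the maximum target version over all registered migrators. Here is a minimal sketch of that computation; the Migrator interface and its getTargetVersion() method are assumptions for illustration, not the actual oldcore API.

```java
import java.util.Collection;

// Hypothetical sketch of option B): derive the core's expected DB version
// by asking every migrator for the version it produces and taking the max.
public class CoreDbVersion {
    public interface Migrator {
        int getTargetVersion();
    }

    public static int compute(Collection<? extends Migrator> migrators) {
        int version = 0;
        for (Migrator m : migrators) {
            version = Math.max(version, m.getTargetVersion());
        }
        return version;
    }
}
```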
I am +1 for most of these, except 3)b), for which I am not sure it is fully
safe, and which implies B)b). I am undecided regarding A) or B)a), with a
preference for the latter, but I wonder whether it should take
xwiki.store.migration.ignored into account or not (probably not).
WDYT ?
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi Devs,
Since I fell over this well-known XWiki caveat recently, I would like to
improve this.
Currently, XWikiDocument.getId() is almost equivalent to
String.hashCode(fullName:lang). Since this is a really poor hashing method
for small changes, the risk that two documents with similar names of the
same length collide is really high. This id is used by the Hibernate store
for document unicity, and it really needs to be unique, while being at most
a 64-bit numeric at the same time. Currently we use only 32 bits, since Java
hash codes are limited to 32-bit integers.
The ideal would be not to have this link between ids and the documents'
fullName:lang, but converting the current implementation is not really easy.
This is probably why XWIKI-4396 has never been fixed. Therefore, my current
goal is to reduce the likelihood of a collision by choosing a better hashing
method, taking into account the fact that document full names are short
strings and that the number of unique ids required is very limited (since
unicity is only expected within a given XWiki database) compared to the
64-bit integer range.
So we need to choose a better algorithm, and here are IMHO the potential
options:
A) use a simple but more efficient non-cryptographic hashing function that
runs on 64 bits. I was thinking of using the algorithm proposed by Professor
Daniel J. Bernstein (DJB), since it is a well-known, widely used, easy to
implement algorithm with a good distribution on small strings.
Pro: no dependency; fast; 64 bits, better than hashCode
Cons: probably more risk of collision compared to MD5 or SHA, but far less
than now; requires a db migration of all document keys
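For concreteness, here is a minimal sketch of option A), assuming the classic djb2 variant of Bernstein's hash (seed 5381, multiplier 33) widened to 64-bit arithmetic; the class name is mine, not existing XWiki code.

```java
// Sketch of option A): a 64-bit djb2 hash (Daniel J. Bernstein's classic
// constants). Running the accumulator as a long gives the full 64-bit
// range instead of String.hashCode()'s 32 bits.
public class Djb2Hash {
    public static long hash64(String key) {
        long hash = 5381L;
        for (int i = 0; i < key.length(); i++) {
            // hash = hash * 33 + c, in wrapping 64-bit arithmetic
            hash = ((hash << 5) + hash) + key.charAt(i);
        }
        return hash;
    }
}
```

Such a hash could be applied to the fullName:lang key exactly where String.hashCode() is used today, which is why a migration of all existing document keys would be needed.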
B) use MD5, or an even stronger SHA-1 or SHA-256 algorithm, from JCA,
truncating to the lower 64 bits. Note that oldcore already uses MD5 for
hashing a whole XWikiDocument to provide the API with getVersionHashcode(),
and for the validation hash used by the persistent login manager. The first
uses Object.hashCode() as a fallback, which is really bad and defeats the
purpose. The second does not provide any fallback and may fail unexpectedly.
For our case, if we really want a fallback, we need to store the hashing
algorithm used in the database at creation time, and, in any case, fail when
it is not available.
Pro: already used in oldcore; probably fewer collisions; with a fallback,
really flexible, since it would be possible to choose the algorithm at
creation time, and it does not require a full migration of existing
databases.
Cons: requires at least a DB schema change to add the hashing algorithm,
probably as a column of xwikidbversion; if this config value is broken, the
whole DB is broken
C) use our own MD5 implementation when the JCA provider is missing it. I was
thinking of integrating something like
http://twmacinta.com/myjava/fast_md5.php (non-native version), which is LGPL.
This will ensure the availability of the hashing algorithm while having a
rather strong one.
Pro: no dependency; could also provide MD5 to getVersionHashcode() and the
persistent login manager
Cons: requires a db migration of all document keys
A) is really quick to implement, simple, and the least risky, but some may
find it insufficient. Caleb?
Obviously, B) with a fallback is really nice, but I wonder if it is not
overkill.
I am worried about B) without a fallback, but maybe I want it too flawless.
C) is rather solid, while staying simple, but maybe overkill too.
I am really undecided.
WDYT ?
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Hi devs,
As part of the 3.2 roadmap, the plan for the workspaces feature was to add
some hooks into the platform that could accept a workspaces extension if an
admin decided to install it.
Without adding these hooks, there currently isn't any mechanism (like
Interface Extensions, but not limited to that) that allows a simple
application to modify whatever it wishes (like user profile sections,
administration sections, the top menu, etc.), so I went ahead and added some
code into the platform that executes only when the workspaces extension
(wiki pages and component/service) is installed.
I've created http://jira.xwiki.org/browse/XWIKI-6991 with some details about
what I have done and made a pull request at
https://github.com/xwiki/xwiki-platform/pull/24 since I did not want to rush
into applying the changes without running them by you guys.
I've broken the issue down into subtasks with separate commits to make the
review easier.
There currently is a demo server for the workspaces feature at
http://wiki30-demo.xwiki.com but I will have to update it tomorrow with the
latest version. Not much has changed; you can see the visible changes in the
specific Jira subtasks (screenshots).
The goal would be for this to make it into 3.2 so that people could then
install (the soon to be released) workspaces extension and try it out.
Please take some time, if possible, to look over the proposed changes and
spot any problems.
Thanks,
Eduard
Hello,
Using XEM 3.1, in only one of the wikis (the others are fine) I cannot get
the Activity Stream working. The log shows this error every time I go to
the WebHome page:
ERROR o.x.v.i.DefaultVelocityEngine - Left side ($events.size()) of '>'
operation has null value at unknown namespace[line 820, column 22]
If I remove the Activity Stream gadget, it stops showing this error. I tried
to reinstall Activity Stream and the XE 3.1 xar, but with the same result.
Best Regards,
Ivan
Sent from the XWiki-Dev mailing list archive at Nabble.com.
3 +1 and one 0, done.
On Tue, Sep 20, 2011 at 2:32 PM, Thomas Mortagne
<thomas.mortagne(a)xwiki.com> wrote:
> Hi devs,
>
> We never use it AFAIK so I propose to remove it from default XE distribution.
>
> XE is pretty big right now so would be cool to reduce its size when possible.
>
> WDYT ?
>
> --
> Thomas Mortagne
>
--
Thomas Mortagne
Hi devs,
We never use it AFAIK so I propose to remove it from default XE distribution.
XE is pretty big right now, so it would be cool to reduce its size when possible.
WDYT ?
--
Thomas Mortagne
Hi devs,
We're doing badly with our release schedule for 3.2 ATM. We're already one week late and still lagging. Outstanding blockers:
- recent discussions on the new Sheet module strategy for page naming conventions (Owner: Marius)
- default permanent storage directory location to finish (Owner: Thomas?)
- lucene improvements not committed yet (Owner: Sergiu)
- failing functional tests to fix (Owner: everyone)
Thus I propose that:
- we don't add new stuff to master except bug fixes and commits related to the issues above
- we push the 3.2M3 release to Monday 26 Sep. This means being ready to release this Thursday 22nd so that we can be sure we'll release on Monday 26.
- we push the 3.2RC1 release to Monday 3rd of October (one week after M3 only)
- 3.2Final stays on the 10th of October.
WDYT?
Thanks
-Vincent
Hi devs,
Thomas just told me that he's made a change for Extension Manager (apparently there was a vote for it and I missed it - I can't find it so if anyone has the link please point me to it) and that by default now the Extension Manager uses the temporary directory to store installed extensions (Before it was using ~/.xwiki).
I thus want to throw my -1 to release 3.2 final with this (I'd also much prefer if 3.2M3 doesn't have it, as much as possible). The reason is that the tmp/ directory can get wiped at any time, and the user can thus suddenly lose all their installed extensions. I believe we need a permanent location for that.
We have 2 general options IMO:
1) Don't start xwiki if the work directory is not explicitly configured
2) Make the default EM work directory be the same as before (ie ~/.xwiki), when the work dir config property is not defined
I also want to propose that for the standalone distribution of XE (the jetty/hsqldb package) we use work/ as the work directory. We already create this directory and we should use it (it's already used by our lucene indexing BTW).
WDYT?
Thanks
-Vincent