Hi devs,
I'd like to propose forking wikimodel (http://code.google.com/p/wikimodel/) and moving its source code into our code base in XWiki Rendering as a separate module for now.
Motivation
========
* Wikimodel has been inactive for years (since 2009). Actually that's not quite true: there has been one developer working on it regularly, and that's Thomas Mortagne…
* We heavily depend on wikimodel in the XWiki Rendering module (for our syntax parsers) which is a key module for XWiki
* It's more difficult for us to contribute to the wikimodel project since it means:
** committing in a different project with different rules
** there's nobody doing releases on the wikimodel project and we need its releases to be synced with our releases, since otherwise we cannot release with a SNAPSHOT dependency
** there's no community there so it's not fun and doesn't help for quality control/reviews/etc
** since we push XWiki Commons and XWiki Rendering to Maven Central we also need the wikimodel releases to be pushed to Central which is not happening now
* The wikimodel project has a different scope than our needs. Mikhail (owner and admin of wikimodel - not very active since 2009, some commits here and there) wanted it to remain only for wiki syntaxes. We added support for HTML parsing to it but Mikhail never liked it and wanted us to move it to XWiki.
* We have some impedance mismatch between the wikimodel model and the XWiki Rendering model, which forces us into some convolutions in the code and leads to issues still being open in our JIRA (they've been open for a long time now)
* We believe wikimodel would benefit from a larger and active community within the XWiki ecosystem. Wikimodel has been stagnating for years and we'd like it to live on and evolve.
Action Plan
=========
Thus Thomas and I are proposing to do the following:
* Move the sources into a new rendering module as is and use it as a library (same as now, except that we rename the module and release it under the XWiki umbrella).
* Modify all headers to put our LGPL header everywhere
* We keep the attribution, as recommended by the ASL (see http://www.apache.org/foundation/license-faq.html#Distribute-changes), by adding a comment to all sources explaining where the source comes from, under which license it was released, who authored the initial code and how XWiki committers have participated in the wikimodel project. We also put that information in the NOTICE file.
* We modify the source code slowly over time to integrate it cleanly with our code, remove the hacks we had to do, and bring improvements
* We post a mail on the wikimodel mailing list explaining all this and inviting the current wikimodel committers to become committers on the XWiki Rendering module (provided they agree to follow our dev rules). We also explain how contributors can contribute (link to JIRA, link to GitHub for pull requests, etc.)
Related question (not part of the vote)
=============================
* We could decide to move XWiki Commons and XWiki Rendering under the ASL since they're libraries, and for libraries the ASL is the license that makes them easiest to use from code under any other license. Right now ASL code cannot use our Rendering module because we're LGPL.
Here's my +1 to this plan.
I'm also currently +1 to brainstorm about moving XWiki Commons and XWiki Rendering to the ASL.
Thanks
-Vincent
Hi, I'm new to using XWiki.
I'm trying to make a plugin that allows me to create users on the wiki,
but I don't know where to start. I've read the documentation, but I cannot
connect the pieces.
I would appreciate your help.
The XWiki development team is proud to announce the availability of
XWiki Commons, XWiki Rendering, XWiki Platform, XWiki Enterprise and
XWiki Enterprise Manager 3.5 Milestone 1.
This release brings many improvements to the Extension Manager, a new
macro for displaying documents in a live table and quite a few bug
fixes. This is the only milestone of the 3.5 release, which is the
last release of the 3.x cycle. The next planned release is 3.5 final.
See the full release notes at
http://www.xwiki.org/xwiki/bin/view/ReleaseNotes/ReleaseNotesXWikiEnterpris…
for more details.
Thanks
-The XWiki dev team
Hi devs,
I know this subject will seem to you already voted on and discussed in
http://xwiki.markmail.org/thread/fsd25bvft74xwgcx
but following the remarks and the discussion in that thread, I have
largely improved the proposed changes.
This is an important matter, so I prefer to summarize it here to be sure we
all really agree on this.
To summarize, the current situation is:
- document id
  - used in the document table, rcs, attachments...
  - a simple 32-bit string hashcode of the locally serialized
    document reference, including the language for translated documents
  - stored in a 64-bit field
- object id
  - used in the object and property tables, but also in the statistics tables
  - a simple 32-bit string hashcode of the concatenation of the document
    reference, the class reference and the object number
  - stored in a 32-bit field (except on Oracle, where the mapping is
    32-bit but the storage is larger)
The vote is about:
- document id
  - use the lower 64 bits of an MD5 hashcode (see the sketch after this list)
  - the base key for the hashcode is the document reference serialized with
    a LocalUidEntityReferenceSerializer
  - for translated documents, the current locale is appended to the result,
    until locales are integrated in references
  - format for an original document: 5:space8:document
  - format for a translation: 5:space8:document2:fr
- object id
  - use the lower 64 bits of an MD5 hashcode
  - the base key for the hashcode is the BaseObjectReference serialized with
    a LocalUidEntityReferenceSerializer
  - the current format would be: 5:space8:document12:xspace.class[0]
  - if my proposal in the object reference thread is adopted:
    5:space8:document18:6:xspace5:class[0]
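To make this more concrete, here is a minimal, self-contained sketch (my illustration only, not the actual implementation) of how such an id could be computed from the serialized key shown above. In particular, the charset and the choice of which 8 of the 16 MD5 digest bytes count as the "lower" 64 bits are assumptions:

import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IdSketch
{
    /**
     * Compute a 64-bit id from the MD5 hash of an already serialized key,
     * e.g. "5:space8:document", or "5:space8:document2:fr" for a translation.
     * The last 8 bytes of the 16-byte digest are taken as the "lower" 64 bits
     * here; the real implementation may pick the other half.
     */
    public static long computeId(String serializedKey)
    {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(serializedKey.getBytes(Charset.forName("UTF-8")));
            long id = 0;
            for (int i = digest.length - 8; i < digest.length; i++) {
                id = (id << 8) | (digest[i] & 0xFF);
            }
            return id;
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed to be available in every JVM.
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args)
    {
        // Old scheme for comparison: a plain 32-bit String hashcode of the locally
        // serialized reference (assumed here to look like "space.document").
        System.out.println("old id: " + "space.document".hashCode());
        // Proposed scheme: lower 64 bits of the MD5 of the length-prefixed key.
        System.out.println("new id: " + computeId("5:space8:document"));
        System.out.println("new id (fr): " + computeId("5:space8:document2:fr"));
    }
}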
Since changing only the document ids would not really help on its own
(document references are used in object ids, so an unambiguous document
could still receive ambiguous objects), I do not advise splitting the change.
Moreover, this is a really sensitive change in the database, so it is better
not to multiply such changes. I think the upcoming 4.0 is a really good time
to introduce this change, so I propose to introduce it in version 40000 of
the database (4.0M1 release).
But I would like to use it internally earlier, so I would be pleased if we
could settle this thread and the previous one before then.
It implies the following migration for existing instances:
- customers' custom mappings have to be adapted before the migration,
  including dynamic ones, which may not be so easy; but custom mapping is
  already rarely used and in fact very rarely requires any change
- change XWikiDocument to provide the key required for ids and, by the way,
  also use that key (the non-local version) for the document cache
- refactor the BaseElement hierarchy to provide long ids (no more integers)
  based on references (a generic way to have ids for any element)
- change the Hibernate mapping for all object ids
- provide dynamic schema updates using Liquibase to fix all object id
  types, including those in custom mappings and collections
- migrate document ids in HQL for the persisted classes XWikiDocument,
  XWikiRCSNodeInfo, XWikiLink, XWikiAttachment and DeletedAttachment
  (see the sketch after this list)
- migrate object ids in HQL for the persisted classes BaseObject, *Property,
  internal custom mapped classes and dynamic custom mapped classes
- migrate object ids in HQL for the custom statistics classes derived from
  XWikiStats
- migrate ids in SQL for all relational collections of the above migrated
  tables
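As an illustration of the HQL migration steps above, here is a hedged sketch of what a single document id conversion could look like; the entity and property names ("XWikiDocument", "id") are assumptions for the example and may not match the actual Hibernate mapping:

import org.hibernate.Session;

public class DocumentIdMigrationSketch
{
    /**
     * Replace one old document id with its newly computed 64-bit value using a
     * database-independent HQL bulk update. Assumes the caller wraps this in
     * the per-id transaction described in the list below.
     */
    public static int migrateDocumentId(Session session, long oldId, long newId)
    {
        return session.createQuery(
                "update XWikiDocument doc set doc.id = :newId where doc.id = :oldId")
            .setLong("newId", newId)
            .setLong("oldId", oldId)
            .executeUpdate();
    }
}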
To make this migration as safe as possible:
- Liquibase provides a safe way to change the schema
- all id conversions are gathered from the database in a first, single
  read-only transaction, and the new ids are computed
- potentially already migrated ids are detected, allowing the process to
  fail and be restarted
- ids are replaced using a safe algorithm that supports non-circular
  conflicts between old and new ids (very unlikely anyway, since we move
  from 32 to 64 bits); see the sketch after this list
- a single transaction is used for each id conversion, replacing it in all
  related tables
- database-independent queries (HQL) are used as much as possible; only bulk
  updates on collections, which are not supported by Hibernate, are done in a
  minimalistic SQL update statement
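Here is also a minimal sketch of the kind of ordering that can handle the non-circular conflicts mentioned above; it only illustrates the idea and is not the actual algorithm:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class IdConversionOrderingSketch
{
    /**
     * Order old-to-new id conversions so that a new id is never written while
     * the same value is still in use as a not-yet-migrated old id.
     * Throws if a circular conflict is found (that case would need a temporary id).
     */
    public static List<Map.Entry<Long, Long>> order(Map<Long, Long> oldToNew)
    {
        List<Map.Entry<Long, Long>> ordered = new ArrayList<Map.Entry<Long, Long>>();
        Set<Long> pendingOldIds = new HashSet<Long>(oldToNew.keySet());
        Deque<Map.Entry<Long, Long>> queue =
            new ArrayDeque<Map.Entry<Long, Long>>(oldToNew.entrySet());
        int postponed = 0;
        while (!queue.isEmpty()) {
            Map.Entry<Long, Long> conversion = queue.poll();
            boolean conflict = pendingOldIds.contains(conversion.getValue())
                && !conversion.getValue().equals(conversion.getKey());
            if (!conflict) {
                // Safe to apply: no row still waiting for migration uses this new id.
                ordered.add(conversion);
                pendingOldIds.remove(conversion.getKey());
                postponed = 0;
            } else {
                // Postpone until the conflicting row has been migrated first.
                queue.add(conversion);
                if (++postponed > queue.size()) {
                    throw new IllegalStateException("Circular id conflict");
                }
            }
        }
        return ordered;
    }
}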
Some help with testing the migration on different
environments is requested! (I am testing deeply on MySQL myself.)
I will commit my branch on platform soon.
Here is my +1.
--
Denis Gervalle
SOFTEC sa - CEO
eGuilde sarl - CTO
Currently all xwiki-platform submodules have the following in their pom.xml
definition:
<parent>
  <groupId>org.xwiki.platform</groupId>
  <artifactId>xwiki-platform-...</artifactId>
  <version>3.5-SNAPSHOT</version>
</parent>
The problem I see is that the parent version is hardcoded in every
submodule, thus making some tasks very, very difficult.
This is because Maven cannot read a placeholder value placed inside the
<version> tag.
There seems to be some work on the Maven side for this issue:
http://jira.codehaus.org/browse/MNG-624 but ...
My proposal is to either:
1) remove the <version> tag in the parent definition from every child
module and let inheritance do its job
2) use the <relativePath> *if the child can have access to its parent pom
by relative path*
I haven't tested it yet, but if you remove it, will the submodule know from
which parent (which could have many versions) to inherit the <version>
property?
WDYT?
ing. Bogdan Flueras
Tel: +33666116067
Hi,
I'd like to switch filesystem attachments to begin using the persistent storage directory now instead of the work directory.
This means there's a new way of calculating where the attachments will be stored so it might fail on upgrade.
I would like to not do any migration and just add to the release notes because:
#1: it doesn't cause any permanent harm as long as nobody adds attachments while it's in what is an obviously broken state.
#2: administrators who have FS attachments enabled are probably going to know what's going on.
#3: migration code is scary; it requires lots of work and lots of review, and even if it works,
people might feel violated having files shuffled around on their system without their permission.
WDYT?
Caleb
Hi devs,
I've been working on Jenkins Job generation (see http://jira.xwiki.org/jira/browse/XCOMMONS-87).
The idea is to have a Maven project that you run and which generates Jenkins jobs automatically and sets them up on ci.xwiki.org
The pros are:
* Easy to create new jobs whenever we create a new branch
* Allows us to manage our Jenkins configuration by storing it in our SCM
Proposal 1
=========
xwiki-commons/xwiki-commons-tools/
|_ xwiki-commons-tool-jenkins/
|_ xwiki-commons-tool-jenkins-base/
|_ xwiki-commons-tool-jenkins-commons/ <-- parent = xwiki-commons-tool-jenkins-base
xwiki-rendering/xwiki-rendering-tools/
|_ xwiki-rendering-tool-jenkins/ <-- parent = xwiki-commons-tool-jenkins-base
xwiki-platform/xwiki-platform-tools/
|_ xwiki-platform-tool-jenkins/ <-- parent = xwiki-commons-tool-jenkins-base
xwiki-enterprise/xwiki-enterprise-tools/
|_ xwiki-enterprise-tool-jenkins/ <-- parent = xwiki-commons-tool-jenkins-base
xwiki-manager/xwiki-manager-tools/
|_ xwiki-manager-tool-jenkins/ <-- parent = xwiki-commons-tool-jenkins-base
To run them, you go into each top-level project's tools directory and run "mvn deploy". It creates and deploys the Jenkins jobs, updating them if they already exist.
Proposal 2
=========
Create a new Git repository called xwiki-jenkins in https://github.com/organizations/xwiki
Have a single pom.xml in there which generates all Jenkins jobs (it's possible to have various profiles if we want to generate jobs only for a subset of modules).
You run it using "mvn deploy" too.
Proposal 3
=========
Extends Proposal 2 but merges it with the repo currently named "xwiki-debug-eclipse". That repo would be renamed to "xwiki-tooling" or "xwiki-development-tools" or simply "xwiki-tools" and would have 2 directories at the top:
xwiki-debug-eclipse/
xwiki-jenkins/
Note that both xwiki-debug-eclipse and xwiki-jenkins should follow the same version and be in sync with the other top level repositories since they're tools for a given version of commons/rendering/platform/xe/manager.
If they're together it means they'll share the same JIRA project and will use 2 components to differentiate them, which is perfectly fine.
Conclusion
=========
I think I'm tempted a bit more by Proposals 2 and 3 because:
* It allows generating all jobs in one go
* Jenkins is not really related to the rest of the build, and thus it's not completely "normal" for its config to be mixed with building the runtime. For example, when we deploy XWiki Commons to Maven Central, users will see the Jenkins config there too, with stuff that is only valid for xwiki.org.
* It makes maintenance simpler to have it all together in one POM.
I'm undecided between Proposals 2 and 3.
WDYT?
Thanks
-Vincent
Hi,
I've looked a bit at the activity stream performance while investigating the
performance issue present since 3.2+ (http://jira.xwiki.org/browse/XWIKI-7520).
Beyond this issue, I've been a bit puzzled by the logic of the activity
stream implementation.
Right now it seems the activity stream is generating many many queries on
the base data stored in the activity stream.
However I've not been able to identify the exact logic it is following as
it seems to be quite complex.
The whole point of the activity stream when it was initially implemented
was to move the work at saving time instead of having the work at display
time.
As the feature got more complex, it seems we moved away from that solution,
and now we again have a huge amount of work at display time.
Now maybe the actual logic of what we want to display requires this, or
maybe not and we haven't gone in the right direction to implement this.
I think that before we reimplement the activity stream in Java, as I've seen
suggested in the feature survey, we should put the actual feature and logic
on paper and make sure we are going the right way, because otherwise
reimplementing it in Java won't solve anything.
I think it would be really good to go back to the initial objective of
having the effort at save time and then having the display side only read
that data and render it with simple templating (see the sketch below).
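To illustrate what "effort at save time" could mean in practice, here is a tiny, purely hypothetical sketch (the store and listener types are invented for the example; they are not the actual activity stream API). The idea is that each save appends an already formatted entry, so displaying the stream is a single ordered read with no extra queries:

import java.util.Date;
import java.util.List;

/** Hypothetical storage interface, for illustration only. */
interface ActivityEntryStore
{
    void append(String wiki, String page, String author, String message, Date date);

    /** Display side: one ordered read, no aggregation or post-processing. */
    List<String> findLatestMessages(String wiki, int count);
}

/** Hypothetical listener doing all the work when a document is saved. */
class DocumentSavedListenerSketch
{
    private final ActivityEntryStore store;

    DocumentSavedListenerSketch(ActivityEntryStore store)
    {
        this.store = store;
    }

    public void onDocumentSaved(String wiki, String page, String author, String comment)
    {
        // Build the display-ready message now, at save time.
        String message = author + " updated " + page
            + (comment == null || comment.length() == 0 ? "" : " (" + comment + ")");
        store.append(wiki, page, author, message, new Date());
    }
}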
Is there any documentation about the feature itself and about the logic?
Can we put somebody on writing down the logic and then discussing whether
it's the right thing to do?
I can help on this if I'm given some more information about why it was done
the way it's done now.
Ludovic
--
Ludovic Dubost
Founder and CEO
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost