Hi devs,
I'd like to propose adding the Ivy JAR to our Platform/XE distributions.
The reason is simple: it makes it possible to use the Groovy @Grab directive, which makes it really easy to extend XWiki using Groovy scripts.
Several advantages:
- not having to manually hunt for transitive dependencies
- not needing to stop/restart XWiki (which is what you currently have to do if your Groovy script needs to use third-party JARs)
Here's a good usage example:
{{cache}}
{{groovy}}
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.5.1')
import groovyx.net.http.RESTClient
github = new RESTClient( 'http://github.com/api/v2/json/' )
println "|=Project|=Description|=Use Wiki?|=Use Issues?"
def response = github.get( path : 'repos/show/xwiki' )
response.data.repositories.each() { repo ->
println "|[[${repo.name}>>http://github.com/xwiki/${repo.name}]]|${repo.description}|${repo.has_wiki}…"
}
{{/groovy}}
{{/cache}}
If you try to do this without the @Grab directive, you'll find yourself hunting down more than 30 JARs to put in your WEB-INF/lib directory, whereas here it's a single directive line. It's really powerful.
Here's my +1 to make it easy for users to use Groovy scripts.
Thanks
-Vincent
Hi devs,
Since 1) we are currently testing on only one browser (FF 3.6 right now), 2) FF 6 is now the mainstream version, and 3) we have an issue with the latest Selenium 2 and FF 3.7 (see http://code.google.com/p/selenium/issues/detail?id=2320), I propose to update our agents to use FF 6.0.
I'm going to proceed so shout if you don't agree.
Thanks
-Vincent
Hi devs,
It seems someone has been adding components to the XWiki Platform JIRA project (such as "PDF Export", "Plugin API", "HTML Export", etc.).
I thought we had agreed that the components listed there would only be modules. This is how Thomas and I configured it.
Unfortunately JIRA doesn't have an activity stream, so we can't tell who did this :)
Could that person come forward so that we can have a discussion?
Thanks
-Vincent
Hi devs,
We need volunteers to:
- upgrade xwiki.org from 2.6 to 3.1 (http://www.xwiki.org/xwiki/bin/view/XWiki/Migrations)
- upgrade myxwiki.org from 3.1 to 3.2M2 (to test the 3.2 upcoming release) (http://myxwiki.org/xwiki/bin/view/XWiki/Upgrading)
I have done it a lot recently and I think it's fair that it's not always the same person who does it.
So is there someone who has never done it or hasn't done it in a while who's willing to do one of these?
(Note that I can be available to do it with this person to explain things or be there in case of an issue)
Thanks
-Vincent
Hi devs,
It would be good to release XWiki Rendering to Maven Central since it's a standalone library. We've already had requests from people who are using it but cannot release their own artifacts to Maven Central because the XWiki Rendering JARs are not there.
I'm thus proposing to fix this by releasing XWiki Commons and XWiki Rendering to Maven Central.
Advantages:
- technical marketing
- makes it easier for external people to use XWiki Rendering (no need to modify their settings.xml to add the XWiki remote repository)
- makes it possible for people depending on XWiki Commons/Rendering to publish their own artifacts to Central
Here's my +1
Thanks
-Vincent
Hello fellow developers,
I had a look at the indexes that we use in Curriki and one of them looks suspicious to me.
As expected, ids and names are index keys for most tables; that is correct.
However, several of the value tables have a suspect index: the index on XWL_VALUE.
Do I understand correctly that this means a B-tree (i.e. a RAM representation) is built over all the strings of the wiki, e.g. all the documents' page content?
Making an index for this seems wrong.
But http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Database+Administration indeed recommends that:
> create index xwl_value on xwikilongs (xwl_value);
> create index xwi_value on xwikiintegers (xwi_value);
> create index xws_value on xwikistrings (xws_value);
> create index xwl_value on xwikilargestrings (xwl_value(50));
So... can someone explain to me why such indices are created?
Would it be to support the MySQL-based user search?
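For what it's worth, here is a tiny sketch of what such a value index buys. It uses Python's sqlite3 purely as a stand-in for MySQL (the table and column names follow the schema quoted below, but this is an illustration, not Curriki code): an equality lookup by property value, such as the ones a user search would generate, can use the index instead of scanning every row.

```python
import sqlite3

# sqlite3 stands in for MySQL here; xwikistrings/xws_value mirror the
# schema quoted in this thread, but the demo itself is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE xwikistrings (xws_id INTEGER, xws_name TEXT, xws_value TEXT)"
)
conn.execute("CREATE INDEX xws_value_idx ON xwikistrings (xws_value)")
conn.executemany(
    "INSERT INTO xwikistrings VALUES (?, ?, ?)",
    [(i, "prop%d" % i, "value%d" % i) for i in range(1000)],
)

# A lookup by property value (what a user search by e.g. last name does)
# is answered through the index rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT xws_id FROM xwikistrings WHERE xws_value = ?",
    ("value42",),
).fetchall()
print(plan)  # the plan row mentions xws_value_idx
```

The flip side, and I think the point of the question, is that the index itself is a B-tree built over every stored value, which for large text columns can cost a lot of memory and disk.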
thanks in advance
Paul
> mysql> show index from xwikilargestrings;
> +-------------------+------------+--------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table             | Non_unique | Key_name           | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------------------+------------+--------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | xwikilargestrings |          0 | PRIMARY            |            1 | XWL_ID      | A         |      468592 |     NULL | NULL   |      | BTREE      |         |
> | xwikilargestrings |          0 | PRIMARY            |            2 | XWL_NAME    | A         |      937185 |     NULL | NULL   |      | BTREE      |         |
> | xwikilargestrings |          1 | idl_value          |            1 | XWL_VALUE   | A         |      312395 |       50 | NULL   | YES  | BTREE      |         |
> | xwikilargestrings |          1 | FK6661970F283EE295 |            1 | XWL_ID      | A         |      468592 |     NULL | NULL   |      | BTREE      |         |
> | xwikilargestrings |          1 | FK6661970F283EE295 |            2 | XWL_NAME    | A         |      937185 |     NULL | NULL   |      | BTREE      |         |
> +-------------------+------------+--------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> 5 rows in set (0.00 sec)
Dear all,
I am working on extending the putPage method to add 'copy' and 'rename' support.
uriInfo is used to get the additional query parameter, e.g.,
(1) ?copyFrom=aWiki:aSpace.oldPage
(2) ?moveFrom=aWiki:aSpace.oldPage
Following Eduard's and Sergiu's emails, XWiki.copyDocument(sourceDocumentReference, targetDocumentReference, context) and XWikiDocument.rename(newDocumentReference, context) are used to implement (1) and (2), respectively.
Now I want to confirm the behaviors of copyFrom and moveFrom.
a) If the new page resource does not exist:
1. copyFrom returns the newly created resource, "copied from xxx" is recorded in the new page's history, and the source page remains untouched.
2. moveFrom also returns the newly created page, and the source page is deleted.
b) If the new page resource already exists, both copyFrom and moveFrom do nothing; the existing page and the source page remain untouched.
Are these the desired behaviors for the copyFrom and moveFrom methods?
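To make sure we're all reading a) and b) the same way, here is a tiny Python model of those semantics (a hypothetical sketch for discussion, not the actual XWiki REST implementation; the page-reference strings follow the examples above):

```python
def put_page(pages, target, copy_from=None, move_from=None):
    """Model of the proposed copyFrom/moveFrom semantics.

    'pages' maps a page reference such as 'aWiki:aSpace.Page' to its
    content. Illustrative sketch only, not XWiki code.
    """
    source = copy_from or move_from
    if target in pages:
        # b) the target already exists: both copyFrom and moveFrom do
        # nothing; existing page and source page remain untouched.
        return None
    # a) the target does not exist: create it from the source's content.
    pages[target] = pages[source]
    if move_from:
        # moveFrom additionally deletes the source page.
        del pages[source]
    return pages[target]

pages = {"aWiki:aSpace.oldPage": "some content"}
put_page(pages, "aWiki:aSpace.newPage", copy_from="aWiki:aSpace.oldPage")
print(sorted(pages))  # after a copy, both old and new pages exist
```

If this matches the intent, the only open question I see is whether the "do nothing" branch in b) should also signal a conflict to the REST client (e.g. an error status) rather than succeeding silently.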
Best regards
Jun Han
Hi devs,
Since we're having a hard time stabilizing our build, and since there's always something that can be done to improve it, I'm proposing to introduce the notion of a Build Day as an experiment.
The idea is that on this day all developers will focus only on improving the Build. More specifically:
- working on making all tests pass
- fixing flickering tests, if any
And if there are no more test issues, then the following items can be considered too (as a secondary objective only, since the main goal is fixing the build):
- upgrade versions of maven plugins
- make the build run faster
- look for TODO in the pom.xml and fix them if possible
- etc
WDYT?
What about doing our first Build Day on Thursday the 25th of August 2011?
Thanks
-Vincent
PS: This is NOT replacing the Build Manager role. It's an addition to it.