Hi,
Does anyone know why we have the following LANG property defined in
our startup scripts:
#!/bin/sh
export LANG=fr_FR.ISO8859-1
JETTY_HOME=.
JETTY_PORT=8080
JAVA_OPTS=-Xmx300m
java $JAVA_OPTS -Dfile.encoding=iso-8859-1 -Djetty.port=$JETTY_PORT -Djetty.home=$JETTY_HOME -jar $JETTY_HOME/start.jar
I think this is bad as it's French and I don't think we should set the
locale for the user.
That said, if it's there it's probably because it was required in the
past, hence my question here.
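For what it's worth, here's a tiny snippet I'd run with and without that
export (and with/without -Dfile.encoding) to compare what the JVM actually
picks up; the class name is just an example:

import java.nio.charset.Charset;

public class EncodingCheck
{
    public static void main(String[] args)
    {
        // Shows what the JVM ends up with, depending on LANG and -Dfile.encoding
        System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
        System.out.println("default charset = " + Charset.defaultCharset());
        System.out.println("user.language   = " + System.getProperty("user.language"));
        System.out.println("user.country    = " + System.getProperty("user.country"));
    }
}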
To summarize: does anyone see any issue if I remove that "export LANG" line?
Thanks
-Vincent
Hi,
I'd like to attach an image to the current document but the image is
attached to another document. I couldn't find the answer in the FAQ.
It would be great if the {image:} radeox macro allowed for this.
Right now the only solution I've found (which is a bit ugly) is to use an
HTML <img> tag as in:
<img src="/xwiki/bin/download/XWiki/Toolbar/image.gif" />
Any other solution I would have missed?
Thanks
-Vincent
Hi,
Some time ago, there was a discussion about how the document history
could be stored in a better way.
Right now, the complete history is stored as one field in the xwikidoc
table. From my PoV, this has some major disadvantages:
- loading an older version requires parsing all the history -> memory
inefficiency
- as the documents grow older, loading a document will take a lot of time ->
time inefficiency
- queries on archives cannot return just one version, but they match the
whole document (somewhere in the history, there was a version containing
"search term")
The blocking issue with storing old versions in a different table was, at
that time, the fact that a document archive should contain all information
needed for completely restoring the document, like content, metadata,
objects.
I don't think that is actually an issue. We are already archiving complete
document versions, we're just joining all of them into one large string. Why
don't we keep archiving the complete version, but store one version per row?
So, the archive table should look like:
- document name
- version number
- language (for translations)
- content
- archived metadata (one field, or the same fields as in xwikidoc)
- archived objects (one field)
- attachment names and versions
It is not as flexible as storing each version like a normal document, with
separate objects and properties, but at least it provides a better storage
and retrieval mechanism, and it separates a bit the parts of a wiki
document: content, metadata, objects.
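To make it a bit more concrete, here is a rough sketch of what one archive
row could map to on the Java side (class and field names are only
illustrative):

// Sketch only: one row per archived version, instead of the whole history
// serialized into a single xwikidoc field. Names are illustrative.
public class DocumentArchiveEntry
{
    private String documentName;
    private String version;
    private String language;         // for translations
    private String content;
    private String archivedMetadata; // one field, or the same fields as in xwikidoc
    private String archivedObjects;  // one field
    private String attachmentInfo;   // attachment names and versions

    // plus the usual getters/setters needed by the persistence layer
}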
WDYT?
--
http://purl.org/net/sergiu
Hi,
I looked today over the XWikiMessageTool class, and I must say that I'm not
quite satisfied with it.
First, there was XWIKI-919, which I implemented. OK, I understand that files
stored on disk should be charset independent, so only ASCII characters are
supported by the ResourceBundle class (JVM). But when I can edit a wiki
document for storing bundles, I expect it to accept all the characters the
wiki supports (in my case, it was a UTF-8 instance). I had some trouble
fixing this, since the JavaDoc says that bundles accept only ASCII
characters, but that unicode references ( \u0123 ) are understood and
parsed. Maybe I did something wrong, but doing content.replaceAll("\u0139",
"\\u0139") resulted in the string u0139 being displayed in the page. So I
had to trick it into believing that the component bytes of the encoding are
ASCII characters and manually restore the multibyte chars.
Second, I don't like the fact that XWIKI-921 was not already implemented.
Third, I don't like the cache refresh mechanism. It retrieves the
XWikiProperties->documentBundles property for each request, and it retrieves
the bundle documents for every request to check whether they must be
refreshed or not. Why isn't the com.xpn.xwiki.notify package used? It allows
registering callback handlers for specific document changes. Here's how I
see it:
- at startup, register a handler for XWiki.XWikiPreferences (so that we know
when the documentBundles property might change).
- remember the list of document bundles, don't ask it for each request
- also register handlers for the current bundle documents and load the
strings from these documents
- when XWikiPreferences is changed, if the documentBundles property has also
changed, remove the unused bundles and build the new ones
- when a bundle document (or a translation of it) is changed, rebuild the
bundle for that document
This should speed up the code a bit; it makes use of a nice but mostly
unknown feature, it doesn't log an error on each request when a specified
document is not found in the wiki, and it doesn't require so many variables
(previousDates, docsToRefresh).
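To illustrate the idea (roughly, from memory of the notify API, so the exact
signatures probably need adjusting against the real code):

// Sketch only: the com.xpn.xwiki.notify signatures below are from memory
// and may need adjusting.
import com.xpn.xwiki.XWikiContext;
import com.xpn.xwiki.doc.XWikiDocument;
import com.xpn.xwiki.notify.DocChangeRule;
import com.xpn.xwiki.notify.XWikiDocChangeNotificationInterface;
import com.xpn.xwiki.notify.XWikiNotificationRule;

public class MessageBundleListener implements XWikiDocChangeNotificationInterface
{
    public void register(XWikiContext context)
    {
        // Done once at startup: watch the preferences document so that we know
        // when the documentBundles property might change.
        context.getWiki().getNotificationManager()
            .addNamedRule("XWiki.XWikiPreferences", new DocChangeRule(this));
    }

    public void notify(XWikiNotificationRule rule, XWikiDocument newdoc,
        XWikiDocument olddoc, int event, XWikiContext context)
    {
        // Rebuild the cached bundle list and/or the bundle for the changed
        // document here, instead of re-checking everything on every request.
    }
}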
Fourth, as I said above, if a document is specified in the documentBundles
property but does not exist in the wiki, an error is logged for each
$msg.get call, and there are a lot of such calls for each request.
Now, in my opinion this is a nice way for a newcomer to get into the XWiki
core, so does anybody want to implement the changes I mentioned? Also, this
is a good occasion to document the event notification mechanism, both in the
JavaDoc and on www.xwiki.org.
Regards,
Sergiu Dumitriu
--
http://purl.org/net/sergiu
There are many properties stored in VARCHAR(255) fields, which sometimes
isn't enough. This already causes
http://jira.xwiki.org/jira/browse/XWIKI-883 . Changing to a variable-length
field type doesn't have any major side effects, as far as I know.
As pro arguments:
1. The size of the database will not increase
2. There are already some fields stored as mediumtext and longblob, so it's
not something new in the database
3. The fewer limits there are, the better
4. Issues like XWIKI-883 will be fixed
... and maybe more
Is there something I'm missing that prevents this?
--
http://purl.org/net/sergiu
Hi,
Starting from today I'm going to spend some time every week on the V2
Architecture/Redesign. Today I'd like to start at the heart of the
topic with the domain model. Here's my proposal:
* Model classes are classes representing the XWiki model. The main
classes are:
- Farm
- Wiki
- Space
- Document
- XObject
- XClass
- (probably lots of others)
* As you can see I'd like to introduce the Space and Farm objects
* We create a model/ build module in trunks-devs/xwiki/model for
storing model classes
* Model classes cannot access classes outside of the model/ module. They
are meant to be used by components to provide services, but they shouldn't
provide services themselves. They can have methods to manipulate their
fields and they can call each other, but they cannot call anything outside
of the model.
* We use the org.xwiki.model package for storing Model classes
* These model classes are all user public (API)
WDYT?
Barring any negative feedback I'll start implementing this today and
in the coming days. One question that remains unanswered in my mind is how
we integrate these model classes back into the V1 architecture. I think we
should be able to retrofit the existing
classes to use these classes by composition. For example the Document
object could have 2 fields: one org.xwiki.model.Document and one
org.xwiki.model.Space. The XWiki object could have 2 fields: Wiki and
Farm, etc. I'm not sure how this would work out though. Any idea is
welcome.
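To illustrate what I mean by composition, here's a very rough sketch (class
and method names are only illustrative, not a proposal for the final API):

// Sketch only: the existing (V1) Document delegating to the new model
// classes by composition. Method names are just examples.
public class Document
{
    private org.xwiki.model.Document modelDocument;
    private org.xwiki.model.Space modelSpace;

    public Document(org.xwiki.model.Document modelDocument,
        org.xwiki.model.Space modelSpace)
    {
        this.modelDocument = modelDocument;
        this.modelSpace = modelSpace;
    }

    public String getName()
    {
        // Delegate to the model class instead of holding the data itself
        return this.modelDocument.getName();
    }

    public String getSpace()
    {
        return this.modelSpace.getName();
    }
}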
Thanks
-Vincent
Hi,
Here's a list Catalin and I came up with while brainstorming about
ideas around testing for XWiki. Please feel free to comment, argue,
suggest new ideas, etc.
0) Agree that we want to have our functional tests written in Java, vs.
using Selenium IDE only.
Pros:
- DSL -> easy to write and low maintenance (if a UI changes it's
fixed in one place)
- ability to write more complex tests with branch control (if, loop,
etc)
- ability to write tests easily (try creating a test that types some
content in our Tiny MCE editor and you'll understand that part ;-))
- run automatically as part of our build/CI
Cons:
- requires a minimal knowledge of Java. Thus it's going to prevent us
from having non-technical people write functional tests for us.
I'm currently 100% convinced that our Java choice is right but I know
Ludovic isn't fully convinced which is why I'm listing this point
here to see what others think.
1) Finish setting up our Java functional test framework.
Goals:
- make it as easy as possible to use.
- make it low maintenance (ie tests shouldn't fail often and if some
UI is changed it should be very easy to fix the tests)
Tasks:
- Continue improving our DSL for testing. Specifically add the
ability to test over several skins and create one DSL per skin (rough
sketch below).
- Add DSL methods for asserting the rendered HTML (cf Catalin's tests
in http://jira.xwiki.org/jira/browse/XWIKI-1207)
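To give an idea of what such a skin DSL could look like, here's a rough
sketch (the class name and locators are made up for the example, this is
not our actual test framework):

// Illustrative sketch only: a skin-specific DSL wrapping raw Selenium calls,
// so that a UI change only needs fixing in one place.
import com.thoughtworks.selenium.Selenium;

public class AlbatrossSkinDsl
{
    private final Selenium selenium;

    public AlbatrossSkinDsl(Selenium selenium)
    {
        this.selenium = selenium;
    }

    public void loginAsAdmin()
    {
        // If the login form markup changes, only this method needs updating.
        selenium.open("/xwiki/bin/login/XWiki/XWikiLogin");
        selenium.type("j_username", "Admin");
        selenium.type("j_password", "admin");
        selenium.click("//input[@type='submit']");
        selenium.waitForPageToLoad("10000");
    }
}

A test would then just call loginAsAdmin(), createPage(...), etc. and assert
on the rendered HTML, without knowing anything about the skin's markup.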
2) Add clover support in the build to assert Test Coverage (TC)
We need this to evaluate easily what we are not testing. It should
report on both unit and functional tests execution.
Note: This is easy to do with the nice Clover plugin for m2 ;-) (I'm
the author of that plugin)
3) Add functional tests to cover 30% of the code under test.
This is a first milestone that we should aim towards. For this we
need to have 2) to see where we stand. Our target goal should be in
the range of 60-70% TC. From my past experience this is a good
ballpark figure, and having this results in:
a) well tested code and confidence something new doesn't break what
exists. This is very interesting for us in the 1.x time frame as we
don't want to add regressions to 1.0. But also it gives us greater
confidence for redesigning internal parts of XWiki for implementing
the 2.0 architecture
b) the ability to release in production at each release (provided new
features get tests added when they're added)
4) Explore in-VM testing
The idea is to have several VMs (Virtual Machines) running different
OSes (Windows, Linux, Mac, for example) and a combination of databases
(Oracle, MySQL, PostgreSQL). You run the XWiki Maven build in them so
that the tests get executed inside the VMs. This allows 3 things:
a) test in different environments. For example this tests our start/
stop scripts and our database setups
b) this allows more complex tests to be written, like virtual wikis
c) this allows us to implement hard-to-write tests like the "add
attachment" one. For this you need the attachment to be on your file
system. With a VM you control the full setup of the VM and you can
restore the VM at each run
5) Create a test plan listing the tests to be written
We need this written in JIRA. This goes with 3).
6) Explore using Reality QA
This is a nice tool/service created by Patrick Lightbody (who's a
friend).
See http://www.realityqa.com/screencasts/realitycheck_tutorial/
RealityCheck.htm to see it in action.
Right now the main issue is that it won't run selenium tests written
in Java but Patrick says it's coming:
http://community.realityqa.com/thread.jspa?threadID=1010
The other thing that could be nice is the ability to upload our app
inside their VM:
http://community.realityqa.com/thread/1011 (but we can host our VMs
to start with).
What Reality QA buys us is the ability to run all our tests on all
platforms/browsers.
7) Improve the maven build
For example:
- ability to run the app on a defined port so that it doesn't
interfere with an already running xwiki instance
- stop the container when a test fails
8) Finish setting up our CI tool
Tests are pretty useless without CI. I'm working on setting up
TeamCity. I'm having some issues that I'm debugging with JetBrains
(at first glance they seem to be caused by the ObjectWeb SVN).
Catalin will work on some of these points as part of GSoC
(http://www.xwiki.org/xwiki/bin/view/GoogleSummerOfCode/FunctionalTestSuite).
We haven't decided yet which points he'll work on, but at least he'll
do/help on the following: 1, 3 and 5, with 3 and 5 being the most
important items.
WDYT about all these?
Thanks
-Vincent
Hi!
Tag clouds (http://en.wikipedia.org/wiki/Tag_cloud) seem to be all the rage
these days, so I made a simple implementation using Main.Tags in XWiki.
The styling isn't too sexy, but that should be easy to configure.
Here is the code for use in a panel or similar place:
#panelheader('Tag cloud')
#set($query = "select elements(prop.list) from BaseObject as obj,
DBStringListProperty as prop where obj.className='XWiki.TagClass' and
obj.id=prop.id.id and prop.id.name='tags'")
#set( $allTags = $xwiki.sort($xwiki.search($query)))
#set( $tagsWithCount = $xwiki.metawiki.getTagsWithCount($allTags))
#set($query = "select distinct elements(prop.list) from BaseObject as
obj, DBStringListProperty as prop where obj.className='XWiki.TagClass'
and obj.id=prop.id.id and prop.id.name='tags'")
#set( $tagsDistinct = $xwiki.sort($xwiki.search($query)))
#foreach($tag in $tagsDistinct)
<span style="#getStyle($tagsWithCount.get($tag))"><a
href="$xwiki.getURL("Main.Tags", "view", "tag=$tag")">$tag</a></span>
#end
#panelfooter()
#macro( getStyle $count )
#if ($count < 2) font-size: 0.80em;
#elseif ($count < 3) font-size: 1.00em;
#elseif ($count < 4) font-size: 1.20em;
#elseif ($count < 5) font-size: 1.40em;
#elseif ($count < 6) font-size: 1.60em;
#elseif ($count < 7) font-size: 1.80em;
#elseif ($count < 8) font-size: 2.00em;
#else font-size: 2.50em;
#end
#end
And here's the Java code invoked:
import java.util.ArrayList;
import java.util.HashMap;

/**
 * Counts how many times each tag appears in the (sorted) list of all tags,
 * as returned by the first (non-distinct) query above.
 */
public static HashMap<String, Integer> getTagsWithCount(ArrayList<String> allTags)
{
    HashMap<String, Integer> tagsWithCount = new HashMap<String, Integer>();
    String previousTag = null;
    int count = 1;
    for (String tag : allTags) {
        // The list is sorted, so equal tags are adjacent.
        if (tag.equals(previousTag)) {
            count++;
        } else {
            count = 1;
        }
        // Overwrites the previous entry, so the last put holds the total count.
        tagsWithCount.put(tag, Integer.valueOf(count));
        previousTag = tag;
    }
    return tagsWithCount;
}
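For example, given the sorted (non-distinct) tag list, it behaves like this
(assuming the method above is in scope, plus a java.util.Arrays import):

ArrayList<String> allTags = new ArrayList<String>(
    Arrays.asList("java", "java", "wiki", "xwiki", "xwiki", "xwiki"));
// Prints {java=2, wiki=1, xwiki=3} (iteration order depends on the HashMap)
System.out.println(getTagsWithCount(allTags));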
Is this anything you would want to include in xwiki, or is it too
trivial to bother with? See the attached file for a sample screenshot.
best regards :-)
Thomas Drevon
Hi committers and everyone,
Jean-Vincent is starting a new XWiki project called XWiki Enterprise
Manager (XEM). The goal is to create an application to administer/manage an
XWiki farm (i.e. virtual wikis).
This was initially a project we've done internally for customers (but
under an OSS license). We've decided to make it public so that
everyone can benefit from it. It's pretty raw at the moment and we're
starting from scratch. You're all most welcome to participate and
contribute to it.
Jean-Vincent is going to send a more detailed email very soon about it.
We're creating a separate JIRA project for that project. We'll create
some space on xwiki.org for it too a bit later on.
Note that Nicolas Fournier (who is new to this community) has
expressed interest in helping us build XEM and he'll join Jean-
Vincent real soon.
Thanks
-Vincent