Hi extended wiki coders,
What about storing the webapp's i18n data (currently stored in
ApplicationResources_xx.properties files) in a Document? That way, the
i18n resources too can be edited collaboratively. This is especially
useful when you develop an application on top of XWiki that requires
many additional i18n strings.
I've started to implement this in the XWikiMessageTool class:
http://www.xwiki.org/xwiki/bin/view/Dev/XWikiMessageTool
I'm currently using this modified XWikiMessageTool for managing the i18n
resources of my site and I'm quite happy with it so far. I've added the
"{table}" tag at the beginning and at the end of the list of entries, so
that the resources are formatted nicely in the wiki. This has no side
effects on the Properties map.
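To give an idea, here is a minimal sketch of how the content of such a
resource document could be turned into a Properties map while ignoring
the {table} markers (the class and method names are only illustrative,
this is not the actual XWikiMessageTool code):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative sketch: load key=value pairs from the content of a wiki
// document, skipping the {table} markers that are only there for rendering.
public class DocumentResourceLoader {

    public Properties loadBundle(String documentContent) throws IOException {
        StringBuilder filtered = new StringBuilder();
        for (String line : documentContent.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.equals("{table}") || trimmed.length() == 0) {
                continue; // formatting markers and blank lines are dropped
            }
            filtered.append(trimmed).append("\n");
        }
        Properties props = new Properties();
        // Properties.load expects ISO-8859-1, like the original .properties files.
        props.load(new ByteArrayInputStream(filtered.toString().getBytes("ISO-8859-1")));
        return props;
    }
}

In the real class the content comes from the wiki document, of course;
the point is simply that the {table} lines never reach the Properties map.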
Stéphane
Hi,
I am one of the ObjectWeb webmasters. For your information, it seems
that the xwiki svnroot repository is currently corrupted for some reason:
> svn co svn://svn.forge.objectweb.org/svnroot/xwiki/trunk
svn: Berkeley DB error while opening 'nodes' table for filesystem
/svnroot/xwiki/db:
Cannot allocate memory
I am currently trying to recover the repository using "svnadmin recover" and
will try to modify the default Berkeley DB config (maybe it would be better
to switch to the fsfs backend?).
See: http://subversion.tigris.org/faq.html#bdb-cannot-allocate-memory
In any case, I have a backup of the whole repository up to revision 485.
Sorry for the inconvenience,
Regards,
--
Mathieu Peltier
ObjectWeb Consortium - INRIA
E-mail: mathieu.peltier(a)inrialpes.fr
Phone: +33 4 76 61 55 25
JabberID: mpel(a)jabber.fr
One thing I'm planning to do is to allow special characters in document
names. I've had too many problems with not accepting accents in wiki
page names, even though European accents are automatically converted to
non-accented characters when used in a link. The problem is that this
cannot be made general, because some languages have special characters
that do not have a "non-special" fallback.
So I'm thinking of letting any character be used in wiki page names,
using URL encoding of course.
Doing this is not too much of a problem. The other question is what to
do with white space.
Currently if you type
[new page]
the link created is
newpage
I've had some complaints about this, because page search results show a
concatenated text string instead of a nice page name.
Another problem is that it makes it very difficult to search
automatically for backlinks in the database, since the white space could
be anywhere in the stored link.
Backlinks would be useful for display, and for moving pages and
automatically updating links.
What I propose is to now default to
new%20page
for this link, with one exception: when the page 'newpage' already
exists (this treatment could be an option). This way there would be
backwards compatibility.
Combined with the change for non-alphanumerical characters, this would
free up page naming.
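To make the proposal concrete, here is a rough sketch of the intended
conversion (the PageLookup interface is just a stand-in for the real
"does this page already exist" check; none of this is actual XWiki code):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Rough sketch of the proposed link-to-page-name behaviour.
public class PageNameEncoder {

    // Hypothetical hook standing in for a real page existence lookup.
    public interface PageLookup {
        boolean exists(String pageName);
    }

    private final PageLookup lookup;

    public PageNameEncoder(PageLookup lookup) {
        this.lookup = lookup;
    }

    public String toPageName(String linkText) throws UnsupportedEncodingException {
        // Backwards compatibility: if the old-style concatenated page
        // ("newpage") already exists, keep pointing to it.
        String legacyName = linkText.replaceAll("\\s+", "");
        if (lookup.exists(legacyName)) {
            return legacyName;
        }
        // Otherwise keep the original characters (accents included) and
        // rely on URL encoding, so "new page" becomes "new%20page".
        String encoded = URLEncoder.encode(linkText, "UTF-8");
        // URLEncoder uses '+' for spaces (form encoding); links want "%20".
        return encoded.replace("+", "%20");
    }
}

So [new page] would point to new%20page, unless a page named newpage
already exists, in which case the old behaviour is kept.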
What do you think ?
Ludovic
--
Ludovic Dubost
XPertNet: http://www.xpertnet.fr/
Blog: http://www.ludovic.org/blog/
XWiki: http://www.xwiki.com
Skype: ldubost AIM: nvludo Yahoo: ludovic
Hi,
as I'm working on the TOC stuff which will allow automatic numbering,
I would like to discuss an aspect that influences how the numbering is
realized. In particular, I'm interested in knowing what typical document
structure you guys use and how you apply headings to document sections.
What I have seen on xwiki.org is a mixture of patterns like:
1 title
1.1 chapter1
1.1.1 subchapter1
1.1 chapter2
or
1.1 title
1.1.1 chapter1
1.1.1.1 subchapter1
1.1.1 chapter2
or
1 chapter1
1.1 subchapter1
1 chapter2
or
1.1 chapter1
1.1.1 subchapter1
1.1 chapter2
In our team we used to use this pattern:
1 title
1.1 chapter1
1.1.1 subchapter1
1.1 chapter2
The problem is that the generated numbered TOC would then look like:
1 title
1.1 chapter1
1.1.1 subchapter1
1.2 chapter2
which is not consistent with our company's standard Word document
structure (a common document structure, IMHO), which has this natural
TOC:
title
1. chapter1
1.1 subchapter1
2. chapter2
...
My question is whether you think it would be a good idea to treat the
document title as a specific part of the document, introduce a new tag
like {title} or "0 ...", and give it its own CSS style. In Confluence the
document title is equal to the physical wiki document name and is
rendered separately at the top of the page.
Another possibility, which would solve the TOC generation issue, is to
add an option to the TOC tag specifying the initial heading level it
should start the generation at.
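To illustrate that option, here is a minimal numbering sketch (purely
illustrative, not the actual macro code): with start=2 the single
"1 title" heading is skipped and the "1.1" headings become top-level
chapters.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of numbered TOC generation with a configurable
// initial heading level.
public class TocNumberer {

    public List<String> number(List<Integer> headingLevels, int start) {
        int[] counters = new int[10];
        List<String> result = new ArrayList<String>();
        for (int level : headingLevels) {
            if (level < start) {
                continue; // headings above the start level (e.g. the title) are not numbered
            }
            int depth = level - start; // depth relative to the start level
            counters[depth]++;
            // Reset deeper counters when a higher-level heading appears.
            for (int i = depth + 1; i < counters.length; i++) {
                counters[i] = 0;
            }
            StringBuilder number = new StringBuilder();
            for (int i = 0; i <= depth; i++) {
                if (i > 0) {
                    number.append('.');
                }
                number.append(counters[i]);
            }
            result.add(number.toString());
        }
        return result;
    }
}

For the pattern above (1 title, 1.1 chapter1, 1.1.1 subchapter1,
1.1 chapter2) and start=2 this produces 1, 1.1, 2, which matches the
Word-style TOC.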
I would personally prefer to treat the document title specifically, as
I consider the TOC option mentioned above a bit of a hack, IMHO.
Jiri.
Luis,
You should try using the SVN move command instead of deleting and adding.
That will preserve the files' history, and it'll be lighter on the commit
diffs too.
Thanks
-Vincent
> -----Original Message-----
> From: Luis Arias [mailto:kaaloo@users.forge.objectweb.org]
> Sent: mercredi 20 avril 2005 22:21
> To: xwiki-commits(a)objectweb.org
> Subject: [xwiki-commits] r469 - in xwiki/trunk/src: . main main/com
> main/com/xpn main/com/xpn/xwiki main/com/xpn/xwiki/api
> main/com/xpn/xwiki/atom main/com/xpn/xwiki/atom/lifeblog
> main/com/xpn/xwiki/cache main/com/xpn/xwiki/cache/api
> main/com/xpn/xwiki/cache/impl
>
> Author: kaaloo
> Date: 2005-04-20 22:20:54 +0200 (Wed, 20 Apr 2005)
> New Revision: 469
>
> Added:
> xwiki/trunk/src/main/
> xwiki/trunk/src/main/com/
> xwiki/trunk/src/main/com/xpn/
> xwiki/trunk/src/main/com/xpn/xwiki/
> xwiki/trunk/src/main/com/xpn/xwiki/XWiki.java
[snip]
Hello all,
I've been busy this afternoon doing some housecleaning in the code base:
- separate folders for
src/main - core java
src/test - unit tests
src/test-cactus - integration tests
I've been building against the latest stable jars I could find,
including hibernate 3.0.1 for which I had to change a lot of imports
and some stuff in the configuration files.
I just committed the version-numbered jars because we are going to be
playing with Maven 2 tomorrow night at the OSSGTP, and I figured it
might be easier to write the dependencies that way.
I also ported the RSS macro to Rome 0.6 because it broke on the latest
commons-digester. Rome sure is a sweet API ! We're going to be doing
some other interesting things with Rome.
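For those who haven't looked at it yet, reading a feed is roughly this
simple (written from memory of the 0.6 API, so the exact class names may
be slightly off):

import java.net.URL;
import java.util.Iterator;

import com.sun.syndication.feed.synd.SyndEntry;
import com.sun.syndication.feed.synd.SyndFeed;
import com.sun.syndication.io.SyndFeedInput;
import com.sun.syndication.io.XmlReader;

// Quick sketch: fetch a feed and print its entries; error handling omitted.
public class FeedDump {

    public static void main(String[] args) throws Exception {
        URL feedUrl = new URL(args[0]);
        SyndFeed feed = new SyndFeedInput().build(new XmlReader(feedUrl));
        System.out.println("Feed title: " + feed.getTitle());
        for (Iterator it = feed.getEntries().iterator(); it.hasNext();) {
            SyndEntry entry = (SyndEntry) it.next();
            System.out.println(" - " + entry.getTitle() + " (" + entry.getLink() + ")");
        }
    }
}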
So all of this stuff is still on my machine while I keep running the
unit tests (these are actually quite long, because a lot of them are
really integration tests and bring up hibernate, with showSql = true
:( , etc.). But since cruise control is coming back up, I'm going to
make sure the build runs and that ant release works.
I've been working on tests for my lifeblog stuff, and I definitely
like running them one at a time directly in eclipse, even the server
tests through Jetty. So I have to see how to refactor that so the
other tests benefit, and so that it all stays ant and IDE compatible. I
bet you have a lot of ideas on that one, Vincent! ;)
My concern is what to do about your diffs and how changes to the src
hierarchy could affect your current work. I hope this doesn't cause
you too much pain. Please let me know if you have any ideas on this
issue, because I'm a bit stumped. Actually, maybe I can just tell you
when I start to commit and when it's done, and then you can move your
sources to the right place (easy to do) and update.
Cheers,
--
Luis Arias
http://www.xwiki.com
http://www.innover-entreprendre.net
+33 6 14 20 87 93 mobile
Hi Bruce,
Nice to meet you.
The first test you need to do is to start XWiki with an empty database,
connected to your Firebird server.
In xwiki.cfg you should set "xwiki.superadminpassword=toto"; this will
activate the "superadmin" user with password "toto", which will be able
to set up rights.
From there you should be able to access an empty wiki and create some
pages to see if everything works.
Features to test are:
1/ Pages
2/ Attachments
3/ Versions
One of the key features needed is support for large text fields
(without size limits).
Oracle couldn't do this (except in the latest versions) and required
some specific hibernate hacks.
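As a quick smoke test for that particular point, something along these
lines could be run against Firebird through plain JDBC before touching
the hibernate mappings (the Jaybird driver class and URL format are
written from memory and the table name is made up, so adjust them to
your setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Rough smoke test: can Firebird store and return a very large text value?
public class LargeTextCheck {

    public static void main(String[] args) throws Exception {
        Class.forName("org.firebirdsql.jdbc.FBDriver");
        Connection conn = DriverManager.getConnection(
            "jdbc:firebirdsql:localhost/3050:/var/db/xwikitest.fdb", "sysdba", "masterkey");
        try {
            conn.createStatement().execute(
                "CREATE TABLE bigtext_test (id INTEGER NOT NULL PRIMARY KEY, body BLOB SUB_TYPE TEXT)");

            // Build a string well past the usual VARCHAR limits.
            StringBuffer big = new StringBuffer();
            for (int i = 0; i < 200000; i++) {
                big.append('x');
            }

            PreparedStatement insert =
                conn.prepareStatement("INSERT INTO bigtext_test (id, body) VALUES (?, ?)");
            insert.setInt(1, 1);
            insert.setString(2, big.toString());
            insert.executeUpdate();

            ResultSet rs = conn.createStatement()
                .executeQuery("SELECT body FROM bigtext_test WHERE id = 1");
            rs.next();
            System.out.println("Stored length: " + rs.getString(1).length());
        } finally {
            conn.close();
        }
    }
}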
Converting the default database to Firebird might be more complex.
You can download the default DB on SourceForge; it is a zipped SQL file
exported from MySQL.
However, I think it is quite full of MySQL-specific SQL.
I believe MySQL 4.1 improved the export with a "compatible" mode, so
you might need to import/export in MySQL to create a more portable
file.
Ludovic
Bruce Bacheller a écrit :
>Thanks for the Quick response.
>
>I am on the Firebird Foundation (FirebirdSQL.org) and will get you any support needed from the FirebirdSQL side to get it working.
>
>I will host it on my site which is FirebirdXML.info or Infodes.org.
>That is Linux/Apache/Tomcat.
>
>We are hosting our user conference in Germany next month, on May 15th, and I would like to have it operational for that event. I will make sure that XWiki gets good exposure to the Firebird community as well.
>
>We are pretty close to Oracle in syntax.
>
>Can you send me a copy of the SQL code in MySQL (table creation) so I can review it and set up a test?
>
>Looks like XWiki has developed A VERY NICE piece of Open Source Software.
>
>Regards,
>Bruce
>
>-----Original Message-----
>From: Ludovic Dubost [mailto:ludovic@xwiki.org]
>Sent: Wednesday, April 20, 2005 3:54 PM
>To: Bruce Bacheller; xwiki-dev(a)objectweb.org
>Subject: Re: Database Support
>
>
>Hi,
>
>We haven't done any tests, but XWiki uses hibernate and JDBC, so there is
>a good chance that it works.
>You will need to try it. We are open to requests in case of problems.
>
>Ludovic
>
>Bruce Bacheller a écrit :
>
>
>
>>Do you have plans to Support Firebird the Open Source DBMS ?
>>
>>
>>
>>
>>
>
>
>
>
--
Ludovic Dubost
XPertNet: http://www.xpertnet.fr/
Blog: http://www.ludovic.org/blog/
XWiki: http://www.xwiki.com
Skype: ldubost AIM: nvludo Yahoo: ludovic
Hi,
We haven't done any tests, but XWiki uses hibernate and JDBC, so there is
a good chance that it works.
You will need to try it. We are open to requests in case of problems.
Ludovic
Bruce Bacheller a écrit :
> Do you have plans to Support Firebird the Open Source DBMS ?
>
>
>
--
Ludovic Dubost
XPertNet: http://www.xpertnet.fr/
Blog: http://www.ludovic.org/blog/
XWiki: http://www.xwiki.com
Skype: ldubost AIM: nvludo Yahoo: ludovic
Hi All,
I'm currently setting up CruiseControl 2.2.1 to run a continuous
build of the latest SVN source.
It will compile XWiki and run the tests every time there are changes in
SVN, including the cactus-based tests.
Results will be published on a web site, http://build.xpertnet.biz, and a
notification email will be sent to either the dev list or the commits list.
Any comments on the choice: dev or commits list?
I might automatically publish the artifacts to one of the xwiki.com
production machines.
The build machine is running Mandrake Linux.
Ludovic
--
Ludovic Dubost
XPertNet: http://www.xpertnet.fr/
Blog: http://www.ludovic.org/blog/
XWiki: http://www.xwiki.com
Skype: ldubost AIM: nvludo Yahoo: ludovic
Hi Ludovic
Thanks for the help. I can now compile the svn source.
On 4/19/05, Ludovic Dubost <ludovic(a)pobox.com> wrote:
>
> Hi Patrick,
>
> I think it is just that you need junit.jar in your ant installation.
>
> Ludovic
>
I now have the following questions:
1. The version number is now 0.9.500, smaller than that of the
previously downloadable war file (0.9.543). Is this correct?
2. I have attempted to save some Chinese characters on a new page and
then tried to edit it again using the default (HTML) editor. Both the
HTML page source and the editable version still show &#22823; etc. for a
Chinese character, just like the previous version (0.9.543).
Besides, the page source shows the following:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta name="language" content="en" />
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
I suppose the XML/HTML file is encoded in ISO-8859-1 instead of
UTF-8. Is there anywhere I could change the default? It is the same no
matter whether I choose en or zh as the page/document language.
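For comparison, the generic servlet-level way I know to force UTF-8 is a
character encoding filter along these lines (generic servlet code, not
anything XWiki specific); I mention it only to show the kind of switch I
am looking for:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Generic filter that forces UTF-8 on every request and response.
public class Utf8Filter implements Filter {

    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
        // Decode submitted form data as UTF-8 instead of the platform default.
        request.setCharacterEncoding("UTF-8");
        // Default the response to UTF-8 (the application may still override this).
        response.setContentType("text/html; charset=UTF-8");
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}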
3. I have also tried to export this to PDF and the result surprised me.
Please check the attachment. The three encoded Chinese characters have
been truncated to only one code, with some leftovers of the first and
last characters. I will take a look at SVN revision 452 (the UTF-8
change set) soon and try to find the problem.
Thanks in advance.
Patrick Lee