Hi all,
While implementing XWIKI-1264
<http://jira.xwiki.org/jira/browse/XWIKI-1264> (Transform the AllDocs
page into an index with A-Z links to show docs by letter), I ran into the
following problem:
I wanted to show letter links only for the relevant letters (so that users
are not faced with empty results for letters that have no related documents).
But how do we search the wiki for the distinct first letters?
1- We use $xwiki.search("select distinct substring(doc.name, 1, 1) from
XWikiDocument as doc where 1=1"),
but then the document depends on a privileged API: if someone without
programming rights edits and saves it, it breaks.
2- We implement a method like "getDocumentsIndex()" in the XWiki API (see
the sketch after this list),
but as Vincent says, it's not really generic; it applies only to this case.
3- We call a Groovy script that holds the privileged API calls (possibly
all of the wiki's privileged calls),
but that's not really clean either, and it's too bad that we have to
maintain two documents for a small script like the AllDocs one.
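To make option 2 concrete, here is a rough Java sketch of what such a
method could look like on the com.xpn.xwiki.api.XWiki class; the method
name comes from the proposal above, the rest is only a guess on my side:

    public java.util.List getDocumentsIndex() throws XWikiException
    {
        // Runs inside the API with its own rights, so the calling page
        // would not need programming rights; the HQL is essentially the
        // one from option 1.
        return this.context.getWiki().search(
            "select distinct substring(doc.name, 1, 1) from XWikiDocument as doc",
            this.context);
    }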
As for now, we found none of these options good enough, so all the letters
will be proposed... with the possibility that users face an empty result
page.
What do you think? Are there any other options?
I think this problem leads to the more generic problem of how to handle
privileged calls in the default wiki.
Jerome.
I have searched the xwiki site, the list history, and the web, and
subscribed to xwiki-users, before asking this:
I am running XWiki standalone; the file is wiki-1.1-milestone-1.tar.gz, and
it is described as "Standalone installation including a Jetty container and
an HSQLDB database all set up. Should be used by first-time users".
Please tell me where the files (or directories) are that I need to save to
have a full backup: both the database and the configuration, but not the
libs. It is running in /home/alain/dev/java/xwiki.
I compared the original .tar.gz with my modified xwiki; apart from the logs,
only these files are different:
db/xwiki_db.script
db/xwiki_db.properties (only the date changed)
temp/imageCache/ (empty directory, belongs to root)
db/xwiki_db.script looks like an SQL dump, and my modifications are
included. Could it be that a database dump is created every time I modify
something? Anyway, the database appears *not* to be inside the xwiki
directory...
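From the comparison above (only db/ really changes), my untested guess is
that a backup could be as small as this, but I'd like confirmation:

    # guess: save only the HSQLDB files, ideally after stopping the wiki
    # so that HSQLDB has flushed everything to db/xwiki_db.script
    cd /home/alain/dev/java/xwiki
    tar czf /tmp/xwiki-backup-$(date +%Y%m%d).tar.gz db/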
PLEASE HELP, I want to make a full backup, but without all the libs and
fixed stuff (the whole directory is 55MB!).
Thanks for any help,
Alain
Cheers guys!
While researching the way Google does things with their GData protocol,
I've come across some decisions that have to be made (more like priorities:
what should work first and what can be improved later).
First off: means of authentication. In order to use GData, you need a
Google account (a Gmail account counts as one). As an alternative, you can,
upon invitation, view a document without having to log in (as stated here:
<http://docs.google.com/support/bin/answer.py?answer=47597&topic=9378>).
That would be a good way of interacting with visitors who don't have
editing permissions (as of yet I've not found a way to send invitations
using the API, but there should be one... I hope). Then comes the account
part:
- A wiki can request that each editor use his own account to edit.
  That would mean:
  + being able to collaboratively edit documents (the service allows
    multi-user editing) <still requires an invitation>
  - requiring a download-to-my-account / edit / send-back-to-the-wiki kind
    of workflow (much like download to desktop, edit, upload). !! I might
    be wrong on this; it needs further research, since there is the
    collaborative side of things !!
  (more +/- to come; if anybody can think of other things to list here,
  please do)
- Any one wiki instance admin can create a Google account and use it as a
  generic wiki account. That would mean:
  - not being able to edit collaboratively (it might not even be possible
    to edit different documents at the same time; I'm not sure whether
    Google allows multiple log-ins per account)
  + editing in place
  - a limit of 200 spreadsheets / 5000 documents or images per account
    (this would be OK for a small to medium-sized wiki, but a larger one
    would seriously suffer, especially when it comes to spreadsheets)
  (again, more +/- to come; any ideas are welcome additions to the general
  picture)
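To make the generic-account option concrete, here is a minimal Java sketch
using the GData client library; the feed URL and classes are the documented
Spreadsheets ones, and the credentials are of course placeholders:

    import java.net.URL;
    import com.google.gdata.client.spreadsheet.SpreadsheetService;
    import com.google.gdata.data.spreadsheet.SpreadsheetEntry;
    import com.google.gdata.data.spreadsheet.SpreadsheetFeed;

    public class GdataAccountSketch
    {
        public static void main(String[] args) throws Exception
        {
            // Log in once with the generic wiki account.
            SpreadsheetService service = new SpreadsheetService("xwiki-gdata-sketch");
            service.setUserCredentials("wiki-account@gmail.com", "secret");

            // List the spreadsheets this account can see (subject to the
            // 200-spreadsheet limit discussed above).
            SpreadsheetFeed feed = service.getFeed(
                new URL("https://spreadsheets.google.com/feeds/spreadsheets/private/full"),
                SpreadsheetFeed.class);
            for (SpreadsheetEntry entry : feed.getEntries()) {
                System.out.println(entry.getTitle().getPlainText());
            }
        }
    }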
I'd like to take this occasion to ask you, the devs (and users) of XWiki,
what you would like to see from the Google Docs integration (prioritized).
I'd really appreciate knowing what the expectations are. Also, any upsides
or downsides to the authentication issue are welcome in the discussion.
Another issue is the fact that the GData API provided by Google will have
to be integrated into the XWiki trunk (I'd like to know how this should be
done).
If possible, take some time to read the references below and other related
material, and comment on what should be done and how.
Thank you for your time and patience,
Radu Danciu
References:
http://docs.google.com/support/bin/answer.py?answer=47597&topic=9378
http://docs.google.com/support/spreadsheets/bin/answer.py?answer=37603
http://docs.google.com/support/bin/answer.py?answer=37560&topic=8613
http://code.google.com/apis/gdata/client-java.html
http://en.wikipedia.org/wiki/Google_Docs_&_Spreadsheets
I'm running into the following build error when trying to 'ant standalone'
the xwiki trunk along with the GData jars and my classes for Google Docs:
[javac]
/tmp/clover50747.tmp/src50748.tmp/com/xpn/xwiki/gwt/api/client/GdataAttachmentListInteraction.java:10:
generics are not supported in -source 1.4
[javac] (try -source 1.5 to enable generics)
[javac] private Map<String, ListEntry> entriesCached;
[javac] ^
[javac] 1 error
Sergiu hinted that it's because of Java 1.4 (GData needs at least 1.5). How
do I get over this?
Also, what modifications should I make to pom.xml in order to add some jars
to the project? (the GData API jars, which are originally distributed as
sources and have to be built into jars) My guess at what this might involve
is sketched below.
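My guess (untested, and the coordinates below are invented) is that it
means two things in pom.xml: forcing the compiler to 1.5, and declaring the
jars as dependencies after pushing them into the local repository with
'mvn install:install-file':

    <!-- force -source 1.5 so the generics in the GData code compile -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>

    <!-- made-up coordinates for the locally installed GData jar -->
    <dependency>
      <groupId>com.google.gdata</groupId>
      <artifactId>gdata-client</artifactId>
      <version>1.0</version>
    </dependency>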
Thanks for your time,
Radu Danciu
Hello,
I have trouble writing a query that retrieves from the database all the
objects that are of a certain class (e.g. "TryClass") and that also involve
a certain property type (e.g. "StaticListClass").
Can anyone give me a hint please?
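For illustration, here is the kind of query I have in mind, as a privileged
search call; the class and property names are made up, and the join on
StringListProperty is exactly the part I'm unsure about:

    // 'context' is the current XWikiContext; 'TryClass' and 'myList' are
    // placeholder names, and StringListProperty is my guess at how a
    // StaticListClass value is stored.
    java.util.List results = context.getWiki().search(
        "select obj.name from BaseObject as obj, StringListProperty as prop"
        + " where obj.className = 'TryClass'"
        + " and prop.id.id = obj.id and prop.id.name = 'myList'", context);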
Thank you very much.
Evelina Slatineanu
Hi,
Here's a list Catalin and I came up with while brainstorming about
ideas around testing for XWiki. Please feel free to comment, argue,
suggest new ideas, etc.
0) Agree that we want to have our functional tests written in Java,
versus using Selenium IDE only.
Pros:
- DSL -> easy to write and low maintenance (if a UI changes, it's fixed in
one place)
- ability to write more complex tests with control flow (if, loop, etc.)
- ability to write some tests easily at all (try creating a test that types
some content in our TinyMCE editor and you'll understand that part ;-))
- tests run automatically as part of our build/CI
Cons:
- requires a minimal knowledge of Java. Thus it's going to prevent us from
having non-technical people write functional tests for us.
I'm currently 100% convinced that our Java choice is right, but I know
Ludovic isn't fully convinced, which is why I'm listing this point here to
see what others think.
1) Finish setting up our Java functional test framework.
Goals:
- make it as easy as possible to use.
- make it low maintenance (i.e. tests shouldn't fail often, and if some UI
is changed it should be very easy to fix the tests)
Tasks:
- Continue improving our DSL for testing. Specifically, add the ability to
test over several skins, and create one DSL per skin (a sketch of the
target style is shown after this list).
- Add DSL methods for asserting the rendered HTML (cf Catalin's tests in
http://jira.xwiki.org/jira/browse/XWIKI-1207)
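As an illustration of the target style, a test written against the DSL
could be as short as this (the method names are made up for the example,
modeled on what AbstractXWikiTestCase already offers):

    public class AllDocsTest extends AbstractXWikiTestCase
    {
        // The DSL hides all the Selenium plumbing and the skin-specific
        // element locations behind named operations.
        public void testAllDocsPageDisplaysIndex()
        {
            open("/xwiki/bin/view/Main/AllDocs");
            assertTextPresent("Document Index");
        }
    }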
2) Add Clover support in the build to assert Test Coverage (TC)
We need this to easily evaluate what we are not testing. It should report
on both unit and functional test execution.
Note: This is easy to do with the nice Clover plugin for m2 ;-) (I'm the
author of that plugin)
3) Add functional tests to cover 30% of the code under test.
This is a first milestone that we should aim for. For this we need 2), to
see where we stand. Our target goal should be in the range of 60-70% TC.
From my past experience this is a good ballpark figure, and reaching it
results in:
a) well-tested code and confidence that something new doesn't break what
exists. This is very interesting for us in the 1.x time frame, as we don't
want to add regressions to 1.0. But it also gives us greater confidence for
redesigning internal parts of XWiki to implement the 2.0 architecture
b) the ability to release to production at each release (provided new
features get tests added when they're added)
4) Explore in-VM testing
The idea is to have several VMs (virtual machines) running different OSes
(Windows, Linux, Mac) and a combination of databases (Oracle, MySQL,
PostgreSQL). You run the XWiki Maven build in them so that the tests get
executed in the VMs. This allows three things:
a) testing different environments. For example, this tests our start/stop
scripts and our database setups
b) writing more complex tests, such as virtual wikis
c) implementing hard-to-write tests like the "add attachment" one. For this
you need the attachment to be on your file system; with a VM you control
its full setup and you can restore it at each run
5) Create a test plan listing the tests to be written
We need this written in JIRA. This goes with 3).
6) Explore using Reality QA
This is a nice tool/service created by Patrick Lightbody (who's a friend).
See http://www.realityqa.com/screencasts/realitycheck_tutorial/RealityCheck.htm
to see it in action.
Right now the main issue is that it won't run Selenium tests written in
Java, but Patrick says that's coming:
http://community.realityqa.com/thread.jspa?threadID=1010
The other thing that could be nice is the ability to upload our app inside
their VMs: http://community.realityqa.com/thread/1011 (but we can host our
own VMs to start with).
What Reality QA buys us is the ability to run all our tests on all
platforms/browsers.
7) Improve the Maven build
For example:
- ability to run the app on a defined port so that it doesn't interfere
with an already running XWiki instance
- stop the container when a test fails
8) Finish setting up our CI tool
Tests are pretty useless without CI. I'm working on setting up TeamCity.
I'm having some issues that I'm debugging with JetBrains (at first glance
they seem to be caused by the ObjectWeb SVN).
Catalin will work on some of these points as part of GSoC
(http://www.xwiki.org/xwiki/bin/view/GoogleSummerOfCode/FunctionalTestSuite).
We haven't decided yet which points he'll work on, but he'll at least do or
help on the following: 1, 3 and 5, with 3 and 5 being the most important
items.
WDYT about all these?
Thanks
-Vincent
On Jun 8, 2007, at 9:34 AM, Catalin Hritcu wrote:
> Author: hritcu
> Date: 2007-06-08 09:34:30 +0200 (Fri, 08 Jun 2007)
> New Revision: 3571
>
> Modified:
> xwiki/trunk/application-test/pom.xml
> xwiki/trunk/application-test/src/test/it/com/xpn/xwiki/it/
> framework/AbstractXWikiTestCase.java
> xwiki/trunk/application-test/src/test/it/com/xpn/xwiki/it/
> framework/AlbatrossSkinExecutor.java
> Log:
> Removed dependency on XMLUnit.
Just to explain: I agreed with Catalin to remove it because the code we get
is HTML (not XHTML), and XMLUnit requires valid XML, so the combination
wasn't working that great.
It's also simpler to just use Selenium's internal XPath support.
-Vincent
Hi,
I'm currently working on the development of a portal based on XWiki, and
maybe what I am using to deploy Groovy code could be helpful.
You can find the code at:
https://protactinium.pps.jussieu.fr:12345/websvn/listing.php?repname=EDOS&p…
The whole setup is based on autotools (automake/autoconf) and curl.
A very simple configure script builds a property file that stores the XWiki
server name, port, username and password. There are then bash scripts that
use 'curl' for logging in and for uploading and downloading XWiki pages.
The pages are stored as plain text files with a .xwiki extension.
I edit the pages using Emacs and groovy-mode (syntax coloring, indentation,
parenthesis checking...). The groovy-mode Emacs Lisp file can be found at
http://groovy.codehaus.org/Emacs+Plugin
Once the pages are edited and saved in Emacs, I upload them to XWiki using
'make upload' and test them using Firefox. This can of course be done
directly from Emacs with M-x compile.
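For the curious, the upload step boils down to something like the following
sketch (the real scripts are in the repository above; the exact URL and
parameters may differ from what I show here):

    #!/bin/sh
    # Sketch: push one .xwiki source file to a page through the save action.
    # SERVER, USER and PASS come from the property file built by configure.
    curl --user "$USER:$PASS" \
         --data-urlencode "content@MyGroovyPage.xwiki" \
         "http://$SERVER/xwiki/bin/save/Main/MyGroovyPage"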
Thanks to the fact that automake/autoconf support running configure in a
different directory than the sources (i.e. the XWiki documents), I can
deploy simultaneously to two XWikis: one running locally, where I test the
code, and another one running a more stable version. This is done by
creating two 'build' directories and running configure with the
--srcdir=SRCDIR argument.
I know it is not in any way the standard way of developing in XWiki, but I
find it helpful. If it can be of any help to others...
A question now: is it possible to define XWiki classes the same way? Right
now I define my classes using the web interface, but I am curious to know
whether it is possible to do it textually.
François