Hi team,
I wrote a report about the different localization strategies we could
have for XWiki, comparing:
- a simple tool Jean-Vincent wrote
- an L10N app I started to build
- using launchpad.net
http://dev.xwiki.org/xwiki/bin/view/Drafts/Selecting+a+tool+for+managing+Lo…
Please review and add any requirements that have not been listed. I think we need to look at the applications and evaluate them in more detail, especially to see if launchpad.net could work for us.
Let's discuss.
Ludovic
--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost GTalk: ldubost
Hi devs,
We currently have a comments plug-in that is written for a custom
core. This plug-in enables us to have hierarchical comments and several
storage implementations.
The issue I encounter when rewriting it to a plexus component is that I
cannot make it independent from the core. I need both read and write
access to the storage, and I could not find a bridge to it. All I can do
is retrieve data, with org.xwiki.bridge.DocumentAccessBridge and
org.xwiki.bridge.DocumentModelBridge, but not write data.
Does anybody know how I can write a component that needs read/write
access to the storage without using the XWiki, XWikiDocument and XWikiContext
classes?
Thanks,
Florin Ciubotaru
Hi Sergiu,
On Nov 6, 2008, at 7:22 AM, sdumitriu (SVN) wrote:
> Author: sdumitriu
> Date: 2008-11-06 07:22:48 +0100 (Thu, 06 Nov 2008)
> New Revision: 13997
>
> Modified:
> platform/pom/trunk/pom.xml
> Log:
> [misc]
> Lock down another maven plugin version
> Declare a default version for the shared test dependency
>
>
> Modified: platform/pom/trunk/pom.xml
> ===================================================================
> --- platform/pom/trunk/pom.xml 2008-11-06 06:08:51 UTC (rev 13996)
> +++ platform/pom/trunk/pom.xml 2008-11-06 06:22:48 UTC (rev 13997)
> @@ -171,6 +171,7 @@
> <dependency>
> <groupId>com.xpn.xwiki.platform.tools</groupId>
> <artifactId>xwiki-verification-resources</artifactId>
> +        <version>${platform.tool.verification.version}</version>
I don't think this is right. There should be no version specified IMO
since this parent POM can be used by any module and some modules will
want to use a different version.
WDYT?
Thanks
-Vincent
> </dependency>
> </dependencies>
> <configuration>
> @@ -271,6 +272,12 @@
> </plugin>
> <plugin>
> <groupId>org.apache.maven.plugins</groupId>
> + <artifactId>maven-plugin-plugin</artifactId>
> + <!-- Lock down plugin version for build reproducibility -->
> + <version>2.3</version>
> + </plugin>
> + <plugin>
> + <groupId>org.apache.maven.plugins</groupId>
> <artifactId>maven-site-plugin</artifactId>
> <!-- Lock down plugin version for build reproducibility -->
> <version>2.0-beta-6</version>
> @@ -396,6 +403,7 @@
> <properties>
> <!-- Only disable checkstyle in the clover profile. by default
> it's on. -->
> <xwiki.checkstyle.skip>false</xwiki.checkstyle.skip>
> +    <platform.tool.verification.version>1.12-SNAPSHOT</platform.tool.verification.version>
> </properties>
> <profiles>
> <profile>
Hi,
we wish to use XWiki in a large project and find it difficult to see if the
component architecture suits us.
We need to hook into the login process (effectively logging users in through
our own authentication process). Can this be done using a 'plugin' or
'component' for xwiki?
Also: we need to intercept the page creation process and update our local
indexes with certain things when a user saves a page. Can this be done
through a plugin/component?
How would one go about setting up an Eclipse project where a component
can be developed and debugged? Do I need to check out the entire XWiki source
into the Eclipse workspace?
--
View this message in context: http://n2.nabble.com/XWiki-usable-for-this-%2B-how-to-go-on-about-component…
Sent from the XWiki-Dev mailing list archive at Nabble.com.
I don't see documentation anywhere that identifies which versions of jar
files are being used (lucene, ehcache, etc.). Is there a way to find this
out? Also a way to know which versions will be used in a pending release?
Thanks,
Paul Bernard
Hi,
Right now the XHTML macro parses wiki syntax by default.
For example the following generates bold text:
{{xhtml}}
<p>**bold**</p>
{{/xhtml}}
I propose to change that so that wiki syntax is not parsed by default
so that users will need to use the following to parse wiki syntax:
{{xhtml wiki=true}}
<p>**bold**</p>
{{/xhtml}}
Note that I'm proposing to change the current "escapeWikiSyntax" to
"wiki" (another proposal is to use "allowWikiSyntax").
WDYT?
Thanks
-Vincent
Hi devs,
Since the general discussion about distributions (see
http://markmail.org/message/nqyvd34knm5eqkru) will need some time and
I need to commit the glassfish distribution build as soon as
possible, I propose:
- add a folder "glassfish"
- add a "derby" subfolder containing the glassfish-derby distribution build
- move all jetty related distributions into a "jetty" folder at the same
level as "glassfish" and change all artifact ids to
xwiki-enterprise-jetty-* (xwiki-enterprise-jetty-hsqldb,
xwiki-enterprise-jetty-derby, etc.)
- use -Pglassfish to build glassfish and -Pjetty to build jetty (jetty
being the default so that nothing changes in the current build except
the artifact ids)
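The proposal above could be sketched roughly like this in the distribution pom; this is only an illustration of the profile mechanics, the module names are made up:

```xml
<!-- Sketch: jetty builds by default; -Pglassfish switches to the glassfish
     modules (activeByDefault is dropped when another profile is selected). -->
<profiles>
  <profile>
    <id>jetty</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <modules>
      <module>jetty</module>
    </modules>
  </profile>
  <profile>
    <id>glassfish</id>
    <modules>
      <module>glassfish</module>
    </modules>
  </profile>
</profiles>
```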
here is my +1
WDYT ?
--
Thomas Mortagne
asiri (SVN) wrote:
> Author: asiri
> Date: 2008-11-04 11:31:26 +0100 (Tue, 04 Nov 2008)
> New Revision: 13949
> + private void filter(Node node)
> + {
> + if (node.hasAttributes()) {
> + try {
> + node.getAttributes().removeNamedItem("style");
> + } catch (DOMException ex) {
> + // Not a problem.
> + }
> + }
I don't like this... try-catch code is costly, since creating an
exception takes a lot of time and memory. Can't you check if the 'style'
attribute exists instead?
And a catch block in general should indicate an exceptional execution,
not a normal, expected case.
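For illustration, an existence check along these lines avoids the exception entirely; this is a standalone sketch against the standard org.w3c.dom API, not the actual plugin code:

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class StyleFilterSketch
{
    /** Remove the "style" attribute only when it is actually present, no try-catch needed. */
    public static void filter(Node node)
    {
        if (node instanceof Element && ((Element) node).hasAttribute("style")) {
            ((Element) node).removeAttribute("style");
        }
    }

    public static void main(String[] args) throws Exception
    {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream("<p style=\"color:red\">hi</p>".getBytes("UTF-8")));
        Element p = doc.getDocumentElement();
        filter(p);
        filter(p); // calling it twice is harmless: no exception on the second pass
        System.out.println(p.hasAttribute("style")); // prints "false"
    }
}
```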
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
On Nov 5, 2008, at 6:32 AM, asiri (SVN) wrote:
> Author: asiri
> Date: 2008-11-05 06:32:43 +0100 (Wed, 05 Nov 2008)
> New Revision: 13965
>
> Added:
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/ImgToWikiFilter.java
> Removed:
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/ImageTagFilter.java
> Modified:
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/transformer/HtmlToXWikiXhtmlTransformer.java
> sandbox/xwiki-plugin-officeimporter/src/test/java/com/xpn/xwiki/
> plugin/officeconverter/HtmlFilterTest.java
> Log:
> Renamed the ImageTagFilter as ImgToWikiFilter to make more sense.
> This filter is only used with xhtml rendering.
Funnily enough, I don't understand what ImgToWikiFilter means, whereas
ImageTagFilter is very expressive to me (it performs some filtering on
image tags). I don't understand what "image to wiki" means.
Also I don't understand why we need this filter since it converts to
XWiki Syntax 1.0 which we're not supporting.
Thanks
-Vincent
[snip]
Hello,
Following the open question #1 here
http://dev.xwiki.org/xwiki/bin/view/Design/SkinExtensions#HUsage
"
Open question 1: Should $jsx.useFile("filename.js") work for files
located on the disk? This allows the same pull process to be used with
files located in the skin, without requiring SX documents and objects.
I'd say yes. Then, what should the URL look like?
/xwiki/bin/jsx/skins/albatross/somestyle.css is OK?
"
I would like to propose to go even further, and to allow injection of
script tags referring to libraries in the cloud or on a different server
using the jsx plugin. This would avoid users having to write script
tags in the body of the document to add a library.
I would see something like :
$jsx.use("http://maps.google.com/maps?file=api&v=2&key=XXX")
or
$jsx.useFile("http://maps.google.com/maps?file=api&v=2&key=XXX")
What do you think ?
Regards,
Jerome.
Hello all,
I started working on a {{map}} macro
(http://jira.xwiki.org/jira/browse/XWIKI-2784).
This raises the question of how (or whether) we should write
macros that depend on JS APIs (here google maps, yahoo maps, etc.).
The variants I've envisaged so far :
1a. We write all the needed JavaScript in the macro itself, as Strings
that we transform into lists of WordBlock + SpaceBlock and append as
children of an XMLBlock "script". I find this a little painful and not
very natural.
1b. We write all the needed JavaScript in the macro itself, as Strings
that we pass as the content of html/xhtml macro blocks.
2a. We write most of the JavaScript in a JSX object (for example a sort
of facade to some google maps APIs), and only the needed calls in the
macro itself (for example the call to load a map in a div element).
For the code in the macro, we use the same strategy as 1a, except that
there is just one of such XML block, and it's relatively short.
The JSX strategy in 2a/2b has the clear advantage of making things much
simpler on the server side, but as a counterpart the macro needs to be
distributed as a xar + jar, while in 1) it's a jar only.
2b. Same as 2a, using the strategy in 1b for the part in the macro. This
is the way I have my prototype working right now. I admit I don't really
know what to think about the fact that I'm building macro blocks (a velocity
one for the jsx "use" call, and an html one for the javascript call)
inside the macro itself. I hope you can tell me more about this, and let me
know if it's a bad practice.
3. We don't write such a macro :) We consider it's not what a wiki macro
should be and we decide to have such macros only as velocity macros, which
are much simpler to write in that case. This does not change anything for
the wysiwyg users, as far as I understand, but it does for the wiki
users. {{map location="Paris, France"}} is much more elegant than
{{velocity}}#map("Paris, France"){{/velocity}}, and is much better too
in terms of configuration (in velocity we would need to give values to
all parameters, even if we want to use the default value for most of them).
WDYT? Are there variants I did not envisage?
Regards,
Jerome.
Hi devs,
starting from a Wysiwyg implementation issue, we had a discussion yesterday
about marking links towards new pages in the wiki. Right now, a question mark
('?') is appended to the end of the link label and coloured appropriately.
I would like to change this to use CSS exclusively, for the following reason:
* this question mark represents *styling only*: it's as if we coloured links
towards new pages with a different colour (the way mediawiki does); therefore
this information (whether it's a qm, a colour or whatever) should *not appear as
part of the document content*, the way a ? text does (the raw HTML contains it).
One method of doing this in CSS is appending the text itself (with the :after
pseudo-element), but that is not cross-browser; the other method is using an
image for the question mark.
I'm +1 for the image qm for 2 more reasons (besides the cross-browser issue):
* this information would not be appended to the document content at all (e.g. if I
copied the rendered document content into an ascii editor, I wouldn't have the ?)
* it is a solution closer to the colour solution, or to marking the link to a new
page with a non-character sign (see, for example, the way mediawiki marks
external links) -- we can decide to change that anytime and we *don't have to
change rendering rules*, which makes very much sense to me.
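For illustration, the two CSS approaches side by side (the wikicreatelink class name and image path are made up):

```css
/* Approach 1: generated content; not supported by all browsers (e.g. older IE) */
a.wikicreatelink:after {
    content: "?";
    color: #c00;
}

/* Approach 2: an image marker, works cross-browser and never enters the text flow */
a.wikicreatelink {
    background: url(question-mark.png) no-repeat right center;
    padding-right: 12px;
}
```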
Here's the issue on JIRA for this solution:
http://jira.xwiki.org/jira/browse/XWIKI-2803
WDYT?
Happy coding,
Anca Luca
On Nov 4, 2008, at 7:36 AM, asiri (SVN) wrote:
> Author: asiri
> Date: 2008-11-04 07:36:17 +0100 (Tue, 04 Nov 2008)
> New Revision: 13945
>
> Added:
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/StyleFixFilter.java
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/TableFixFilter.java
> Log:
> JIRA : http://jira.xwiki.org/jira/browse/XAOFFICE-1
>
> Introduced two fix filters. These filters pre-process the xhtml
> document so that rendering via xwiki 2.0 syntax renderer produces
> acceptable results.
>
> Copied: sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/
> xwiki/plugin/officeimporter/filter/StyleFixFilter.java (from rev
> 13943, sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/
> xwiki/plugin/officeimporter/filter/HtmlStylesFilter.java)
> ===================================================================
> --- sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/
> StyleFixFilter.java (rev 0)
> +++ sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/StyleFixFilter.java 2008-11-04 06:36:17
> UTC (rev 13945)
> @@ -0,0 +1,56 @@
> +package com.xpn.xwiki.plugin.officeimporter.filter;
> +
> +import org.w3c.dom.DOMException;
> +import org.w3c.dom.Document;
> +import org.w3c.dom.Element;
> +import org.w3c.dom.Node;
> +import org.w3c.dom.NodeList;
> +
> +/**
> + * This particular filter searches for {@code <span>} and {@code <div>} tags containing style
> + * attributes and removes such attributes if present.
I don't think this is correct. You should filter style attributes for
ALL HTML elements.
I have no idea what your second filter does and whether it should be
done in the office importer or not.
Thanks
-Vincent
> + * Also, if the resulting {@code <span>} or {@code <div>} tag has no other attributes, this filter
> + * will completely rip off the tag itself and append the content of the tag into its parent.
> + */
> +public class StyleFixFilter implements HtmlFilter
> +{
> + /**
> + * Tags that contain style information.
> + */
> + private static final String[] styleTags = new String[] {"span",
> "div"};
> +
> + /**
> + * {@inheritDoc}
> + */
> + public void filter(Document document)
> + {
> + Element root = document.getDocumentElement();
> + for (String tagName : styleTags) {
> + NodeList tagList = root.getElementsByTagName(tagName);
> + for (int i = 0; i < tagList.getLength(); i++) {
> + Node tag = tagList.item(i);
> + if (tag.hasAttributes()) {
> + try {
> + tag.getAttributes().removeNamedItem("style");
> + } catch (DOMException ex) {
> + // Not a problem.
> + }
> + }
> + // Check if the tag has no more attributes.
> + if (!tag.hasAttributes()) {
> + // Append the children into parent node.
> + Node parentNode = tag.getParentNode();
> + NodeList grandChildren = tag.getChildNodes();
> +                    for (int j = 0; j < grandChildren.getLength(); j++) {
> +                        parentNode.appendChild(grandChildren.item(j));
> +                    }
> +                    // Get rid of it.
> +                    parentNode.removeChild(tag);
> +                    // Removing the tag causes the tag list to collapse.
> +                    // To address this issue, we need to decrement the counter.
> +                    i--;
> + }
> + }
> + }
> + }
> +}
>
>
> Property changes on: sandbox/xwiki-plugin-officeimporter/src/main/
> java/com/xpn/xwiki/plugin/officeimporter/filter/StyleFixFilter.java
> ___________________________________________________________________
> Name: svn:mergeinfo
> +
>
> Added: sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/
> xwiki/plugin/officeimporter/filter/TableFixFilter.java
> ===================================================================
> --- sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/
> TableFixFilter.java (rev 0)
> +++ sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/filter/TableFixFilter.java 2008-11-04 06:36:17
> UTC (rev 13945)
> @@ -0,0 +1,19 @@
> +package com.xpn.xwiki.plugin.officeimporter.filter;
> +
> +import org.w3c.dom.Document;
> +
> +/**
> + * The purpose of this filter is to pre-adjust the html {@code
> <table>} elements so that they are
> + * rendered properly in the xwiki 2.0 syntax renderer. This also
> implies any formatting elements
> + * present in the html which are not compatible with xwiki 2.0
> getting ripped off entirely.
> + */
> +public class TableFixFilter implements HtmlFilter
> +{
> + /**
> + * {@inheritDoc}
> + */
> + public void filter(Document document)
> + {
> +
> + }
> +}
>
> _______________________________________________
> notifications mailing list
> notifications(a)xwiki.org
> http://lists.xwiki.org/mailman/listinfo/notifications
Hi Asiri,
On Nov 4, 2008, at 7:06 AM, asiri (SVN) wrote:
> Author: asiri
> Date: 2008-11-04 07:06:01 +0100 (Tue, 04 Nov 2008)
> New Revision: 13944
>
> Modified:
> sandbox/xwiki-plugin-officeimporter/src/main/java/com/xpn/xwiki/
> plugin/officeimporter/transformer/HtmlToXWikiTwoZeroTransformer.java
> Log:
> JIRA : http://jira.xwiki.org/jira/browse/XAOFFICE-1
>
> Fixes lists support. <p> tags inside <li> tags are not rendered
> properly in xwiki 2.0. Rip them off and it works fine.
I don't think this is correct. Either it's valid XHTML to have <P>
inside <LI>, in which case it's a bug in the XHTML parser, or it's
not valid, in which case it's a job for the HTML cleaner.
In either case I don't think this is related to office imports.
It's really important that whenever you need a transformation/cleaning
you decide where it should go:
* in the XHTML parser
* in the HTML cleaner
* in the office importer
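For reference, such a cleanup (wherever it ends up living) could be sketched with the standard org.w3c.dom API; the class name is mine, not the importer's:

```java
import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class ListCleanerSketch
{
    /** Replace every <p> found inside an <li> with its own children. */
    public static void unwrapParagraphsInListItems(Document document)
    {
        NodeList items = document.getElementsByTagName("li");
        for (int i = 0; i < items.getLength(); i++) {
            Element li = (Element) items.item(i);
            NodeList paragraphs = li.getElementsByTagName("p");
            // Iterate backwards: removing a <p> shrinks the live NodeList.
            for (int j = paragraphs.getLength() - 1; j >= 0; j--) {
                Node p = paragraphs.item(j);
                // Move the paragraph's children up, preserving their order.
                while (p.getFirstChild() != null) {
                    p.getParentNode().insertBefore(p.getFirstChild(), p);
                }
                p.getParentNode().removeChild(p);
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(
                "<ul><li><p>one</p></li><li><p>two</p></li></ul>".getBytes("UTF-8")));
        unwrapParagraphsInListItems(doc);
        System.out.println(doc.getElementsByTagName("p").getLength()); // prints 0
    }
}
```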
Thanks
-Vincent
Hi Devs,
When converting office documents into html (before they are transformed into
xwiki syntax), lots of style information gets added to the html. For
example:
<p class="western">Text in a <sup><span style="color:;font-family=;font-size=2pt;">superscript</span></sup> format</p>
And finally, when this is transformed into xwiki syntax, the result will look
something like:
(% class="western" %)
Text in a ^^(% style="color:;font-family=;font-size=2pt;" %)superscript^^(%%) format
The problem with this is that the resulting xwiki document will have lots of
(%%) elements, which makes it difficult to make modifications in wiki mode.
Another argument is that content is more important than style (vincent).
So, there are three options :
1. Rip off style information.
2. Keep style information as it is.
3. Give the user an option to select between 1 and 2.
I'm going with 3. :)
WDYT ?
Thanks.
- Asiri
Hi devs,
We all know that the old XWikiContext is a burden that must still be
carried around, in order to access any non-componentized functionality.
The problem is that a context object is not supposed to be used by more
than one thread. Example of non-threadsafe parts of the context are:
- the Hibernate session
- the request and response objects, cleared by the container
- the velocity context
- the XClass cache
- the current wiki name
- and maybe others
This single-thread restriction is generally acceptable, since most of
the code is executed in the single-threaded request-response workflow.
Yet, some plugins execute in separate threads, for example the Lucene
indexer and the scheduler plugin, and they need their XContext object.
The current strategy is to clone the context and remove some of the
dangerous elements listed above. This is not good, since:
- it has to be done in every plugin that creates threads (duplication)
- adding more non-threadsafe things to the context requires that all
these plugins are changed
- some non-threadsafe things might not yet be identified, and they
surface sometimes as unidentified bugs
- some things cannot be cleared from the outside (for example the XClass
cache, which is a private member of the context)
There are several solutions to this problem:
1. Override the clone() method to remove non-threadsafe elements from
the cloned context.
Pro: removes duplication
Pro: establishes a safe clone method for all the codebase
Con: some unsafe things might be overlooked, surfacing from time to
time in rare thread inter-weaving.
2. Override the clone() method to create a blank context and only copy
what needs to be part of any context.
Pro: same as above, but also eliminates all possible non-threadsafe
elements.
Con: We might overlook something that needs to be part of the context.
The advantage over option 1 is that this is always reproducible, and a
simple stack trace is enough to spot the problem, unlike multithreaded
issues.
Con: We might introduce a regression, this needs to be tested well.
3. Override the clone() method to just eliminate non-threadsafe things
that are inaccessible from outside (the XClass cache is the only one I see).
Pro: keeps the current behavior, reducing the risk of regressions.
Con: Doesn't really solve the problem.
I'd go with option 2. Any other opinions?
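Option 2 amounts to a whitelist clone; here is a self-contained sketch of the idea (the class and field names are illustrative, not the real XWikiContext members):

```java
import java.util.HashMap;
import java.util.Map;

public class SafeContext
{
    // Safe to copy into a clone: plain values.
    private String wikiName;
    private String userName;

    // Unsafe per-thread/per-request state: deliberately NOT copied.
    private Object hibernateSession;
    private Map<String, Object> classCache = new HashMap<String, Object>();

    public SafeContext(String wikiName, String userName)
    {
        this.wikiName = wikiName;
        this.userName = userName;
    }

    public void setHibernateSession(Object session)
    {
        this.hibernateSession = session;
    }

    public Object getHibernateSession()
    {
        return this.hibernateSession;
    }

    public void cacheClass(String name, Object xclass)
    {
        this.classCache.put(name, xclass);
    }

    public String getWikiName()
    {
        return this.wikiName;
    }

    /** Option 2: start from a blank context and copy only whitelisted fields. */
    public SafeContext safeClone()
    {
        // hibernateSession and classCache intentionally start empty in the copy,
        // so a missing element fails fast instead of causing rare threading bugs.
        return new SafeContext(this.wikiName, this.userName);
    }
}
```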
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
There's something wrong here.
All projects in sandbox should use the SANDBOX JIRA project and only
move to their own jira project when they graduate out of the sandbox.
We have the exact same problem for the XOFFICE JIRA project.
Let's leave them like this for now but please in the future follow
this rule (which we had agreed on previously on the dev list).
Thanks
-Vincent
PS: I should have raised this earlier on but I'm lagging behind on my
mail on the list...
On Nov 3, 2008, at 5:26 PM, jvelociter (SVN) wrote:
> Author: jvelociter
> Date: 2008-11-03 17:26:43 +0100 (Mon, 03 Nov 2008)
> New Revision: 13932
>
> Modified:
> sandbox/plugins/comments/pom.xml
> Log:
> XACOMMENTS-1 Implement the initial feature set of the new comments
> application
>
> * Set the plugin version to 1.0-SNAPSHOT
> * Temporary set the dependency towards xwiki-core to 1.5.2, until I
> resolve the need to have a patched core. (Currently, one need to
> patch the 1.5.2 tag of xwiki-core to be able to build this comments
> plugin.)
>
> Modified: sandbox/plugins/comments/pom.xml
> ===================================================================
> --- sandbox/plugins/comments/pom.xml 2008-11-03 13:18:22 UTC (rev
> 13931)
> +++ sandbox/plugins/comments/pom.xml 2008-11-03 16:26:43 UTC (rev
> 13932)
> @@ -31,7 +31,7 @@
> <version>10-SNAPSHOT</version>
> </parent>
> <artifactId>xwiki-plugin-comments</artifactId>
> - <version>0.1-SNAPSHOT</version>
> + <version>1.0-SNAPSHOT</version>
> <name>XWiki Platform - Plugins - Comments</name>
> <packaging>jar</packaging>
> <description>XWiki Platform - Plugins - Comments</description>
> @@ -39,7 +39,7 @@
> <dependency>
> <groupId>com.xpn.xwiki.platform</groupId>
> <artifactId>xwiki-core</artifactId>
> - <version>1.6-SNAPSHOT</version>
> + <version>1.5.2</version>
> <scope>provided</scope>
> </dependency>
> </dependencies>
I think it would be nice to look at Compass
(http://www.compass-project.org/overview.html) or Hibernate Search
(http://www.hibernate.org/410.html) for the future.
I think Compass is better for us since we don't want to rely on
Hibernate for our storage in the future.
Here are some features of Compass:
"
- Simple Compass provides a simple API for working with Lucene. If you
know how to use an ORM, then you will feel right at home with Compass
with simple operations for save, and delete & query.
- Lucene Building on top of Lucene, Compass simplifies common usage
patterns of Lucene such as google-style search, index updates as well
as more advanced concepts such as caching and index sharding (sub
indexes). Compass also uses built in optimizations for concurrent
commits and merges.
- Mapping Compass provides support for mapping of different data
"formats" - indexing and storing (caching) them in the Search Engine:
Object to Search Engine Mapping (using annotations or xml), XML to
Search Engine Mapping (using simple xpath expressions), and the low
level Resource to Search Engine Mapping.
- Tx Compass provides a transactional API on top of the Search Engine
supporting different transaction isolation levels. Externally, Compass
provides a local transaction manager as well as integration with
external transaction managers such as JTA (Sync and XA), Spring, and
ORM ones.
- ORM Compass integrates seamlessly with most popular ORM frameworks
allowing automatic mirroring, to the index, of the changes in data
performed via the ORM tool. Compass has generic support for JPA as
well as embedded support for Hibernate, OpenJPA, TopLink Essentials,
and EclipseLink allow to add Compass using three simple steps.
- Spring Compass integrates seamlessly with Spring. Compass can be
easily configured using Spring, integrates with Spring transaction
management, has support for Spring MVC, and has Spring aspects built
in for reflecting operations to the search engine.
- Distributed Compass simplifies the creation of distributed Lucene
index by allowing to store the Lucene index in a database, as well as
storing the index simply with Data Grid products such as GigaSpaces,
Coherence and Terracotta.
"
The last point is especially important for our distributed lucene
search feature developed during the GSOC.
Thanks
-Vincent
Hi devs,
We need to create a jira project for the MS Office integration. Jerome
has proposed to put it in the "XWiki Core & Products" section, with the
"*XOFFICE*" key. WDYT?
The first Add-in in the suite will be the Word Add-in, currently
named *XWriter* (XWord is another option). I target the first milestone
release on November 19th, and a first final version somewhere around the
XE 1.8 final release date.
The solution also contains a project named *XWikiLib*. This is an
assembly that will contain most logic for the server connection and
other utilities. It might be useful for other members of the community
that may want to connect to XWiki from a .NET client.
Regards,
Florin Ciubotaru
Could be a good idea to upgrade... Maybe we could try it during the
next release?
Thanks
-Vincent
Begin forwarded message:
> From: "Olivier Lamy" <olamy(a)apache.org>
> Date: October 30, 2008 11:42:18 PM CEST
> To: "Maven Users List" <users(a)maven.apache.org>, announce(a)maven.apache.org
> Cc: "Maven Developers List" <dev(a)maven.apache.org>
> Subject: [ANN] Maven Release Plugin 2.0-beta-8 Released
> Reply-To: "Maven Developers List" <dev(a)maven.apache.org>
>
> The Maven team is pleased to announce the release of the Maven Release
> Plugin, version 2.0-beta-8.
>
> http://maven.apache.org/plugins/maven-release-plugin/
>
> You should specify the version in your project's plugin configuration:
>
> <plugin>
> <groupId>org.apache.maven.plugins</groupId>
> <artifactId>maven-release-plugin</artifactId>
> <version>2.0-beta-8</version>
> </plugin>
>
> Release Notes - Maven 2.x Release Plugin - Version 2.0-beta-8
>
>
> ** Bug
> * [MRELEASE-87] - Poms are written with wrong encodings
> * [MRELEASE-188] - release:perform is not updating some modules to
> the next version identifier correctly.
> * [MRELEASE-201] - Deployed POM is not valid XML
> * [MRELEASE-221] - XML header missing in modified POM after
> release:prepare
> * [MRELEASE-223] - Generated pom.xml has invalid chars (does not
> correctly handle xml entities)
> * [MRELEASE-254] - tests failed on windows
> * [MRELEASE-255] - during a release several elements are removed
> from the pom.xml (which should be left there)
> * [MRELEASE-267] - Whitespaces in artifactId or groupId prevent
> version update
> * [MRELEASE-268] - Release is broken with Subversion 1.3.x and
> earlier
> * [MRELEASE-302] - Test don't pass on windows due to encoding
> issues
> * [MRELEASE-305] - release:prepare forgets a slash when changing
> the <scm> urls
> * [MRELEASE-337] - generated command line must remove newLine
> characters
> * [MRELEASE-351] - xml declaration removed on release
> * [MRELEASE-355] - Deploying from Leopard, with Svn 1.4.4 has
> error on automated Svn commit
> * [MRELEASE-360] - NullPointerException in release
> * [MRELEASE-364] - InvokerMavenExecutor fails with NPE if
> additional arguments are not set
> * [MRELEASE-365] - ForkedMavenExecutor fails with NoSuchMethodError
> * [MRELEASE-366] - Using InvokerMavenExecutor fails in combination
> with space in paths
>
> ** Improvement
> * [MRELEASE-173] - Allow command line specification of versions
> * [MRELEASE-321] - Add support for -DdevelopmentVersion and
> -DreleaseVersion to facilitate command line configuration
> * [MRELEASE-341] - support release process that use a staging
> repository
> * [MRELEASE-345] - Keep comments in rewritten elements
> * [MRELEASE-359] - Release plugin depends on mvn being in the path
> of the shell that started the current build
> * [MRELEASE-382] - Specifying workingDirectory as system property
> on CL is not picked up by release:perform
>
> ** New Feature
> * [MRELEASE-369] - upgrade scm version to last 1.1 (and add by
> default new providers accurev and git)
>
> ** Task
> * [MRELEASE-316] - remove copy of plexus-utils' XML encoding
> support sources
>
>
> ** Wish
> * [MRELEASE-313] - add an option to set the profile(s) used to
> perform the release
>
> Have fun !
>
> -The Maven team
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe(a)maven.apache.org
> For additional commands, e-mail: dev-help(a)maven.apache.org
>
I have configured XWiki (release 1.6-milestone-2.12601) to authenticate
against my OpenLDAP (v2.3.35). I am continuously getting a "Wrong user name"
message in my UI. On investigating my ldap logs, I found that XWiki first
authenticates successfully to OpenLDAP with the user's id & password.
However it then tries to do a lookup of the user (I assume for the details
of the user), and at that time, it does not seem to be passing the base DN
in the request. In such scenarios OpenLDAP returns a "No such object" error.
I tried to do a test using ldapsearch without passing the base, and I got
the same error. Also, the error did not occur when I passed the base
parameter to ldapsearch.
I am trying to trace through this problem in the source, but meanwhile,
would like some help in figuring out whether my configuration is wrong, or
if someone has encountered a similar problem before.
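For reference, the ldapsearch experiment described above looks roughly like this (host, bind DN and password are placeholders matching the config below):

```shell
# Without a base DN: OpenLDAP answers "No such object" (result code 32)
ldapsearch -x -H ldap://ldap-slave:389 \
    -D "cn=Manager,dc=mycompany,dc=mycountry" -w secret \
    "(uid=jdoe)"

# With -b supplying the base DN, the same search succeeds
ldapsearch -x -H ldap://ldap-slave:389 \
    -D "cn=Manager,dc=mycompany,dc=mycountry" -w secret \
    -b "dc=mycompany,dc=mycountry" "(uid=jdoe)"
```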
Regards,
Milan...
----------------------------------------------------------------------------
xwiki.cfg - LDAP Section
----------------------------------------------------------------------------
#-# new LDAP authentication service
xwiki.authentication.authclass=com.xpn.xwiki.user.impl.LDAP.XWikiLDAPAuthServiceImpl
#-# Turn LDAP authentication on - otherwise only XWiki authentication
#-# 0: disable
#-# 1: enable
xwiki.authentication.ldap=1
#-# LDAP Server (Active Directory, eDirectory, OpenLDAP, etc.)
xwiki.authentication.ldap.server=ldap-slave
xwiki.authentication.ldap.port=389
#-# base DN for searches
xwiki.authentication.ldap.base_DN=dc=<mycompany>,dc=<mycountry>
#-# LDAP login, empty = anonymous access, otherwise specify full dn
#-# {0} is replaced with the username, {1} with the password
#xwiki.authentication.ldap.bind_DN=cn=cname#,department=USER,department=INFORMATIK,department=1230,o=MP
#xwiki.authentication.ldap.bind_DN=cn=Manager,department=USER,department=INFORMATIK,department=1230
xwiki.authentication.ldap.bind_DN=cn=Manager,dc=<mycompany>,dc=<mycountry>
xwiki.authentication.ldap.bind_pass=<dummy>
#-# Force to check password after LDAP connection
#-# 0: disable
#-# 1: enable
xwiki.authentication.ldap.validate_password=0
#-# only members of the following group will be verified in the LDAP
#-# otherwise only users that are found after searching starting from the base_DN
# xwiki.authentication.ldap.user_group=cn=developers,ou=groups,o=MegaNova,c=US
#xwiki.authentication.ldap.user_group=ou=People,dc=<mycompany>,dc=<mycountry>
#-# [SINCE 1.5RC1, XWikiLDAPAuthServiceImpl]
#-# only users not members of the following group can authenticate
# xwiki.authentication.ldap.exclude_group=cn=admin,ou=groups,o=MegaNova,c=US
#-# Specifies the LDAP attribute containing the identifier to be used as the XWiki name (default=cn)
xwiki.authentication.ldap.UID_attr=uid
#-# [SINCE 1.5M1, XWikiLDAPAuthServiceImpl]
#-# Specifies the LDAP attribute containing the password to be used when "xwiki.authentication.ldap.validate_password" is set to 1
xwiki.authentication.ldap.password_field=userPassword
#-# [SINCE 1.5M1, XWikiLDAPAuthServiceImpl]
#-# The potential LDAP groups classes. Separated by commas.
xwiki.authentication.ldap.group_classes=posixGroup
#-# [SINCE 1.5M1, XWikiLDAPAuthServiceImpl]
#-# The potential names of the LDAP group fields containing the members. Separated by commas.
xwiki.authentication.ldap.group_memberfields=memberUid
#-# retrieve the following fields from LDAP and store them in the XWiki user object (xwiki-attribute=ldap-attribute)
#-# ldap_dn=dn -- dn is set by class, caches dn in XWiki.user object for faster access
#xwiki.authentication.ldap.fields_mapping=last_name=sn,first_name=givenName,fullname=cn,email=mail,ldap_dn=dn
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# on every login update the mapped attributes from LDAP to XWiki; otherwise this happens only once when the XWiki account is created.
xwiki.authentication.ldap.update_user=1
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# maps XWiki groups to LDAP groups, separator is "|"
# xwiki.authentication.ldap.group_mapping=XWiki.XWikiAdminGroup=cn=AdminRole,ou=groups,o=MegaNova,c=US|\
# XWiki.Organisation=cn=testers,ou=groups,o=MegaNova,c=US
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# time in s after which the list of members in a group is refreshed from LDAP (default=3600*6)
# xwiki.authentication.ldap.groupcache_expiration=21800
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# - create : synchronize group membership only when the user is first created
#-# - always: synchronize on every login
xwiki.authentication.ldap.mode_group_sync=always
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# if ldap authentication fails for any reason, try XWiki DB authentication with the same credentials
xwiki.authentication.ldap.trylocal=1
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# SSL connection to LDAP server
#-# 0: normal
#-# 1: SSL
xwiki.authentication.ldap.ssl=0
#-# [SINCE 1.3M2, XWikiLDAPAuthServiceImpl]
#-# The keystore file to use in SSL connection
# xwiki.authentication.ldap.ssl.keystore=
#-# [SINCE 1.5M1, XWikiLDAPAuthServiceImpl]
#-# The java secure provider used in SSL connection
# xwiki.authentication.ldap.ssl.secure_provider=com.sun.net.ssl.internal.ssl.Provider
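A note on the bind_DN placeholders above: as the comments say, "{0}" is replaced with the username and "{1}" with the password before the bind is attempted. A minimal Python sketch of that substitution — the helper name and the example template are hypothetical (XWiki itself does this in Java):

```python
# Hypothetical helper showing how the {0}/{1} placeholders in
# xwiki.authentication.ldap.bind_DN are filled in before binding.
def build_bind_dn(template: str, username: str, password: str) -> str:
    """Replace {0} with the username and {1} with the password."""
    return template.replace("{0}", username).replace("{1}", password)

# A template that binds as the connecting user rather than a fixed Manager DN:
template = "uid={0},ou=People,dc=example,dc=org"
print(build_bind_dn(template, "mmehta", "secret"))
# uid=mmehta,ou=People,dc=example,dc=org
```

A fixed DN such as cn=Manager,... contains no placeholders, so it passes through unchanged and the same bind_pass is used for every login.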
----------------------------------------------------------------------------
OpenLDAP Log output (invoked from xwiki)
----------------------------------------------------------------------------
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=manager,dc=<mycompany>,dc=<mycountry>)=0
<<< dnPrettyNormal: <cn=Manager,dc=<mycompany>,dc=<mycountry>>,
<cn=manager,dc=<mycompany>,dc=<mycountry>>
do_bind: version=3 dn="cn=Manager,dc=<mycompany>,dc=<mycountry>" method=128
==> bdb_bind: dn: cn=Manager,dc=<mycompany>,dc=<mycountry>
do_bind: v3 bind: "cn=Manager,dc=<mycompany>,dc=<mycountry>" to
"cn=Manager,dc=<mycompany>,dc=<mycountry>"
send_ldap_result: conn=5 op=0 p=3
send_ldap_result: err=0 matched="" text=""
send_ldap_response: msgid=13 tag=97 err=0
ber_flush: 14 bytes to sd 19
connection_get(19)
connection_get(19): got connid=5
connection_read(19): checking for input on id=5
ber_get_next
ber_get_next: tag 0x30 len 14 contents:
ber_get_next
do_extended
ber_scanf fmt ({m) ber:
do_extended: unsupported operation "0.0.0.0"
send_ldap_result: conn=5 op=1 p=3
send_ldap_result: err=2 matched="" text="unsupported extended operation"
send_ldap_response: msgid=14 tag=120 err=2
ber_flush: 44 bytes to sd 19
connection_get(19)
connection_get(19): got connid=5
connection_read(19): checking for input on id=5
ber_get_next
ber_get_next: tag 0x30 len 40 contents:
ber_get_next
do_search
ber_scanf fmt ({miiiib) ber:
>>> dnPrettyNormal: <>
<<< dnPrettyNormal: <>, <>
SRCH "" 2 0 1000 0 0
begin get_filter
EQUALITY
ber_scanf fmt ({mm}) ber:
end get_filter 0
filter: (uid=mmehta)
ber_scanf fmt ({M}}) ber:
attrs:
send_ldap_result: conn=5 op=2 p=3
send_ldap_result: err=10 matched="" text=""
send_ldap_response: msgid=15 tag=101 err=32
----------------------------------------------------------------------------
OpenLDAP Log output (invoked by ldapsearch with base parameter specified)
----------------------------------------------------------------------------
<<< dnPrettyNormal: <cn=Manager,dc=<mycompany>,dc=<mycountry>>,
<cn=manager,dc=<mycompany>,dc=<mycountry>>
do_bind: version=3 dn="cn=Manager,dc=<mycompany>,dc=<mycountry>" method=128
==> bdb_bind: dn: cn=Manager,dc=<mycompany>,dc=<mycountry>
do_bind: v3 bind: "cn=Manager,dc=<mycompany>,dc=<mycountry>" to
"cn=Manager,dc=<mycompany>,dc=<mycountry>"
send_ldap_result: conn=10 op=0 p=3
send_ldap_result: err=0 matched="" text=""
send_ldap_response: msgid=1 tag=97 err=0
ber_flush: 14 bytes to sd 21
connection_get(21)
connection_get(21): got connid=10
connection_read(21): checking for input on id=10
ber_get_next
ber_get_next: tag 0x30 len 58 contents:
ber_get_next
do_search
ber_scanf fmt ({miiiib) ber:
>>> dnPrettyNormal: <dc=<mycompany>,dc=<mycountry>>
=> ldap_bv2dn(dc=<mycompany>,dc=<mycountry>,0)
<= ldap_bv2dn(dc=<mycompany>,dc=<mycountry>)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=<mycompany>,dc=<mycountry>)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=<mycompany>,dc=<mycountry>)=0
<<< dnPrettyNormal: <dc=<mycompany>,dc=<mycountry>>,
<dc=<mycompany>,dc=<mycountry>>
SRCH "dc=<mycompany>,dc=<mycountry>" 2 0 0 0 0 ************** This is the place where the base is available **********************
begin get_filter
EQUALITY
ber_scanf fmt ({mm}) ber:
end get_filter 0
filter: (uid=mmehta)
ber_scanf fmt ({M}}) ber:
attrs:
=> bdb_search
bdb_dn2entry("dc=<mycompany>,dc=<mycountry>")
entry_decode: "dc=<mycompany>,dc=<mycountry>"
<= entry_decode(dc=<mycompany>,dc=<mycountry>)
search_candidates: base="dc=<mycompany>,dc=<mycountry>" (0x00000001) scope=2
=> bdb_dn2idl("dc=<mycompany>,dc=<mycountry>")
=> bdb_filter_candidates
AND
=> bdb_list_candidates 0xa0
=> bdb_filter_candidates
OR
=> bdb_list_candidates 0xa1
=> bdb_filter_candidates
EQUALITY
=> bdb_equality_candidates (objectClass)
=> key_read
bdb_idl_fetch_key: [b49d1940]
<= bdb_index_read: failed (-30990)
<= bdb_equality_candidates: id=0, first=0, last=0
<= bdb_filter_candidates: id=0 first=0 last=0
=> bdb_filter_candidates
EQUALITY
=> bdb_equality_candidates (uid)
=> key_read
bdb_idl_fetch_key: [b5212845]
<= bdb_index_read 1 candidates
<= bdb_equality_candidates: id=1, first=37, last=37
<= bdb_filter_candidates: id=1 first=37 last=37
<= bdb_list_candidates: id=1 first=37 last=37
<= bdb_filter_candidates: id=1 first=37 last=37
<= bdb_list_candidates: id=1 first=37 last=37
<= bdb_filter_candidates: id=1 first=37 last=37
bdb_search_candidates: id=1 first=37 last=37
=> test_filter
EQUALITY
<= test_filter 6
=> send_search_entry: conn 10
dn="uid=mmehta,ou=People,dc=<mycompany>,dc=<mycountry>"
ber_flush: 769 bytes to sd 21
<= send_search_entry: conn 10 exit.
send_ldap_result: conn=10 op=1 p=3
send_ldap_result: err=0 matched="" text=""
send_ldap_response: msgid=2 tag=101 err=0
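Comparing the two traces: the XWiki-initiated search runs with an empty base (SRCH "") and the server answers err=32, while the ldapsearch run that supplies the base DN succeeds with err=0. The err= values are standard LDAP result codes from RFC 4511; a small lookup sketch for the codes appearing in these logs:

```python
# Standard LDAP result codes (RFC 4511) for the err= values in the traces above.
LDAP_RESULT_CODES = {
    0: "success",          # the search with an explicit base DN
    2: "protocolError",    # the "unsupported extended operation" response
    10: "referral",
    32: "noSuchObject",    # the search whose base was the empty string ""
}

def explain(err: int) -> str:
    """Map a numeric LDAP result code to its RFC 4511 name."""
    return LDAP_RESULT_CODES.get(err, "unknown")

print(explain(32))
# noSuchObject
```

err=32 (noSuchObject) here is consistent with the search base never reaching the server, which points at the base_DN configuration not being applied on the XWiki side.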
Hi there,
Microsoft just announced at the PDC that Windows Live ID is becoming an OpenID
Provider [1]. This will add millions of new OpenID users to the already
existing ones from Yahoo, AOL, Blogger, Flickr, and so on.
I can hardly wait to see my OpenID support integrated into the next
XWiki release :-)
Cheers,
Markus
[1] http://dev.live.com/blogs/devlive/archive/2008/10/27/421.aspx