Hi Marius,
This works, but my problem is this: I have an XWiki page with a link on
it. When I click the link, a JavaScript function is called, and from that
call I want to include a new XWiki page at the bottom of the current
page without reloading the existing content (like appending a new page
to the current page without changing the old content). The appended
content could be in edit or view mode.
My question is: the approach you describe uses the display macro, but how
do I trigger this macro from JavaScript code? Also, how do I avoid
reloading the old content when appending the new content? The old page
lists some search results, and I don't want to lose them and force users
to search again.
Thanks for your help.
David
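A rough sketch of what is described above: fetch a page's rendered content and append it without touching the rest of the DOM. This is only a sketch, not actual XWiki API; the container id "appendedPages" is an assumption, and it relies on XWiki's xpage=plain parameter, which returns a page rendered without the surrounding skin:

```javascript
// Sketch only: the container id and URL layout are assumptions.
// xpage=plain asks XWiki to render a page without the skin chrome.
function buildDisplayUrl(space, page, action) {
  // action is "view" or "edit" (inline editing would use the edit action)
  return '/xwiki/bin/' + action + '/' + encodeURIComponent(space) +
    '/' + encodeURIComponent(page) + '?xpage=plain';
}

function appendPage(space, page, action) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', buildDisplayUrl(space, page, action));
  xhr.onload = function () {
    // Append the fetched content in a new div so the existing content
    // (e.g. the search results already on the page) is left untouched.
    var container = document.createElement('div');
    container.innerHTML = xhr.responseText;
    document.getElementById('appendedPages').appendChild(container);
  };
  xhr.send();
}
```

Calling appendPage('Blog', 'testpage', 'view') from the link's click handler would then append the rendered page below the existing content without a full reload.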
On Mon, Nov 5, 2012 at 3:33 AM, Marius Dumitru Florea <
mariusdumitru.florea(a)xwiki.com> wrote:
> The display macro works for me as you would expect. For instance, if I
> create a page with this content:
>
> ----------8<----------
> {{velocity}}
> Current action: $xcontext.action
> {{/velocity}}
>
> {{display reference="Blog.BlogIntroduction"/}}
> ---------->8----------
>
> in view mode I can see the blog post preceded by "Current action:
> view" and in "Inline Form" edit mode I can edit the blog post, which
> is preceded by "Current action: edit".
>
> Hope this helps,
> Marius
>
> On Fri, Nov 2, 2012 at 10:18 PM, Geo Du <dddu88(a)gmail.com> wrote:
> > Hi Marius,
> >
> > Thanks for your response. It works when I use the display macro to
> > include the test page in the other page, but I also need to include
> > (display) the page in inline mode inside the other page, since the user
> > can click the edit pencil button in the right corner of the test page to
> > edit it. Right now the edit button leads to the test page in inline mode,
> > but the test page is no longer inside the other page that originally
> > included (displayed) it.
> >
> > So how do I include or display a page in inline mode inside another page?
> >
> > Thanks for your help.
> >
> > David
> >
> > On Fri, Nov 2, 2012 at 3:06 AM, Marius Dumitru Florea
> > <mariusdumitru.florea(a)xwiki.com> wrote:
> >>
> >> On Thu, Nov 1, 2012 at 10:51 PM, Geo Du <dddu88(a)gmail.com> wrote:
> >> >
> >> > Hi All,
> >> >
> >> > I want to include one page in another page in terms of content
> >> > instead of Velocity code. For example, Blog.WebHome is a page without
> >> > Velocity code if you choose Edit->Wiki, but it has a Blog.BlogClass
> >> > object if you select Edit->Objects. From the Blog.WebHome page I can
> >> > create a new post with the title "testpage"; Blog.testpage is then the
> >> > new page that I need to include in another page. This test page has no
> >> > Velocity code under Edit->Wiki. So how do I include that page in a
> >> > different page?
> >> >
> >>
> >> > I tried the include macro, the includeInContext macro and the
> >> > includeTopic macro; none of them displays the test page for me. Any
> >> > idea?
> >>
> >> "display" is the key. You want to display not to include. See
> >> http://extensions.xwiki.org/xwiki/bin/view/Extension/Display+Macro .
> >>
> >> Hope this helps,
> >> Marius
> >>
> >> >
> >> > Thanks very much for your help.
> >> >
> >> > David
> >> > _______________________________________________
> >> > devs mailing list
> >> > devs(a)xwiki.org
> >> > http://lists.xwiki.org/mailman/listinfo/devs
> >> _______________________________________________
> >> devs mailing list
> >> devs(a)xwiki.org
> >> http://lists.xwiki.org/mailman/listinfo/devs
> >
> >
>
fyi.
Reminder: we have a license for the xwiki project.
Thanks
-Vincent
Begin forwarded message:
> From: YourKit Information Service <info(a)yourkit.com>
> Subject: YourKit Java Profiler 12 released
> Date: December 14, 2012 11:50:49 AM GMT+01:00
> To: vincent(a)massol.net
>
> Greetings,
>
> We are glad to announce immediate availability of YourKit Java Profiler 12
> released on December 3, 2012.
>
> It can be downloaded at http://www.yourkit.com/download/
>
> MOST NOTABLE CHANGES AND NEW FEATURES:
> ======================================
>
> NEW PLATFORMS SUPPORTED:
>
> - Linux on ARM
>
> - Linux on PPC
>
> CPU PROFILING:
>
> - Tracing overhead significantly reduced: profiled applications run
> up to 10%-50% faster than with the previous profiler version, due to
> new adaptive tracing mode and optimizations
>
> - Tracing accuracy increased
>
> - Reworked tracing and sampling settings
>
> MEMORY PROFILING:
>
> - New feature: "Class tree" view which is similar to "Class list", but shows classes
> grouped by package
>
> - New feature: memory views such as Class list allow selection of multiple rows
>
> - "Duplicate strings" inspection: the results are shown under a new grouping root node
> which presents the total waste in all the duplicate strings
>
> - Optimization: snapshots with big primitive arrays are opened faster
>
> - Optimization: performance of "Incoming References" view has been dramatically improved
>
> - Improved calculation of exact retained size in "Class list" and similar views:
> more items are processed per click if calculation speed allows
>
> - Improvement: available CPU cores are used for parallel computations in:
> - Class list
> - Class tree
> - Generations
> - Reachability scopes
> - Class loaders
> - Web applications
> - Object ages
>
> - Improvement: "Calculate exact retained sizes" action uses available CPU cores
> to perform calculation in parallel
>
> - Optimization: allocation recording overhead has been reduced for multithreaded
> applications: code being profiled runs up to 30% faster when each 10th object
> is recorded (the default setting), and up to 70% faster when each 100th object
> is recorded, compared with the previous version
>
> - Improvement: web application context path (URL) is now shown in addition to the name
>
> - Web applications: added support of Jetty (versions 6, 7, 8)
>
> - Class instance count telemetry: scalability improved
>
> TELEMETRY:
>
> - CPU usage telemetry: kernel CPU time is shown as a separate curve,
> in addition to the main user + kernel CPU time graph
>
> - Graph rendering has been optimized, making UI much more responsive, especially
> when using bigger scales
>
> PROBES:
>
> - New feature: ability to clear tables.
> Get rid of older events you are not interested in anymore,
> or give space for new events if the table capacity limit has been reached.
>
> - "Probes" tab layout has been changed to gives more vertical space for browsing
> event lists, and make the UI more consistent
>
> - Class loading probe can be optionally disabled
>
> IDE INTEGRATION:
>
> - Eclipse, IntelliJ IDEA, NetBeans 7.0 and newer plugin
> automatically detects 32-bit and 64-bit JVMs, instead of relying on user input
>
> - Eclipse: Maven run configurations supported in Eclipse 3.7 and newer
>
> - IntelliJ IDEA 12 supported
>
> - NetBeans 7.3 supported
>
> J2EE INTEGRATION:
>
> - J2EE integration wizard: added Jetty 6 and newer support
>
> USER INTERFACE:
>
> - Improvement: the left vertical tab group avoids a scrollbar when many tabs are opened
>
> - Added a quick way to switch between applying and not applying filters in the UI
>
> - Added support of high-contrast color schemes
>
> - Call tree and back traces views: added popup menu item to expand selected node
> down to 5 levels, as a supplement to the existing item which expands the node fully
>
> MISCELLANEOUS:
>
> - Export with command line: class list is exported for performance snapshots too
> (as seen in Memory tab | Class list)
>
> - Agent: log file name now contains the session name to better separate logs from
> different applications
>
> - Agent: added an option to store logs from several runs of the same application
> in a series of log files named <session name>.<running number>.log.
> This mode can be useful when profiling applications such as servers,
> where having a unified log is better than having a separate log for each server start.
>
> - Agent: Groovy 2.0 supported
>
> - Other bug fixes and improvements
>
> See complete list of changes at http://www.yourkit.com/changes/
>
> Kindest regards,
> YourKit Team
>
Hi devs,
We have too many test failures on http://ci.xwiki.org/view/Functional%20Tests/ and too many emails sent by Jenkins on the list.
It has become a nightmare and it's impossible to perform a release anymore with good confidence that it's going to work.
This is all the worse given that we're ending the 4.x cycle.
Thus I propose to do the following:
* Don't release 4.4M1 till all tests are passing with no more flickers (say the tests should all pass during 10 full builds for example)
* Create a Commando unit in charge of solving the flickers. Since I've already discussed this with Marius I propose that Marius and myself be the first 2 members. If anyone else would like to help please reply to this mail and join us.
* This commando unit gives itself 1 full week to solve the flickers (ie till the 21st of December). We'll decide what to do next if we fail to achieve our goal after that deadline.
* We start by creating a branch for 4.4M1 so that we isolate ourselves from the rest of the devs who continue to work for 4.4RC1 (reminder: only important bug fixes should go in 4.4RC1)
* When we have fixed all flickers on the 4.4M1 branch we merge the changes to both master and the stable-4.3 branch
* At the end of next week we also propose a strategy so that this mess doesn't happen again in the future
WDYT?
Thanks
-Vincent
Note: We need to release 4.3.1 ASAP, so the strategy above will not apply to 4.3.1. For 4.3.1 Edy will need to figure out whether the failing tests are real issues or test issues. I think Edy could do this with a combination of running them locally and doing manual tests for those that also fail locally. Edy, WDYT?
Hi devs,
4.4 and 4.5 go together since they contain the leftover work that we wish to do for the 4.x cycle.
Could all developers please edit http://enterprise.xwiki.org/xwiki/bin/view/Main/Roadmap for both 4.4 and 4.5 and spread between 4.4 and 4.5 the JIRAs that they wish/need to fix for the 4.x cycle?
So we need:
* All JIRA created
* JIRA listed in that roadmap page
* Committers assigned to them
It's important that we have a good vision of the leftover work and ensure we can reach the targeted dates.
Reminder: AWM and EM need to be working well and be production-ready at the end of 4.5. I think that's the case for AWM (even if some improvements could still be made), but EM has still not achieved our goal of "being able to update a wiki farm in a few minutes". I think Thomas and Marius should really focus on this for the remaining time they have.
I'd also like to propose dates for the 4.5 release:
* 4.5M1: 14th of Jan 2013
* 4.5RC1: 21st of Jan 2013
* 4.5 Final: 4th of February 2013
WDYT?
We can do it! :)
-Vincent
Hello,
Right now all code in the lucene plugin is exposed as API while almost
none of it is actual API.
I would like to move all lucene plugin classes to an internal package,
except for "LucenePluginApi". For the non API "LucenePlugin", I'm not
sure, since moving it would break users conf (xwiki.cfg).
WDYT ?
My +1,
Jerome
Hi devs,
I'd like to propose that we stop shading Rendering Standalone. The reasons are:
1) It's far from perfect. For example we have at least 3 libs we cannot shade:
<!-- We don't relocate the following packages since they cause runtime issues:
- javax.xml
- org.w3c
- org.apache.xerces
-->
2) As we added new libs to our deps we forgot to relocate them, so right now we don't shade (to cite a few): com.steadystate.css, javax.validation, ant, aspectj, slf4j, etc.
3) There are lots of resources coming from dependent jars and those are not shaded. For example:
283 Tue Dec 04 18:50:42 CET 2012 javacc.class
286 Tue Dec 04 18:50:42 CET 2012 jjdoc.class
235 Tue Dec 04 18:50:42 CET 2012 jjtree.class
0 Tue Dec 04 18:50:42 CET 2012 org/xwiki/shaded/javacc/
or
3783 Tue Dec 04 18:50:42 CET 2012 org/xwiki/shaded/javacc/utils/JavaFileGenerator.class
3693 Tue Dec 04 18:50:42 CET 2012 templates/CharStream.template
15990 Tue Dec 04 18:50:42 CET 2012 templates/JavaCharStream.template
867 Tue Dec 04 18:50:42 CET 2012 templates/MultiNode.template
1317 Tue Dec 04 18:50:42 CET 2012 templates/Node.template
5962 Tue Dec 04 18:50:42 CET 2012 templates/ParseException.template
12711 Tue Dec 04 18:50:42 CET 2012 templates/SimpleCharStream.template
3227 Tue Dec 04 18:50:42 CET 2012 templates/SimpleNode.template
4005 Tue Dec 04 18:50:42 CET 2012 templates/Token.template
368 Tue Dec 04 18:50:42 CET 2012 templates/TokenManager.template
4244 Tue Dec 04 18:50:42 CET 2012 templates/TokenMgrError.template
48 Tue Dec 04 18:50:42 CET 2012 version.properties
So I'd like to keep a standalone distribution to make it easy to test XWiki Rendering but without any shading.
Here's my +1
Thanks
-Vincent
Hi devs,
We've agreed on the list of databases and browsers we want to support, but I couldn't find an agreement on the screen resolutions we want to support.
I had in mind 1280x1024 as the minimal resolution for laptop/desktop computers.
Note that it's important to know this for the following reasons:
* When we do our tests we should do them with various resolutions but especially with the minimum supported resolution
* This is true for our automated tests (we run a vnc server in a given resolution on the jenkins agents) but also for the manual tests done by Sorin/Silvia/Manuel and everyone else
WDYT?
Once we agree I'll post the result on xwiki.org
Thanks
-Vincent
This message is in response to Sergiu Dumitriu
I resent it to revive this thread.
>- While I agree that being able to sort objects is important, I'd rather
> see this in a patched version of the $sorttool instead of a separate
>component; I've checked it and we'd just have to change the way
>getComparable works. Doing this means that we're going to reuse a tool
>that we're already supposed to be using for general sorting, and we'd
>get as benefits the ability to specify a sort direction, and the ability
>to mix properties and other object metadata, like owner document or
>object number.
I agree that the functionality in ObjectSort should go into $sorttool, but
I have no influence over the development of XWiki, so I advertised it as a
contrib project.
What is getComparable by the way?
>- Even as it is right now, I don't like that the script service only has
>a getInstance method and then exposes the internal object.
I thought that if I put just getInstance in the script service, I could
reduce redundancy in the code.
But I'm only a starting programmer. There are certainly better designs than
mine.
>- I don't think that both sort and sortCopy are needed. Only the method
>that sorts a copy and returns it should be available.
If only the method that sorts a copy and returns it is available, memory
usage can increase, so I introduced an in-place sort method too.
Why do you think an in-place sort method should not be available?
>- sortCopyByProps should also be called just sort, since the type of
>parameters passed will be enough for the right method to be called
>(polymorphism). And I'm actually wondering if we need both methods or
> not, since sorting by just one property could be done by passing a list
>with just one member.
You're right. I thought
$services.objectsort.getInstance().sortByProps($doc.getObjects("Class"),
["port", "day:asc", "hour:desc"]) was possible but
$services.objectsort.getInstance().sortByProps($doc.getObjects("Class"),
"port") wasn't, because I thought a String couldn't be cast to a
List<String>.
However,
$services.objectsort.getInstance().sortByProps($doc.getObjects("Class"),
"port") just worked out of the box.
I'll remove sort and sortCopy and rename sortByProps and sortCopyByProps to
sort and sortByProps, respectively.
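For illustration, the behaviour discussed above (sorting by a list of property specs like "day:asc" or "hour:desc", while also accepting a single property name) could be modeled like this. This is not the actual ObjectSort code, just a sketch of the intended semantics:

```javascript
// Illustrative model of ObjectSort's sortByProps: returns a sorted copy
// (the original list is untouched), accepting either a single property
// name or a list of "name:asc" / "name:desc" specs.
function sortByProps(objects, specs) {
  if (typeof specs === 'string') {
    specs = [specs]; // a single property name is treated as a one-item list
  }
  var keys = specs.map(function (spec) {
    var parts = spec.split(':');
    // direction defaults to ascending when no ":asc"/":desc" suffix is given
    return { name: parts[0], dir: parts[1] === 'desc' ? -1 : 1 };
  });
  return objects.slice().sort(function (a, b) {
    for (var i = 0; i < keys.length; i++) {
      var k = keys[i];
      if (a[k.name] < b[k.name]) return -k.dir;
      if (a[k.name] > b[k.name]) return k.dir;
    }
    return 0;
  });
}
```

With this shape, sortByProps(objects, "port") and sortByProps(objects, ["day:asc", "hour:desc"]) both work, which mirrors why the single-String call "just worked out of the box".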
Hello devs,
This mail follows a discussion we had with Eduard on IRC concerning
the indexing of object property values. On the current Solr
implementation as well as on our lucene plugin, all property values are
stored as text/strings. I've expressed the idea that we probably want to
store each object's property in a field that matches the XClass property
field type. For example, store integers in integer field types, double
as doubles, etc.
My personal use case is to store geo objects (for example long/lat
coordinates), but I think this has value for other types, numbers for
example (it means you can use those numbers as such when querying, for
instance).
Now this will increase the complexity of querying, since you would have
for example property_text, property_integer, property_double, etc. vs.
just propertyvalue. Again, I think this complexity should be hidden by
the "expanding API" Paul mentioned in the mail regarding document
translations.
WDYT ?
Jerome
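The typed-field idea could look roughly like this with Solr dynamic fields (field and type names are purely illustrative, assuming the stock numeric types from the Solr example schema):

```xml
<!-- Illustrative only: one dynamic field per XClass property type -->
<dynamicField name="property_*_i" type="int"    indexed="true" stored="true"/>
<dynamicField name="property_*_d" type="double" indexed="true" stored="true"/>
<dynamicField name="property_*_s" type="string" indexed="true" stored="true"/>
```

An XClass Number property "port" would then be indexed as property_port_i, making range queries such as property_port_i:[1024 TO 65535] possible, which is the "use those numbers as such when querying" benefit mentioned above.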
Hello,
I'm making a new Holiday Request Application and I'm about to finish
the first version, so I would like to have a GitHub repo on xwiki-contrib
for it. I would also need a Maven groupId and a page for the project on
extension.xwiki.org so I can describe it a bit further. You can already
see the code of the application in my own GitHub repo:
https://github.com/tdelafosse/holiday-request.
Thanks,
Thomas
Hi devs,
This issue has already been discussed previously [1] during the GSoC
project, but I am not particularly happy with the chosen approach.
When handling multiple languages, there are generally[2][3] 3 different
approaches:
1) Indexing the content in a single field (like title, doccontent, etc.)
- This has the advantage that queries are clear and fast
- The disadvantage is that you cannot run well-tuned analyzers on the
fields, having to resort to (at best) basic tokenization and lowercasing.
2) Indexing the content in multiple fields, one field for each language
(like title_en, title_fr, doccontent_en, doccontent_fr, etc.)
- This has the advantage that you can easily specify (as dynamic fields)
that *_en fields are of type text_en (and analyzed by an english-centered
chain of analyzers); *_fr of type text_fr (focused on french, etc.), thus
making the results much better.
- The disadvantage is that querying such a schema is a pain. If you want
all the results in all languages, you end up with a big and expensive
query. If you want just one language, you have to read the right fields
(e.g. title_en) instead of just getting a clear field name (title).
-- Also, the schema.xml definition is static in this regard,
requiring you to know beforehand which languages you want to support (for
example when defining the default fields to search). Adding a new
language requires you to start editing the XML files by hand.
3) Indexing the content in different Solr cores (indexes), one for each
language. Each core requires its own directory and configuration files.
- The advantage is that queries are clean to write (like option 1) and that
you have a nice separation
- The disadvantage is that it's difficult to get it right (administrative
issues) and then you also have the (considerable) problem of having to fix
the relevancy score of a query result that has entries from different
cores; each core has its own relevancy computed and does not consider the
others.
- To make it even worse, it seems that you cannot [5] push the
configuration files to a remote Solr instance when creating a new core
programmatically. However, if we are running an embedded Solr instance, we
could provide a way to generate the config files and write them to the data
directory.
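For reference, option 2)'s per-language dynamic fields would be declared along these lines in schema.xml (an illustrative sketch; the exact analyzer chains and resource files would need tuning per language):

```xml
<!-- One dynamic field per language, each mapped to a language-specific type -->
<dynamicField name="*_en" type="text_en" indexed="true" stored="true"/>
<dynamicField name="*_fr" type="text_fr" indexed="true" stored="true"/>

<!-- Example analysis chain for French -->
<fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ElisionFilterFactory" ignoreCase="true"
            articles="lang/contractions_fr.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="lang/stopwords_fr.txt"/>
    <filter class="solr.FrenchLightStemFilterFactory"/>
  </analyzer>
</fieldType>
```

This is exactly where the static-schema disadvantage shows: each new language means a new dynamic field, a new field type and new resource files, all added by hand.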
Currently I have implemented option 1) in our existing Solr integration,
which is also more or less compatible with our existing Lucene queries, but
I would like to find a better solution that actually analyses the content.
During GSoC, option 2) was preferred but the implementation did not
consider practical reasons like the ones described above (query complexity,
user configuration, etc.)
On a related note, I have also watched an interesting presentation [3]
about how Drupal handles its Solr integration and, particularly, a plugin
[4] that handles the multilingual aspect.
The idea seen there is that you have a UI that helps you generate
configuration files depending on your needs. For instance, you (the admin)
check that you need search for English, French and German, and the
UI/extension gives you a zip with the configuration you need to use in your
(remote or embedded) Solr instance. The configuration for each language
comes preset with the analyzers you should use for it and the additional
resources (stopwords.txt, synonyms.txt, etc.).
This approach avoids forcing admins to edit XML files and could also
still be useful for other cases, not only option 2).
All these problems basically come from the fact that there is no way to
specify in the schema.xml that, based on the value of a field (like the
field "lang" that stores the document language), you want to run this or
that group of analyzers.
Perhaps a solution would be a custom kind of "AggregatorAnalyzer" that
would call other analyzers at runtime, based on the value of the lang
field. However, this solution could only be applied at index time, when
the language information is available (in the SolrDocument to be indexed).
When you perform the query, you cannot analyze the query text, since you
do not know the language of the field you're querying (it was determined
at index time) and thus do not know what operations to apply to the query
(to reduce it to the same form as the indexed values).
I have also read another interesting analysis [6] on this problem that
elaborates on the complexities and limitations of each option. (Ignore the
Rosette stuff mentioned there)
I have been thinking about this for some time now, but the solution is
probably somewhere in between, finding an option that is acceptable while
not restrictive. I will probably also send a mail on the Solr list to get
some more input from there, but I get the feeling that whatever solution we
choose, it will most likely require the users to at least copy (or even
edit) some files into some directories (configurations and/or jars), since
it does not seem to be easy/possible to do everything on the fly,
programmatically.
Any input on this would be highly appreciated, especially if others have
more experience with Solr setups.
Thanks,
Eduard
----------
[1] http://markmail.org/message/kaxaka7lsbgo57ms
[2]
http://lucidworks.lucidimagination.com/display/lweug/Multilingual+Indexing+…
[3]
http://drupalcity.de/session/language-specific-and-multilingual-full-text-s…
[4] http://drupal.org/project/apachesolr_multilingual
[5]
http://stackoverflow.com/questions/4064880/create-new-core-directories-in-s…
[6]
http://info.basistech.com/blog/bid/171842/Indexing-Strategies-for-Multiling…
Hi devs,
For the Mobile App investigation, Ludovic has been playing with an XWiki
Mobile App prototype done with jQuery Mobile (http://jquerymobile.com/).
This is a proposal for a mobile application that matches the jQuery Mobile
framework capabilities and that provides minimal XWiki functionality (like
listing wikis, spaces, pages, accessing content, viewing recent activity,
etc.)
http://incubator.myxwiki.org/xwiki/bin/view/Improvements/MobileApp
Thanks,
Caty
Hello XWiki experts,
has anyone started an OAI-PMH endpoint implementation?
It seems to me that I could do this in Groovy and Velocity... but maybe I should rather use an existing library.
If anyone has done a part of this, I'd be interested.
thanks in advance
Paul
Hi devs,
4.4 and 4.5 are the last 2 stabilization releases for the 4.x cycle. As such they are meant to be short releases (1 month per release) and the idea is to have:
- 4.4: December
- 4.5: January
This will allow us to start working on 5.0 at the beginning of February.
Thus for 4.4 (and 4.5) I propose to work on the following stabilizations (we shouldn't work on new features):
* AWM stabilization. Assignee: Marius
** Remove the i18n hack and use the new localization module to create a translation bundle for the application
** Add new field types (for page, image and attachment at least, with pickers)
** Improve the title and content fields (e.g. prevent dragging more than one title or content field)
* Extension Manager. Specifically, we still need to be able to install/upgrade a wiki farm in a few minutes. Assignee: Thomas/Marius
** XWIKI-8252: Migration from an older version will cause many merge conflicts with the Distribution Manager
** XWIKI-8443: When uninstalling a XAR extension a question should be asked for various conflict use cases
** Find a way to allow each wiki admin to perform upgrades, instead of having the whole farm upgraded by a farm admin who doesn't always know how to resolve conflicts (as on myxwiki.org, for example)
** XWIKI-8173 (EM should not allow installing package exposing an installed feature)
* Translation module stabilizations/improvements. Assignee: Thomas
** XWIKI-8263 (Allow providing translations in a jar extension).
* SOLR improvements: we need to continue working on it and we can decide in the course of 4.4/4.5 if it's good enough to be made the default search or if we need to wait for 5.x to make it the default. Assignee: Edy
* Usability: small usability improvements. Assignee: Caty/JV. Caty/JV, could you please list what you'd like to work on?
* Workspace bug fixes (there are some raised by Anca for example). Assignee: Edy
* And a lot of bug fixes. Manuel reported a lot of browser issues for IE that we need to fix
Anything else committers/contributors would like to work on for 4.4?
Dates
=====
4.4M1: 17 Dec
4.4RC1: 31 Dec
4.4Final: 7 January
Note that I'd normally have put RC1 on the 24th, but since that's during the Christmas holidays, I've given RC1 2 weeks instead.
Can everyone review what I've put tentatively and tell me if it's ok? Also could you create the associated JIRA issues and reply to this email with them so that I can prepare the roadmap page on xwiki.org?
Thanks a lot
-Vincent
Hi devs,
Here are some notes I took while releasing XWiki 4.3 on how to improve the release process:
* We need to automate the generation of the CLIRR report. This is what takes the longest when releasing (not overall time but manual time required). IMO this can be done relatively easily by creating a patch for the CLIRR maven plugin itself to:
** Add support for wildcards in the new <difference> syntax
** Ensure that their report generation takes into account the <justification> element
** Possibly add a text report generation that we'll be able to copy paste in our RN
* Remove PURL generation for Tweets
** Tweets are ephemeral so no need to have permanent URLs
** Tweet clients already do URL shortening
(It's complex to use the PURL UI too)
* Don't create RN summaries for OW2, email and even the blog post. This will save some more time. Instead just link to the RN page, which contains a summary and all the details. For example, for the email it could be:
"
The XWiki development team is proud to announce the availability of XWiki Enterprise <version>.
You can download it here: http://www.xwiki.org/xwiki/bin/view/Main/Download
Make sure to review the release notes:
http://www.xwiki.org/xwiki/bin/view/ReleaseNotes/ReleaseNotesXWiki<short version>
Thanks
-The XWiki dev team
"
* Put maven.xwiki.org:~/xwiki-release-scripts under Git and do a git reset before the ow2 step to be sure to have clean scripts
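For reference, the wildcard support proposed for the CLIRR <difference> syntax could look something like this in an ignored-differences file (a sketch; the exact syntax would depend on the patch):

```xml
<differences>
  <!-- 7002 is CLIRR's "method removed" difference type -->
  <difference>
    <differenceType>7002</differenceType>
    <!-- Wildcard support would let one entry cover all internal classes -->
    <className>org/xwiki/**/internal/**</className>
    <method>*</method>
    <justification>Internal code, not part of the public API</justification>
  </difference>
</differences>
```

The <justification> element is what the report generation would then pick up, so that each accepted breakage is documented in the RN automatically.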
WDYT?
Thanks
-Vincent
Hi devs,
I've written some conventions for translation messages at
http://dev.xwiki.org/xwiki/bin/view/Community/L10N+Conventions
Please provide feedback, since we terribly lack some proper rules here,
and there aren't two applications that use the same conventions. Now
that we have support for modular translation documents, it's a good time
to clean up translations and move things out of ApplicationProperties
into each application.
--
Sergiu Dumitriu
http://purl.org/net/sergiu
The XWiki development team is proud to announce the availability of XWiki Enterprise 4.3.
This release brings several improvements in Workspaces, Extension Manager, Distribution Wizard, the REST API, new field pickers (Date, User and Groups), improved translation registration and new experimental Solr search.
You can download it here: http://www.xwiki.org/xwiki/bin/view/Main/Download
Make sure to review the release notes:
http://www.xwiki.org/xwiki/bin/view/ReleaseNotes/ReleaseNotesXWiki43
Thanks
-The XWiki dev team
Hello,
An annoying problem:
- One wiki in production with one space and an event listener working on it,
sending an email as soon as a page is created in the space.
- In dev, the same wiki: we added a new space with another event listener,
sending an email as soon as a page is created in that space (in short, the
mail and the conditions for sending it are different, etc.)
Everything is OK!
So we decided to move it to production. On pre-production the second
listener doesn't send emails. In the logs we have this error:
com.xpn.xwiki.XWikiException: Error number 2 in 0: The wiki xxx does not
exist
xxx is the name of our wiki and is declared as an alias of the wiki. We are
on version 2.6.2.
I don't see what we are missing, so we cannot move it to production.
Thanks for your help.
Patrice
--
Sent from the XWiki- Dev mailing list archive at Nabble.com.