Hi devs,
Taking advantage of the less noisy period of the holidays, Flavius,
Marta and I have developed a new wizard for editing color themes,
much more "WYSIWYG". It still needs work, but the current version is
already very usable, so it should go through a UI review.
To test it, just try to edit any color theme on the incubator:
http://incubator.myxwiki.org/xwiki/bin/view/ColorThemes/
Comments and suggestions welcome.
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi devs,
There are many things that could be improved in the XWiki
Administration, but for the moment I would like to discuss the
Presentation part, which is one of those most likely to be accessed by
newbies, and should be easy to deal with.
Currently, the Presentation section allows configuring the following topics:
- Header: title bar text and meta information
- Panels: whether to show panels on the right/left, and the list of
panels to display in each column
- Footer: copyright notice and version
- Skin: Skin document, Color theme, Default stylesheet, other stylesheets
Problems:
- IMO, some fields are not really presentation related: title, meta
information, copyright notice, version info
- It is not easy to list the panels you want without any help or suggestion
- The panel columns and panel list configuration feature is also
available in a friendlier form in the Panel Wizard section of the
administration, but there is no reference to it from the Presentation
section
- There is no suggestion about available skins, and the user is not
"warned" that customizing the skin actually means changing templates and css
- "Default stylesheet" and "Other stylesheets" mean nothing to someone
who didn't look in the skin directory; also, as I see it, they are only
useful for the Toucan skin (where there were several pre-defined
stylesheets for different colors), while in the Colibri skin -- and
probably the other skins that will be developed from now on -- we use
Color themes for changing the look.
Proposed changes:
- Move Header and Footer topics to the General section
- Keep 4 topics: Page layout, Panels, Color theme, Advanced skin
configuration, displayed in a horizontal tab bar (like the one in AllDocs)
[Page layout]
- Use something similar to the Page Layout tab from the wizard to
choose whether the right/left panels are shown
[Panels]
- Continue to allow listing the panels in input fields, since for
some users it is faster and easier than playing with the panel wizard,
but attach an AJAX suggest to those input fields
- Display the Panel wizard (and remove the panel wizard section from
the Administration); Note: the panel wizard will need some adjustments
for this to be possible.
[Color themes]
- Integrate the ColorTheme "application" (or soon-to-be application):
allow browsing, previewing and selecting available color themes, and
creating a new color theme
[Advanced skin configuration]
- Inform the user that they would need to write their own templates and
stylesheets, either in the provided textareas or in files attached to
the corresponding skin object
- Allow browsing, previewing and selecting skins
There are many changes, and they will require much more than a couple of
days (there won't be time for them to show up in 2.2, for example), but
if we agree, they can be progressively integrated in future versions.
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi everyone!
I'm starting on a re-implementation of LDAP support for XWiki, one that performs upward sync instead of downward sync (updating LDAP instead of updating XWiki). This is for a unified platform we (a friend and myself) are working on, where XWiki is the "master app".
After reading through (some of) the sources for xwiki-core, it seems to me the entire LDAP infrastructure is located in 2 packages: com.xpn.xwiki.plugins.ldap.* and com.xpn.user.impl.LDAP.*
Before I begin, I'd like to confirm this so I can focus exclusively on the code I need to re-implement and not have any surprise LDAP dependencies creep up later on. So, am I correct that these are the packages that need to be rewritten for upward sync?
Thanks!
Hi devs,
I'm almost done with my entity reference refactoring and I've just
realized I have missed something, I think. So far the implementation
only supports absolute references (i.e. the entity reference factory
always returns a reference with all parts filled in; you choose to use
a default factory or a current-entity factory depending on how you wish
to resolve the names when they have not been provided in the passed
reference string).
I now think we must also support relative references (i.e. when some
parts can be null) and that it's up to the user of the api to decide
if they want to convert a relative reference to an absolute one or not.
Here's a use case: renaming of documents. For example, documents have
links specified as a string representing the target doc name. If we
don't have relative references then we need to decide if we want to
use the default serializer (all parts printed including wiki name) or
the compact serializer (only parts different from context reference
printed). This doesn't support printing only what the user had decided
to fill. For ex a user might have specified voluntarily the space and
page name and right now with my implementation he'll get only the page
name specified if the new space is the same as the space for the
current doc.
So here's my proposal:
* Entity Reference Factory leaves parts to null when not specified in
the string representation.
* We add an EntityReference.getAbsoluteReference(EntityReference base)
method to return an absolute reference. It's resolved against the
passed base reference (i.e. parts not specified are taken from it)
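To make the proposal concrete, here's a minimal sketch of the intended resolution behaviour. The map-based modelling and the resolve() name are purely illustrative, not the actual EntityReference API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a reference is modelled as an ordered map of part name -> value,
// where null means "not specified in the parsed reference string".
public class RelativeRefSketch
{
    // Fill in the parts missing from the relative reference using the base one.
    public static Map<String, String> resolve(Map<String, String> relative,
        Map<String, String> base)
    {
        Map<String, String> absolute = new LinkedHashMap<>();
        for (Map.Entry<String, String> part : relative.entrySet()) {
            String value = part.getValue();
            // Unspecified parts are taken from the base reference.
            absolute.put(part.getKey(), value != null ? value : base.get(part.getKey()));
        }
        return absolute;
    }
}
```

So a reference parsed as "Main.WebHome" (wiki part left null) resolved against a base of "xwiki:XWiki.WebHome" would yield "xwiki:Main.WebHome".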
WDYT?
I'm going to start refactoring my code to do this later today so
please let me know if you see any problem with it.
Thanks
-Vincent
Hi
For the development of the Groovy-based Blog I just developed the code in IntelliJ, copied it into a browser and eventually exported the content into a XAR file. Slowly but surely this is getting to be way too much work, especially when doing sweeping changes.
Because I don't use Eclipse I am not able to use the XEclipse tool, but I was wondering if anybody knows a way to XML-encode text (within Maven2) so that Ant's copy and filter tasks could later be used to incorporate the developed code / content into the XML file that builds up the XAR file.
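The escaping itself is the easy part; a small helper class run from the build could do it before the copy/filter step. A sketch of what such a step would have to do (the class and method names are made up, this is not an existing Maven or Ant feature):

```java
// Hypothetical helper: XML-escapes source text so it can be spliced into a
// page's content element when assembling the XAR's XML files.
public class XmlEscaper
{
    public static String escape(String text)
    {
        StringBuilder out = new StringBuilder(text.length());
        for (char c : text.toCharArray()) {
            switch (c) {
                // The five characters that must always be escaped in XML text.
                case '&': out.append("&amp;"); break;
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '"': out.append("&quot;"); break;
                case '\'': out.append("&apos;"); break;
                default: out.append(c);
            }
        }
        return out.toString();
    }
}
```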
Thanks - Andy
Hi devs,
We need to define a strategy for better handling translations. I've
had a call with Guillaume and Jean-Vincent and here's the process we'd
like to propose:
* One person is in charge of http://l10n.xwiki.org/. This means
monitoring the work there, coordinating validation of key values and
ensuring validated translations are incorporated in the source tree.
Guillaume is willing to take that role for now.
* The XE release manager has the responsibility of taking the
validated keys on l10n.xwiki.org and committing them during the
Milestone 2 dev (before the RC1).
* The l10n manager should ping the release manager whenever there are
translated and validated keys ready to be incorporated or if there
have been important changes to be included in the release after M2 has
been released.
* The l10n manager should test XE and the applications after the keys
have been applied to ensure quality. Basically the l10n manager is
responsible for the quality of translations in general.
Here's my +1
Thanks
-Vincent
On 12/31/2009 08:39 AM, asiri (SVN) wrote:
> Author: asiri
> Date: 2009-12-31 08:39:16 +0100 (Thu, 31 Dec 2009)
> New Revision: 25984
>
> Modified:
> contrib/sandbox/xwiki-officepreview/pom.xml
> contrib/sandbox/xwiki-officepreview/src/main/java/org/xwiki/officepreview/OfficePreviewVelocityBridge.java
> contrib/sandbox/xwiki-officepreview/src/main/java/org/xwiki/officepreview/internal/OfficePreviewVelocityContextInitializer.java
> Log:
> [misc] Changing all platform dependencies to 2.1-SNAPSHOT version.
>
> * This will make it possible to use xwiki-officepreview with XE 2.1.x versions.
>
> * Downside is that xwiki-officepreview will not be able to preview Office 2007 documents (support for which was added in xwiki-officeimporter 2.2M1).
Why make it compatible with 2.1.x, when it comes at the price of
reduced functionality? From what I see in the commit, you switched to
older APIs that are deprecated in 2.2, which means that a new module
will be released with deprecated code already, and at some later time it
will be harder to migrate, once more code is written.
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi Devs & Users,
With the new refactoring of the officeimporter module, it's possible to
implement a generic office document converter on top of
xwiki-officeimporter. By a "document converter" I mean an XWiki application
where you can upload a .doc file and get it converted to a .pdf, .odt, etc.
Supported formats will be those mentioned in
http://artofsolving.com/opensource/jodconverter/guide/supportedformats and
a few more Office 2007 formats.
Would this application be a good addition to XWiki?
Thanks.
- Asiri
PS: It would take about 2-3 days to fully implement and test the
application.
They are not using XWiki... yet!
Cheers,
Ricardo
-------- Original Message --------
Subject: [Obo-discuss] CFP: The Future of the Web for Collaborative
Science (FWCS 2010) at WWW'10
Date: Fri, 11 Dec 2009 16:10:54 +0000
From: Jun Zhao <jun.zhao(a)zoo.ox.ac.uk>
Reply-To: obo-discuss(a)lists.sourceforge.net
To: undisclosed-recipients:;
[apologies for cross-posting]
=================================================================================================
CALL FOR PAPERS - International Workshop on The Future of the Web for
Collaborative Science 2010
=================================================================================================
The First International Workshop on The Future of the Web for
Collaborative Science (http://esw.w3.org/topic/HCLS/WWW2010/Workshop),
co-located with WWW'10, April 27 or 28 2010, Raleigh, NC, USA
---------------------------------------------------
INTRODUCTION
The Web was originally invented with the physics community in mind, but
rapidly expanded to include other scientific disciplines, in particular
the health care and life sciences. By the mid 1990s the Web was already
being used to share data by biomedical professionals and
bioinformaticians. The Web continues to be immensely important to these
fields, however use cases have expanded considerably. Researchers are
now looking to share extremely large data sets on the Web, extract
insights from vast numbers of papers across sub-disciplines, and use
social networking tools to aggregate data and engage in scientific
discussion. Furthermore, individuals are beginning to store their
medical records online, and some are sharing their genetic makeup in a
bid to find others with a similar profile. These use cases are pushing
the boundaries of what is currently possible with the Web. This half-day
workshop will present how scientists are currently using the Web, and
discuss the functionality that is required to make the Web an ideal
platform for both cutting edge scientific collaboration and for managing
health care related data.
The goals of this workshop are the following:
* Foster innovation in applying the latest web technologies to
collaborative HCLS
* Explore HCLS specific requirements for collaborating on the web, e.g.
trust, privacy, intellectual property, knowledge management, and the
scale and diversity of data
* Learn about the latest developments in data modeling, tools and
technologies for web-based collaborative science
* Bridge communication and knowledge transfer between the HCLS and web
communities
---------------------------------------------------
TOPICS FOR PAPER SUBMISSION
We would encourage submission of papers covering the following topics:
* Web 2.0 applications for large, heterogeneous and complex data sets
* Models for collaborative scientific annotations
* Tools and applications for aggregating information across web sites
* Provenance, attribution, trust, and intellectual property
* Policy for data access, sharing, and anonymization
We seek three kinds of submissions:
* Full technical papers: up to 10 pages
* Short technical and position papers: up to 5 pages
* Demo description: up to 2 pages
---------------------------------------------------
SUBMISSIONS
Submitted papers will be refereed by at least three members of the Program
Committee. Accepted papers will be published on the workshop web site.
All submissions must be formatted using the WWW2010 templates
(http://www2010.org/www/authors/submissions/formatting-guidelines/). The
address for the online submission system will be published shortly.
---------------------------------------------------
IMPORTANT DATES:
* Submission deadline- February 15, 2010
* Notification of acceptance - March 8, 2010
* Camera-ready version - March 22, 2010
* Workshop date - April 27 or 28, 2010
---------------------------------------------------
Workshop Chairs
Jun Zhao, Oxford University
Kei Cheung, Yale University
M. Scott Marshall, Leiden University Medical Center / University of
Amsterdam
Eric Prud'hommeaux, W3C
Susie Stephens, Johnson & Johnson Pharmaceutical Research & Development
---------------------------------------------------
Programme Committee
* Christopher Baker, University of New Brunswick
* John Breslin, NUI Galway
* Simon Buckingham Shum, Open University
* Annamaria Carusi, Oxford University
* Helen Chen, Agfa Healthcare
* Paolo Ciccarese, Harvard University
* Tim Clark, Harvard Medical School
* Anita de Waard, Elsevier
* Michel Dumontier, Carleton University
* Lee Feigenbaum, Cambridge Semantics
* Timo Hannay, Nature
* William Hayes, BiogenIdec
* Ivan Herman, W3C
* Vipul Kashyap, Cigna
* Nikesh Kotecha, Stanford University
* Phil Lord, University of Newcastle
* Robin McEntire, Merck
* Parsa Mirhaji, University of Texas
* Mark Musen, Stanford University
* Vit Novacek, DERI
* Alex Passant, DERI
* Elgar Pichler, AstraZeneca
* Rosalind Reid, Harvard University
* Patrick Ruch, University of Applied Sciences Geneva
* Daniel Rubin, Stanford
* Matthias Samwald, DERI, Ireland // Konrad Lorenz Institute for
Evolution and Cognition Research, Austria
* Susanna Sansone, EBI
* Nigam Shah, Stanford University
* Amit Sheth, Wright State University
Hi Sergiu,
> /**
> - * @return can current user restore this document from recycle bin
> + * Check if the current user has the right to restore the document.
> + *
> + * @return {@code true} if the current user can restore this document,
> {@code false} otherwise
> * @throws XWikiException if any error
> */
> - public boolean canUndelete() throws XWikiException
> + public boolean canUndelete()
> {
>
This looks like a public API change. You have introduced a checkstyle error
(an unused @throws tag), which I fixed since the build was failing. I'm not
sure if the change to the API is a big deal or not in this case.
- Asiri
Hi devs,
The short version:
Should we always use UTF-8 for encoding and decoding URLs, regardless of
the wiki encoding, for better compliance with web standards?
The long version:
By definition, URLs can only contain ASCII characters, everything else
must be converted to their corresponding bytes and escaped as %XY
escapes. The problem is that "their corresponding bytes" implies a
charset + encoding, and no specification *enforces* a specific pair,
although it is *recommended* to use Unicode + UTF-8, in line with the
general trend of the modern web.
Traditionally, XWiki has used the configured global wiki encoding for
URLs as well, which means that before 1.9, when we switched to UTF-8 as
the default wiki encoding, all URLs were using the ISO-8859-1 encoding.
Since the switch to UTF-8, URLs also use the UTF-8 encoding by default,
although the wiki encoding can still be changed.
Now, since 2.1, a bugfix accidentally changed the behavior, so that
parsing back URLs always uses the UTF-8 encoding, even though composing
URLs continues to use the wiki encoding. This is a bug, which prevents
changing the encoding to anything other than UTF-8, and it should be fixed.
Now, we have two options:
1. Reintroduce the old behavior, so that URLs always use the wiki
encoding. This is a direct bugfix.
2. Also change the encoding part, so that UTF-8 is always used. This is
an improvement, going towards better compliance with web standards.
Personally I think that the second option is the better one, but it
requires a vote, since it has a few drawbacks.
Advantages:
+ better compliance with web standards, since UTF-8 is the recommended
encoding for URLs (although not imposed)
+ support for a wider range of document names, since UTF-8 allows
full-unicode document names, while ISO-8859-1 limits names to latin1
characters
+ better support from browsers, since entering accented characters
directly in the address bar encodes the URL sent to the server using
UTF-8, and decoding the URL also assumes UTF-8; this means that a
document named "é" will be printed as .../view/Main/%E9 and will have to
be entered the same way in the address bar when ISO-8859-1 is used, and
as .../view/Main/é when UTF-8 is used
Drawbacks:
- by default Tomcat uses ISO-8859-1 as the encoding for URLs, so the
Tomcat configuration will have to be changed as in
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Encoding#HTomcat
- some existing bookmarks will not work anymore once the encoding is changed
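The encoding difference described in the advantages above can be seen directly with the standard java.net.URLEncoder, which performs exactly this %XY escaping:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Small demo of how the same document name yields different %XY escapes
// depending on the charset used to turn characters into bytes.
public class UrlEncodingDemo
{
    public static String encode(String name, String charset)
    {
        try {
            return URLEncoder.encode(name, charset);
        } catch (UnsupportedEncodingException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

For a document named "é": encoding with UTF-8 gives %C3%A9 (two bytes), while ISO-8859-1 gives %E9 (one byte), which is why the two schemes produce incompatible URLs.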
+1 for option 2 from me,
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
Hi,
Since I'm writing the new Model part for Entity References (document
and attachment for now but we can imagine objects and object
properties later on). I'd like to propose 2 things:
* A syntax for escaping special characters in references
* Some changes to the supported reference syntax
Escapes
=======
I'd like to propose using the backslash (i.e. \ ) character.
For example: "a page name with \: some \. special \@ characters"
Rationale:
* it's a well known char for escapes, all devs know about it
* using "~" would be confusing with the wiki syntax
Known issue:
* when inside the velocity macro you need to be careful to use a double
escape, since \ is also the Velocity escape character. Ex:
{{velocity}}
[[label>>special\\@page-name]]
{{/velocity}}
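To illustrate the proposal, here's a sketch of how an escape-aware tokenizer would treat the backslash when splitting a reference string on a separator (illustrative only, not the actual parser code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a reference string on a separator, treating any character
// preceded by a backslash as a literal part of the name.
public class EscapeSketch
{
    public static List<String> split(String reference, char separator)
    {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean escaped = false;
        for (char c : reference.toCharArray()) {
            if (escaped) {
                // The escaped character is kept literally, whatever it is.
                current.append(c);
                escaped = false;
            } else if (c == '\\') {
                // Backslash marks the next character as literal.
                escaped = true;
            } else if (c == separator) {
                parts.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }
}
```

So "Space.page \. name" split on '.' yields two parts, "Space" and "page . name", instead of three.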
Breakages
=========
Since we'll now have a generic factory/serializer for all entity
types, we need to make the syntax more consistent. This means that the
following syntaxes will not work anymore:
* ex: "wiki:page". This would be interpreted as a document with a page
name of "page" and a space name of "wiki:"
* When using the "default" factory, only default values would be used
(right now it's a mix between current doc values and default values).
Suggested defaults: "xwiki" for Wiki, "XWiki" for space, "WebHome" for
page and "" for attachment name. Note that one idea is to make these
defaults configurable in the xwiki config file
* It's hard to know for sure but we certainly have various other
inconsistencies that exist now when using special reserved chars in
references
We have 2 options here:
* Make XE 2.2 not backward compatible for some references. Advertise
it in the release notes and explain to users how they should change
their names if they use "exotic" names
* Create an automatic converter, for example as a database migrator
that would read all documents in the wiki, call getLinks() on each
document, send the links to the old parser (would need to extract it
somewhere and ensure it behaves as now) and send the link to the new
parser and compare. If there's a difference, escape the char and save.
This would also need to be done for document parent references, the
backlink table and all object properties that allow wiki syntax or
velocity. Note that it wouldn't fix any generated name (using velocity
for ex).
The automatic converter option is really hard to do, so I'm leaning
more towards the first solution. That would need to be properly
handled, since it could potentially cause quite a few broken links.
WDYT?
Thanks
-Vincent
Hi everyone,
Just a quick note to wish everyone in the XWiki project very happy
festivities and a happy next year for 2010.
The XWiki project has seen exciting times in 2009 as described on:
http://massol.myxwiki.org/xwiki/bin/view/Blog/XWikiIn2009
Let's all make 2010 even better :)
Take care,
-Vincent
In the current implementation, for a successful upgrade, I have to carefully export:
1. All the pages that I've created
2. All the user profile pages.
3. *.WebPreferences, XWiki.XWikiPreferences.
4. XWiki.XWikiAllGroup, XWiki.AdminGroup
and pray that these pages are compatible with a new version of XE.
As I see it, the main headache is that the information about users,
groups and preferences is spread across a lot of pages. Why isn't there
a central database to store it, so that updating would be much simpler?
--
-- Zhaolin Feng
-- www.mapbar.com
-- Currahee! We stand alone together!
2.1.1 of course :)
JV.
On Mon, Dec 21, 2009 at 3:23 PM, Jean-Vincent Drean <jv(a)xwiki.com> wrote:
> Hi,
>
> I'd like to release XE and XEM 2.1.1 tomorrow. 4 important bugs have
> been fixed since 2.1:
>
> - XWIKI-4575 : Horizontal ruler breaks the display of the WYSIWYG
> menu bar and toolbox
> - XWIKI-4681 : Attachments deletion has no effect before a restart
> - XWIKI-4679 : Can't select the macro after inserting it (WYSIWYG)
> - XWIKI-4688 : Macro disappears after editing its properties (WYSIWYG)
>
> Here's my +1.
>
> Thanks,
> JV.
>
Hi,
I'd like to release XE and XEM 2.1.1 tomorrow. 4 important bugs have
been fixed since 2.1:
- XWIKI-4575 : Horizontal ruler breaks the display of the WYSIWYG
menu bar and toolbox
- XWIKI-4681 : Attachments deletion has no effect before a restart
- XWIKI-4679 : Can't select the macro after inserting it (WYSIWYG)
- XWIKI-4688 : Macro disappears after editing its properties (WYSIWYG)
Here's my +1.
Thanks,
JV.
Hi devs,
I'm still working on the Model Reference domain. We've brainstormed
with Thomas and we'd like to propose replacing the current
ModelContext.getCurrentDocumentName() by
ModelContext.getCurrentEntityReference() (which returns an
EntityReference).
The idea is that a URL could target a document but also a wiki only
(e.g. the REST API does that), or a given space only, or even an object
or a property. This would mean we would need to have
getCurrentDocumentReference() in addition to all the others:
getCurrentWikiReference(), getCurrentSpaceReference(). It would also
mean a lot of them would be set to null. Last it would mean different
ways to access the same information (e.g.
getCurrentDocumentReference.getWikiReference() vs
getCurrentWikiReference()).
We would also add an EntityReference.extractReference(EntityType type)
method in order to make it easy to extract information from a
reference path.
For example, to extract the wiki from an entity reference:
WikiReference wikiRef = (WikiReference)
context.getCurrentEntityReference().extractReference(EntityType.WIKI);
if (wikiRef != null) ....
WDYT?
Thanks
-Vincent
Hi devs,
Currently the getDocument method always goes to the storage to retrieve
the document, even if the same document has just been retrieved. This
means that the following code will not work:
#set($d = $xwiki.getDocument('X'))
$d.setTitle('the title')
$xwiki.getDocument('X').getTitle() # will not print 'the title'
I'd like to change getDocument so that it first searches in a map of
used documents in the current context. This means the following:
- getDocument searches in XWikiContext.usedDocuments (or better, in the
ExecutionContext)
- if found, return the value from there
- if not, go to the storage, return it to the caller
- when the document is changed for the first time, i.e. when
api.Document.getDoc() is called, clone the original document and put it
in usedDocuments
- as a special case, PreviewAction also puts the updated context
document in usedDocuments
This means that consecutive calls for retrieving a (changed) document
will always return the same object. This prevents possible preview bugs,
like http://jira.xwiki.org/jira/browse/XABLOG-14 or
http://jira.xwiki.org/jira/browse/XWIKI-4689
Yet this is an important behavior change. Do you think anybody is using
this "feature", and actually expects the above code example to work as
it does now?
Also, we must be careful with the performance, since this new map could
get big, holding all the documents in the database. Perhaps a LRU
fixed-size map would be better, although this breaks the uniqueness
guarantee.
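Here's a sketch of the proposed lookup order, with the storage call stubbed out and the bounded LRU variant included (all names are illustrative, not the real XWikiContext API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: getDocument() first consults a per-context map of used documents;
// only on a miss does it go to the storage. Documents are modelled as plain
// Strings to keep the sketch self-contained.
public class DocumentCacheSketch
{
    private final Map<String, String> usedDocuments;

    public DocumentCacheSketch(final int maxSize)
    {
        // An access-ordered LinkedHashMap gives a simple fixed-size LRU map.
        this.usedDocuments = new LinkedHashMap<String, String>(16, 0.75f, true)
        {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest)
            {
                return size() > maxSize;
            }
        };
    }

    public String getDocument(String name)
    {
        String doc = usedDocuments.get(name);
        if (doc == null) {
            // Cache miss: go to the storage and remember the result.
            doc = loadFromStorage(name);
            usedDocuments.put(name, doc);
        }
        return doc;
    }

    // Simulate a modification, i.e. what would happen after the document is
    // cloned on first change and updated in the used-documents map.
    public void setTitle(String name, String title)
    {
        usedDocuments.put(name, title);
    }

    private String loadFromStorage(String name)
    {
        return "stored:" + name;
    }
}
```

With this in place, a second getDocument() call for a changed document returns the modified instance instead of a fresh copy from storage, which is exactly the behavior the velocity example above fails to get today.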
So, WDYT?
1. Should we introduce this cache?
2. Should it be limited in size?
--
Sergiu Dumitriu
http://purl.org/net/sergiu/