Hi,
We need to isolate groups of components. For example, a wiki macro
created in a subwiki should only be visible in that subwiki by default.
Here's the implementation proposal I'm planning to implement:
* There's a Root Component Manager (the current CM)
* There are 3 components implementing the ComponentManager role, with
3 hints: "wiki", "user" and "all". A CompositeComponentManager class
allows chaining CMs, and the "all" CM chains the "default" (root) CM,
the "wiki" CM and the "user" CM (see the sketch after this list). This
works the same way as in the configuration module.
* Other components can have CMs injected as they wish (if no hint is
specified then it's the default CM, etc.). For example:
@Requirement("all")
private ComponentManager cm;
* Creation process: as today, the user creates the root CM, and then
the annotation loader creates the descriptors for the other CMs and
registers them against the root CM. They get instantiated once
(singleton) the first time they're looked up.
* In order to register a component for, say, a given "enterprise"
wiki, we need to add a new property to ComponentDescriptor: get/
setAdditionalData(Object data). For example:
wikiCM.registerComponent(mycd) where mycd.setAdditionalData("enterprise")
has been called on the descriptor.
* Last, Guice uses Modules to isolate component definitions, so it
should be possible and relatively easy to port this implementation to
Guice (even though Guice uses static Modules, we can make them dynamic).
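To make the chaining concrete, here's a rough sketch of
CompositeComponentManager (hypothetical code; only the class name and
the chaining behavior are part of the proposal, error handling etc. to
be refined):

    public class CompositeComponentManager implements ComponentManager
    {
        private final List<ComponentManager> componentManagers =
            new ArrayList<ComponentManager>();

        public void addComponentManager(ComponentManager componentManager)
        {
            this.componentManagers.add(componentManager);
        }

        public <T> T lookup(Class<T> role, String roleHint)
            throws ComponentLookupException
        {
            // Try each chained CM in order (e.g. "user", "wiki", root)
            // and return the first component found.
            for (ComponentManager componentManager : this.componentManagers) {
                try {
                    return componentManager.lookup(role, roleHint);
                } catch (ComponentLookupException e) {
                    // Not registered in this CM, try the next one.
                }
            }
            throw new ComponentLookupException("No component found for role ["
                + role.getName() + "] and hint [" + roleHint + "]");
        }

        // ... the other ComponentManager methods would delegate the same way
    }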
WDYT?
Thanks
-Vincent
fyi
-Vincent
Begin forwarded message:
> From: "Uwe Schindler" <uschindler(a)apache.org>
> Date: November 26, 2009 10:53:40 AM CEST
> To: <announce(a)apache.org>, <java-user(a)lucene.apache.org>, <java-dev(a)lucene.apache.org>
> Subject: [ANNOUNCE] Apache Lucene Java 3.0.0 released
>
> Hello Lucene users,
>
> On behalf of the Lucene dev community (a growing community far larger
> than just the committers) I would like to announce the release of
> Lucene Java 3.0.0:
>
> The new version is mostly a cleanup release without any new features.
> All deprecations targeted to be removed in version 3.0 were removed.
> If you are upgrading from version 2.9.1 of Lucene, you have to fix all
> deprecation warnings in your code base to be able to recompile against
> this version.
>
> This is the first Lucene release with Java 5 as a minimum requirement.
> The API was cleaned up to make use of Java 5's generics, varargs,
> enums, and autoboxing. New users of Lucene are advised to use this
> version for new developments, because it has a clean, type safe new
> API. Upgrading users can now remove unnecessary casts and add generics
> to their code, too. If you have not upgraded your installation to
> Java 5, please read the file JRE_VERSION_MIGRATION.txt (please note
> that this is not related only to this version of Lucene, it will also
> happen with any previous release when you upgrade your Java
> environment).
>
> Lucene 3.0.0 has some changes regarding compressed fields: 2.9.0
> already deprecated compressed fields; support for them was removed
> now. Lucene 3.0.0 is still able to read indexes with compressed
> fields, but as soon as merges occur or the index is optimized, all
> compressed fields are decompressed and converted to Field.Store.YES.
> Because of this, indexes with compressed fields can suddenly get
> larger.
>
> While we generally try and maintain full backwards compatibility
> between major versions, Lucene 3.0.0 has some minor breaks, mostly
> related to deprecation removal, pointed out in the 'Changes in
> backwards compatibility policy' section of CHANGES.txt. Notable are:
>
> - IndexReader.open(Directory) now opens in read-only mode per default
> (this method was deprecated because of that in 2.9.X). The same occurs
> to IndexSearcher.
>
> - Already started in 2.9, core TokenStreams are now made final to
> enforce the decorator pattern.
>
> - If you interrupt an IndexWriter merge thread, IndexWriter now throws
> an unchecked ThreadInterruptedException that extends RuntimeException
> and clears the interrupt status.
>
>
> See core changes at
> http://lucene.apache.org/java/3_0_0/changes/Changes.html
> and contrib changes at
> http://lucene.apache.org/java/3_0_0/changes/Contrib-Changes.html
>
> Binary and source distributions are available at
> http://www.apache.org/dyn/closer.cgi/lucene/java/
>
> Lucene artifacts are also available in the Maven2 repository at
> http://repo1.maven.org/maven2/org/apache/lucene/
>
>
> -----
> Uwe Schindler
> uschindler(a)apache.org
> Apache Lucene Java Committer
> Bremen, Germany
> http://lucene.apache.org/java/docs/
>
>
Hi,
As you know, Import/Export operations on large XAR files cause problems.
We ran into an even worse situation, where a single XWiki document is
not properly exported due to heap exhaustion during the build of its
XML DOM or due to large attachments.
Having a look at the source, I noticed that many optimizations in the
way the export is produced could quite easily be introduced.
Currently, XWiki stores the exported document several times in memory
during the operation, which impacts performance negatively and is
needlessly heavy on memory usage, even for reasonably sized documents.
Therefore, I have started a large patch to avoid these pitfalls. Here
are the strategies I have followed:
1) The current implementation mostly builds a DOM in memory only to
immediately serialize it into a stream. So I have removed the
intermediate DOM and provided direct streaming of Element content by:
1.1) extending org.dom4j.io.XMLWriter to allow direct streaming
of Element content into the output stream, as is or Base64 encoded.
Incidentally, my extension also ensures proper pairing of open/close tags.
1.2) writing a minimal DOMXMLWriter which extends my XMLWriter and can
be used with the same toXML() code to build a DOM Document, providing
the toXMLDocument() methods so the older implementation stays available
unchanged if ever needed.
1.3) using the above, only minimal changes to the current XML code were
required:
1.3.1) replacing element.add(Element) by either writer.write(Element)
or writer.writeOpen(Element)
1.3.2) for large content, using my extensions, either
writer.write(Element, InputStream) or writer.writeBase64(Element,
InputStream), which use the InputStream as the element content
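To give an idea, the XMLWriter extension of 1.1) could look roughly
like this (a simplified sketch, not the actual patch; it assumes
org.dom4j.io.XMLWriter, commons-io's IOUtils and commons-codec 1.4's
streaming Base64 support):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.Writer;

    import org.apache.commons.codec.binary.Base64InputStream;
    import org.apache.commons.io.IOUtils;
    import org.dom4j.Element;
    import org.dom4j.io.XMLWriter;

    public class StreamingXMLWriter extends XMLWriter
    {
        private final Writer out;

        public StreamingXMLWriter(Writer out)
        {
            super(out);
            this.out = out;
        }

        /** Stream the InputStream as the element's text content, as is. */
        public void write(Element element, InputStream content) throws IOException
        {
            writeOpen(element);
            flush();
            // Real code must also escape XML special characters while copying.
            IOUtils.copy(content, this.out);
            writeClose(element);
        }

        /** Stream the InputStream as Base64-encoded element content. */
        public void writeBase64(Element element, InputStream content)
            throws IOException
        {
            writeOpen(element);
            flush();
            // Base64InputStream encodes on the fly; its output is plain
            // ASCII so no XML escaping is needed here.
            IOUtils.copy(new Base64InputStream(content, true), this.out);
            writeClose(element);
        }
    }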
2) The current implementation for binary data, such as attachments and
the export zip file, is mostly based on passing in-memory byte[] from
function to function, while these data initially come from a
request.getInputStream() or are written to a response.getOutputStream().
So I have changed these to pass the stream instead of the data:
2.1) using IOUtils.copy when required
2.2) using org.apache.commons.codec.binary.Base64OutputStream
for Base64 encoding when required
2.3) using an extension of ZipInputStream to cope with
unexpected close()
2.4) avoiding buffer duplication in favor of stream filters
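The ZipInputStream extension of 2.3) is essentially this (illustrative
name and code):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.ZipInputStream;

    public class CloseShieldedZipInputStream extends ZipInputStream
    {
        public CloseShieldedZipInputStream(InputStream in)
        {
            super(in);
        }

        @Override
        public void close() throws IOException
        {
            // Consumers (parsers, copy utilities) tend to close the stream
            // they were handed; only close the current entry so that the
            // remaining entries of the zip stay readable.
            closeEntry();
        }
    }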
3) Since the most frequently used large data come from the database
through attachment content, it would be nice to have these attachments
streamed from the database when they are too large. However, I feel
that it is still too early to convert our binary columns into blobs,
mainly because HSQLDB and MySQL still do not really support blobs, just
an emulation. Attachments are also cached in the document cache, which
would require improvements to support blobs. However, I propose to take
the opportunity to move in the direction of blobs by:
3.1) deprecating setContent(byte[]) and byte[] getContent() in favor
of newly created setContent(InputStream, int), InputStream
getContentInputStream() and getSize()
3.2) beginning to use these new functions as much as possible, as 2)
implies
3.3) this also opens the ability to store attachments in another
repository that better supports streaming (i.e. a filesystem)
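In interface form, 3.1) amounts to this (the enclosing type is
illustrative; only the signatures are what I propose):

    import java.io.InputStream;

    public interface AttachmentContent
    {
        /** @deprecated replaced by {@link #setContent(InputStream, int)} */
        @Deprecated
        void setContent(byte[] content);

        /** @deprecated replaced by {@link #getContentInputStream()} */
        @Deprecated
        byte[] getContent();

        /** Set the content from a stream, giving its length in bytes. */
        void setContent(InputStream content, int length);

        /** @return the content as a stream, possibly straight from storage. */
        InputStream getContentInputStream();

        /** @return the content size in bytes. */
        int getSize();
    }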
I am currently testing the above changes on our client project, and
I expect to provide a patch really soon. It will require an upgrade of
org.apache.commons.codec to version 1.4, to have access to
Base64OutputStream.
I feel this is a first step in the right direction; further
improvements would be to:
- import XML using a SAXParser without building a DOM in memory
- manage JRCS archives better; the way they are built and stored raises
the same issue as attachments
- manage the recycle bin better, for the same reason
- improve caching to avoid caching very large content
WDYT?
Denis Gervalle
--
SOFTEC sa
http://www.softec.st
Hello All,
I have a use case where renaming a page should also rename the other
pages in the same space whose names reference it.
For example, there are 3 pages named "A", "B" and "A has / of B" in a space.
When I rename "A" to, say, "renamedA", we need to rename the third page
to "renamedA has / of B", and the same when renaming the page named "B".
I tried to catch the events fired when I perform a rename action on the
page - there are two, DocumentSaveEvent and DocumentDeleteEvent - and
applied my logic in my component to rename the matching pages in the
same space, but it failed.
I checked the implementation of the rename method: the object name is
not changed for the renamed document, which is why my HQL query joining
on doc.name=obj.name fails.
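For reference, the failing query looks roughly like this (illustrative
code, assuming the standard store search API):

    // Find documents having an object named like the renamed document.
    // After a rename this matches nothing, because obj.name still holds
    // the old document name.
    String hql = ", BaseObject obj where doc.name = obj.name";
    List<String> results =
        context.getWiki().getStore().searchDocumentsNames(hql, context);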
Is there any way I can solve this use case?
Regards
Durga
Hi,
I'd like to apply Caleb's patch for
http://jira.xwiki.org/jira/browse/XWIKI-4410. This issue is a
prerequisite for fixing "AllDocs's attachment tab requires
programming right to work" (http://jira.xwiki.org/jira/browse/XE-521).
This means introducing the following new APIs:
- public List<Attachment> searchAttachments(String parametrizedSqlClause, int nb, int start, List<?> parameterValues) throws XWikiException
- public List<Attachment> searchAttachments(String whereSql, int nb, int start) throws XWikiException
- public List<Attachment> searchAttachments(String whereSql) throws XWikiException
- public int countAttachments(String parametrizedSqlClause, List<?> parameterValues) throws XWikiException
- public int countAttachments(String whereSql) throws XWikiException
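For illustration, typical usage would look like this (the where clause
and the "attach" alias are hypothetical examples):

    // Fetch the first 20 attachments whose filename ends in .png,
    // binding the value as a parameter to avoid HQL injection,
    // then count all the matches.
    List<Attachment> images = xwiki.searchAttachments(
        "where attach.filename like ?", 20, 0, Arrays.asList("%.png"));
    int total = xwiki.countAttachments(
        "where attach.filename like ?", Arrays.asList("%.png"));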
Here's my +1 (even though it makes the class even larger, I don't have
any other idea for now short of a huge refactoring).
Thanks
-Vincent
Hi,
We need to handle optional Transformations (for Annotations and more
generally for user-introduced annotations).
Here's what Thomas and I are proposing:
1) We remove the TransformationManager component from the Rendering
module (public API). This means that calling code must look up
Transformations directly.
2) We modify the Converter interface in the Rendering module in 2 ways
(see the sketch below):
- add a new signature that takes a list of Transformations as a parameter
- modify the implementation of the signature that takes no
Transformation parameter so that no Transformations are executed when
it's used (API breakage)
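Roughly, the interface would become (a sketch; the exact parameters of
the existing convert() signature are recalled from memory):

    public interface Converter
    {
        // Existing signature: after this change it no longer executes
        // any Transformation (API breakage).
        void convert(Reader source, Syntax sourceSyntax, Syntax targetSyntax,
            WikiPrinter printer) throws ConversionException;

        // New signature: the caller passes the Transformations to execute.
        void convert(Reader source, Syntax sourceSyntax, Syntax targetSyntax,
            WikiPrinter printer, List<Transformation> transformations)
            throws ConversionException;
    }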
3) We add an XWiki configuration parameter in xwiki.properties listing
the transformations that must be executed (a list of component hints)
when a document is rendered. If this config param is not defined, the
default value will contain the Macro Transformation.
4) We introduce a new module called xwiki-presentation which will
contain code handling XWiki presentation concerns. For example:
- template handling
- displayers (document displayer, object displayers, etc.)
- display configuration
- more to be defined, but related to presentation
The idea would be to move the presentation-related code currently in
XWiki/XWikiDocument there (for example, XWikiDocument.getRenderedContent
could be replaced by DocumentDisplayer.display(DocumentName, Syntax,
Writer output) - to be defined later).
5) In order to allow modules not to depend on xwiki-core, we introduce
PresentationConfiguration in the new xwiki-presentation module defined
in 4), with a getViewTransformations() method corresponding to a
"presentation.viewTransformations" configuration parameter.
6) We introduce a new TransformationContext class, similar to
MacroTransformationContext, and modify the Transformation API to:
void transform(XDOM dom, TransformationContext context) (instead of
XDOM dom, Syntax syntax).
TransformationContext would contain 2 types of data:
- the syntax
- the list of transformations being executed (this is required by some
Macros; for example the HTML macro needs it and others may need it too)
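In code, 6) would look roughly like this (a sketch; accessors omitted):

    public class TransformationContext
    {
        /** The syntax of the content being transformed. */
        private Syntax syntax;

        /** The list of transformations being executed. */
        private List<Transformation> transformations;

        // getters/setters omitted
    }

    public interface Transformation
    {
        // Replaces transform(XDOM dom, Syntax syntax).
        void transform(XDOM dom, TransformationContext context)
            throws TransformationException;
    }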
7) We remove the getPriority() method from the Transformation interface
8) We modify calling code (WYSIWYG, XMLRPC, etc.) to use the new
Converter API and to look up PresentationConfiguration to get access to
the list of view transformations to execute.
Here's my +1
Thanks
-Vincent
PS: A long and complex mail... sorry about that...
Hi devs,
I would like to slightly reorganize the rendering submodules on
svn/maven to have:
xwiki-rendering
- xwiki-rendering-syntaxes
-- xwiki-rendering-syntax-wikimodel
-- xwiki-rendering-syntax-xml
instead of the current
xwiki-rendering
- xwiki-rendering-parsers
-- xwiki-rendering-parser-wikimodel
-- xwiki-rendering-parser-xml
- xwiki-rendering-renderers
-- xwiki-rendering-renderer-wikimodel
-- xwiki-rendering-renderer-xml
This is because the parser and the renderer for a given syntax usually
share information and are kept synchronized.
Here is my +1
--
Thomas Mortagne
Hi devs,
I want to add two new methods to the xml-rpc api:
- String getRenderedContent(String token, String pageId, String syntaxId);
- String getRenderedContent(String token, String pageId, String content, String syntaxId);
Currently we are only able to get xhtml/1.0 output, by using getPage
and accessing page.content from the method's output. We need these new
methods to get the output in different syntaxes (e.g. annotatedxhtml).
This is useful for parsing the output for macros, images and links,
like we do in the WYSIWYG editor.
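For example (illustrative client code; "rpc" stands for the XML-RPC
client stub and the syntax identifiers are just examples):

    // Render the stored content of a page to annotated XHTML:
    String annotated = rpc.getRenderedContent(token, "Main.WebHome",
        "annotatedxhtml/1.0");
    // Render the given content, instead of the stored one, to XHTML:
    String preview = rpc.getRenderedContent(token, "Main.WebHome",
        "some **bold** text", "xhtml/1.0");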
I'd like the code to go into the 2.0.4 release. The old method
invocation mechanism required different method names, and since we
don't have the XML-RPC cleanup in 2.0.4 we probably need different
names for the methods above. Since starting with 2.1 we use the Apache
XML-RPC method lookup and invocation, we might be able to use the same
name with different signatures in the future. I'll need to check this.
WDYT?
Thanks,
Florin Ciubotaru