Those sound like good points in favor of not reusing the rendering
module. A pity, since that will cost us a lot of development and
maintenance time, but I agree with your findings.
OTOH, is it possible to build some solution that would be reusable for
the wiki editor, for example? The wiki editor executes in the browser,
so for autocompletion and syntax highlighting we would need to make
ajax requests to the server if we wanted to reuse something. Does that
work, or do we need to reimplement the logic in JavaScript (i.e. reuse
some existing JavaScript framework for autocompletion/syntax
highlighting)?
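
If the ajax route is workable, the server side could be very thin.
Here's a rough sketch of what I have in mind (all names are
hypothetical; it just illustrates the round trip: the browser sends
the prefix under the caret and the existing Java logic computes the
candidates):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint: GET /autocomplete?prefix=... returns one
// candidate per line, computed on the server where the rendering
// logic already lives.
public class AutocompleteServlet extends HttpServlet
{
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException
    {
        String prefix = request.getParameter("prefix");
        if (prefix == null) {
            prefix = "";
        }
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        for (String candidate : complete(prefix)) {
            out.println(candidate);
        }
    }

    // Stub: a real implementation would ask the wiki for macro names,
    // page names, etc.
    private List<String> complete(String prefix)
    {
        List<String> result = new ArrayList<String>();
        for (String name : Arrays.asList("code", "toc", "velocity")) {
            if (name.startsWith(prefix)) {
                result.add(name);
            }
        }
        return result;
    }
}

The browser-side widget would then only need to render the returned
list, which is where an existing JavaScript framework could still help.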
Thanks
-Vincent
On Jul 1, 2008, at 9:00 PM, Fabio Mancinelli wrote:
On 1 juil. 08, at 11:12, Vincent Massol wrote:
Just thought about a potential use case for storing the offset: error
reporting.
We'll also need it if we want to do autocompletion in the wiki editor,
which I think would be nice to have.
Ok, I'll try to add offset/length to the rendering module. It's
actually going to cost us some performance, since when there are block
transformations you need to recompute the blocks' offsets... I still
need to know how best to store them for your needs, Malaka. Storing
them in the Blocks themselves might make them too costly to retrieve.
I could also store them in some other structure with references to the
blocks; see the sketch below. Let me know what you need.
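
For example, the side structure could be as simple as this (SourceMap
and SourceRange are names I'm making up just for illustration, and I'm
using Object instead of the real Block type to keep the sketch
self-contained):

import java.util.IdentityHashMap;
import java.util.Map;

// Side table mapping each block to its position in the source, so the
// Block classes themselves stay untouched.
public class SourceMap
{
    public static final class SourceRange
    {
        public final int offset;
        public final int length;

        public SourceRange(int offset, int length)
        {
            this.offset = offset;
            this.length = length;
        }
    }

    // Identity map on purpose: two equal blocks at different positions
    // must not collide.
    private final Map<Object, SourceRange> ranges =
        new IdentityHashMap<Object, SourceRange>();

    public void put(Object block, int offset, int length)
    {
        ranges.put(block, new SourceRange(offset, length));
    }

    public SourceRange get(Object block)
    {
        return ranges.get(block);
    }
}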
To speed things up I've done a bit of investigation (Malaka, please
integrate this with your own findings).
I've looked at how these things are done in other complex projects,
and I think it will probably not be possible to use the actual
rendering engine's parser as the basis for the Eclipse infrastructure.
The Eclipse JDT uses a FastJavaPartitionScanner
(http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.jdt.ui/ui/org/eclipse/…)
that basically implements an ad-hoc scanner for isolating blocks
(there are plenty of while(true) loops, etc.; it clearly isn't a
standard Java parser).
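
To give an idea of the style, the core of such a scanner is just a
hand-rolled loop like this (my own sketch of the technique, not JDT
code):

public final class AdHocScannerSketch
{
    // Skips a {{...}} region starting at 'start' (which must point at
    // the opening "{{"); returns the offset just past the closing "}}",
    // or the end of the text if the region is unterminated. The key
    // property is that it never fails on malformed input, unlike a
    // real parser.
    static int skipMacro(String text, int start)
    {
        int pos = start + 2; // step over the opening "{{"
        while (pos + 1 < text.length()) {
            if (text.charAt(pos) == '}' && text.charAt(pos + 1) == '}') {
                return pos + 2;
            }
            pos++;
        }
        return text.length(); // unterminated: tolerate it, don't throw
    }
}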
The Groovy editor uses the standard RuleBasedPartitionScanner
(http://svn.codehaus.org/groovy/trunk/groovy/ide/groovy-eclipse/org.codehaus…)
suitably customized for its own needs, with a lot of ad-hoc rule-based
scanners (e.g.,
http://svn.codehaus.org/groovy/trunk/groovy/ide/groovy-eclipse/org.codehaus…).
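
The same approach could work for the XWiki editor. A minimal sketch,
assuming invented partition type names and the {{...}} macro
delimiters of the new syntax:

import org.eclipse.jface.text.rules.EndOfLineRule;
import org.eclipse.jface.text.rules.IPredicateRule;
import org.eclipse.jface.text.rules.IToken;
import org.eclipse.jface.text.rules.MultiLineRule;
import org.eclipse.jface.text.rules.RuleBasedPartitionScanner;
import org.eclipse.jface.text.rules.Token;

// Partitions an XWiki document into coarse regions; the partition
// types and the rules here are made up for this sketch.
public class XWikiPartitionScanner extends RuleBasedPartitionScanner
{
    public static final String MACRO = "__xwiki_macro";
    public static final String HEADING = "__xwiki_heading";

    public XWikiPartitionScanner()
    {
        IToken macro = new Token(MACRO);
        IToken heading = new Token(HEADING);
        setPredicateRules(new IPredicateRule[] {
            new MultiLineRule("{{", "}}", macro),
            new EndOfLineRule("=", heading)
        });
    }
}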
Apparently, parsing the file with the real parser is done only when
generating the content outline (and for error reporting).
This is noticeable in the Java IDE: the content outline is updated
only when the file is saved (i.e., compiled, and thus parsed with the
actual Java parser). The same is true in the Groovy code and in the
XMLEditor example, where the SAXParser is called in the
OutlineContentProvider when the document is saved.
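
In code, the pattern boils down to something like this (simplified;
the outline page type is a stand-in, not a specific Eclipse API):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.ui.editors.text.TextEditor;

// The strict parser runs only here, when the document is known to be
// in a consistent (saved) state.
public class WikiEditorSketch extends TextEditor
{
    interface OutlinePage
    {
        void update();
    }

    private OutlinePage outlinePage;

    @Override
    public void doSave(IProgressMonitor monitor)
    {
        super.doSave(monitor);
        if (outlinePage != null) {
            // Re-parse with the real parser and refresh the tree.
            outlinePage.update();
        }
    }
}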
This architecture is necessary, imho, because the editor's parser
should be as loose as possible and resilient to (transient) errors.
When the user is typing, the document is almost always in an
inconsistent state, which makes the real parser fail. Examining the
document with the actual parser while editing would also mean parsing
the document at every keystroke. And that's what happens in practice:
the partitioner is asked to return the next tokens starting from
arbitrary positions in the text, a request that the real parser
cannot fulfill very well.
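
You can see this in the contract the Eclipse partitioner works
against: the scanner must be restartable from the middle of a
partition (signature from
org.eclipse.jface.text.rules.IPartitionTokenScanner, comment
paraphrased):

public interface IPartitionTokenScanner extends ITokenScanner
{
    // Configures the scanner to scan [offset, offset + length) of the
    // document, where 'offset' may fall inside a partition whose
    // content type and start offset are given. A grammar-driven parser
    // has no natural way to honor this.
    void setPartialRange(IDocument document, int offset, int length,
        String contentType, int partitionOffset);
}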
From these considerations, I think that using the rendering engine's
parser is probably not feasible, and we will need to backtrack and
build a "parallel" infrastructure for parsing XWiki documents: an
infrastructure that has already been put in place by Venkatesh and
that Malaka, at this point, will finish and improve to support all
the necessary features and variants.
Malaka, could you comment on this?
And in general, WDYT?
-Fabio