Hi Ed/Mac,
You whet our appetite with your plugin... Any progress?
Thanks
-Vincent
On Jan 6, 2007, at 1:54 AM, Ludovic Dubost wrote:
Hi,
This looks nice... We are already using JTidy for the HTML -> XHTML
conversion before we export to PDF.
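For reference, that conversion is basically something along these lines (a
rough, untested sketch from memory of the JTidy API):

import java.io.InputStream;
import java.io.OutputStream;

import org.w3c.tidy.Tidy;

public class HtmlToXhtmlConverter
{
    /** Run JTidy over an HTML stream and write XHTML to the output stream. */
    public void convert(InputStream htmlIn, OutputStream xhtmlOut)
    {
        Tidy tidy = new Tidy();
        tidy.setXHTML(true);         // emit XHTML instead of cleaned-up HTML
        tidy.setQuiet(true);         // don't print progress to the console
        tidy.setShowWarnings(false); // parse warnings are not interesting here
        tidy.parse(htmlIn, xhtmlOut);
    }
}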
A good way to do this is indeed as an XWiki plugin.
I would suggest 2 plugins:
1/ Importer
It would be great if the importer had a generic part and a format-specific
part. We could then reuse the generic base code to import from other wiki
formats.
One case that is complex when importing is handling links. You might need
to read multiple files to convert links from one format to another. For
example, if you have a set of HTML files with relative links, you might
want to convert them to wiki links.
An importer framework which handles this would be great.
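To make the two-pass idea concrete, here is a rough, untested sketch (the
class name and the [label>Page] link syntax are just illustrations; a real
importer would use a proper HTML parser rather than a regex):

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkRewriter
{
    /** First pass: remember which wiki page each HTML file becomes. */
    private final Map<String, String> pageNames = new HashMap<String, String>();

    public void registerFile(File htmlFile)
    {
        String fileName = htmlFile.getName();
        pageNames.put(fileName, fileName.replaceAll("\\.html?$", ""));
    }

    /** Second pass: turn relative hrefs into wiki links when the target is known. */
    public String rewriteLinks(String html)
    {
        Pattern href = Pattern.compile("<a\\s+href=\"([^\"/:]+\\.html?)\"[^>]*>([^<]*)</a>");
        Matcher matcher = href.matcher(html);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            String target = pageNames.get(matcher.group(1));
            String replacement = (target != null)
                ? "[" + matcher.group(2) + ">" + target + "]" // known file: wiki link
                : matcher.group(0);                           // unknown target: leave as-is
            matcher.appendReplacement(result, Matcher.quoteReplacement(replacement));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}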
2/ Validator/Cleaner
It would be great to be able to use each part separately and to have some
options when calling it.
Besides cross-site scripting, being able to allow/refuse lists of tags
would be great, and also to allow/refuse Velocity and Groovy.
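To illustrate the allow/refuse tag list part, a minimal sketch (regex-based,
so only an illustration: a real cleaner would also filter attributes such as
onclick and javascript: URLs, and would handle the Velocity/Groovy switches
separately):

import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagFilter
{
    private final Set<String> allowedTags;

    public TagFilter(Collection<String> allowed)
    {
        this.allowedTags = new HashSet<String>(allowed);
    }

    /** Remove every tag that is not on the allow list; the text content is kept. */
    public String filter(String html)
    {
        Pattern tag = Pattern.compile("</?\\s*([a-zA-Z][a-zA-Z0-9]*)[^>]*>");
        Matcher matcher = tag.matcher(html);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            if (!allowedTags.contains(matcher.group(1).toLowerCase())) {
                matcher.appendReplacement(result, ""); // strip e.g. <script> or <iframe>
            }
        }
        matcher.appendTail(result);
        return result.toString();
    }
}

// Usage: new TagFilter(Arrays.asList("p", "a", "b", "i", "ul", "li")).filter(content)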
I guess Vincent would like to see the plugin outside of the core, but
I'll leave this to him.
Ludovic
Mac wrote:
Hey there,
I was thinking about building an XWiki plugin that would import
(dynamically) another website, either whole or in part (with a little XML
parsing of it). Besides reading the one-page Plugin Doc, is there anything
else I could use to help speed up this development, like an XML
parser/HtmlTidy or something that XWiki already uses?
Also,
While I am on the topic of HTML: since the wiki accepts HTML/XML in the
normal editing of a page, I would be willing to write a Validator that
strips out dangerous HTML (cross-site scripting, ...). I have already done
this in a filter for a pseudo-wiki I was building from scratch, but I would
be willing to rewrite it in a way that would be helpful to the project, if
someone could point me in the right direction as to where this type of
class would fit into the project.
Thanks,
Can't wait to see beta 2