Hi!
I'd kindly ask for your help with some unclear topics:
I. How can I find information about the following dependencies:
- How much server RAM is required for each 1 GB of attachments?
- The same for CPU.
How can I estimate this and size the hardware? What are the main principles?
II. Is it possible to customise the WYSIWYG editor separately for each space within one sub-XWiki?
III. Is there any way to manage anchors from Links plugin in WYSIWYG editor?
The logic is:
- select space
- select page
- select anchor on this page
- put the link
For now, even if I manually type XWiki.WebHome#anchor in the link field, the #anchor part gets cut off.
The only way to do it is manually via the source editor; then it works fine. Personally, I found a more or less suitable solution with this Firefox add-on: https://addons.mozilla.org/ru/firefox/addon/416/
It makes it very easy to get an anchor, but not so easy to insert one. For unqualified users this makes XWiki "one-handed".
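For what it's worth, in XWiki 2.0 syntax an anchor can be specified directly in wiki markup via the link's anchor parameter; this is only a sketch, and the anchor name below is a placeholder (heading anchors are conventionally prefixed with "H"):

```
[[Link label>>XWiki.WebHome||anchor="HSomeSection"]]
```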
IV. Is there any way to make the TOC macro build a table of contents from several pages and put it on one page?
For Example:
toc Page1, Page2, Page3 ....
It's very useful to be able to group all project highlights together in one TOC.
I used to use Trac Wiki, where this works excellently. I suffer from its absence now :-)
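In the absence of such a macro, a rough Velocity sketch (untested; page names below are placeholders) could at least aggregate each page's link and section titles in one place, using the public Document API:

```velocity
{{velocity}}
## Untested sketch: list several pages with their section titles.
## Page names are placeholders; getSections() comes from the Document API.
#foreach ($name in ['Space.Page1', 'Space.Page2', 'Space.Page3'])
  #set ($tocDoc = $xwiki.getDocument($name))
  * [[$tocDoc.displayTitle>>$name]]
  #foreach ($section in $tocDoc.getSections())
    ** $section.sectionTitle
  #end
#end
{{/velocity}}
```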
Thank you
Dmitry Bakbardin
Hi,
I have an existing XWiki Enterprise 1.3.1 installation running with a MySQL backend
and storage version 7351. I would like to upgrade to XWiki Enterprise
2.6, but I am running into problems.
- A XAR export eats all RAM, and the max heap is already at the maximum
of my 32-bit Linux (-Xmx2600m); when I increase it, the JVM fails to
start. The backup.xar stops at around 1.1 GB, and then I get
OutOfMemory exceptions all over the place.
- When I install 2.6 using the same MySQL backend, the storage
migration throws a massive amount of errors, mostly due to "Exception
while saving object XWiki.XWikiUsers". This is migration
R15428XWIKI2977.
Is there a way to perform an offline migration, so I can debug where
needed, or a way to perform an export and import into a clean database
offline?
Regards,
Leen
I see in the administration documentation:
Encrypt cookies using IP address
Even if the password cannot be extracted from the cookie, the cookies might
be stolen (see: XSS) and used as they are.
By setting the xwiki.cfg parameter xwiki.authentication.useip to true you
can block the cookies from being used except by the same ip address which
got them.
But when I look in xwiki.cfg, there is no mention of useip. Is this option
still recommended for use?
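In case the option is still functional, enabling it should just be a matter of adding the key to xwiki.cfg; the key name here is taken from the documentation quoted above, so it's worth verifying against your version before relying on it:

```
# xwiki.cfg -- tie authentication cookies to the requesting IP address.
# Key name as given in the admin documentation; it may be absent from the
# shipped sample file even when still supported.
xwiki.authentication.useip=true
```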
thanks
Paul
Hi all,
I see in the "Access Rights" documentation that there are 3 ways it can be
set up:
Open Wiki
Public Wiki
Public Wiki with confirmed registration
All of those options allow the user to register without forcing the admin to
confirm the registration.
I don't want users to be able to register themselves. I have a small set of
special users and I want to be able to register them manually, or at least
have to confirm their registration before an account is created for them.
Any normal visitor should not be able to modify anything on the website, and
that includes registering themselves.
Is this possible?
thanks
Paul
Hi
Our XWIKI is multi-language.
When we translate a document into other languages, how are these translations stored?
The issue I have is that when we update the original, it is not easy to remove the now-incorrect translations.
How do I remove only a translation without removing the default language (or remove all the translations at once, keeping the original)?
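For what it's worth, a single translation can reportedly be removed by invoking the delete action with an explicit language parameter; this is an untested sketch (host, space, page, and language code are placeholders), and behaviour may vary between XWiki versions:

```
http://yourserver/xwiki/bin/delete/Space/Page?language=de
```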
Gerritjan
Hi,
today I've noticed that something bad has happened to some of the attachments in my XWiki; here is a
screenshot from one of the affected pages:
http://i.imgur.com/p6Xs7.png
Take a look: a couple of attachments were uploaded, but only one is displayed in the attachments tab.
The person who uploaded them claims that yesterday they were fine, but today they somehow disappeared.
It's odd that there is no trace of any operation on them after the upload phase.
I'm using XWiki Enterprise 2.5.32127 with a MySQL database (server version 5.1.47).
For more context, over the last few days my users have started adding more attachments to their pages.
The database dump is currently around 200 MB.
I also looked at the logs and found several interesting fragments (all of the log snippets below are from
the time the problem was noticed):
2010-11-18 09:03:09,355
[http://apps.man.poznan.pl:28181/xwiki/bin/download/Documents/Proposals/2009…]
ERROR web.XWikiAction - Connection aborted
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
2010-11-18 13:23:53,118 [http://localhost:28181/xwiki/bin/view/Projects/Opinion+Mining] WARN
xwiki.MyPersistentLoginManager - Login cookie validation hash mismatch! Cookies have been tampered with
2010-11-18 13:23:53,119 [http://localhost:28181/xwiki/bin/view/Projects/Opinion+Mining] WARN
xwiki.MyPersistentLoginManager - Login cookie validation hash mismatch! Cookies have been tampered with
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
2010-11-18 13:57:55,471 [Lucene Index Updater] WARN lucene.AttachmentData - error getting content
of attachment [2009BEinGRIDwow2greenCONTEXTREVIEW.PPT] for document [xwiki:Documents.Presentations]
org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from
org.apache.tika.parser.microsoft.OfficeParser@72be25d1
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:138)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Cannot remove block[ 4209 ]; out of range[ 0 - 3804 ]
at org.apache.poi.poifs.storage.BlockListImpl.remove(BlockListImpl.java:98)
at org.apache.poi.poifs.storage.RawDataBlockList.remove(RawDataBlockList.java:32)
at org.apache.poi.poifs.storage.BlockAllocationTableReader.<init>(BlockAllocationTableReader.java:99)
at org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:164)
at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:74)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
... 13 more
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 4006
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 4006
2010-11-18 15:05:10,412
[http://apps.man.poznan.pl:28181/xwiki/bin/download/Documents/Presentations/…]
ERROR web.XWikiAction - Connection aborted
Unfortunately, today the situation has repeated with another group of users, with the same scenario: after
the attachment submission and a few edits of the page, the attachments are gone. A snippet from the log
from that period (there are a lot of these warnings):
2010-11-19 10:43:37,199 [Lucene Index Updater] WARN util.PDFStreamEngine - java.io.IOException:
Error: expected hex character and not :32
java.io.IOException: Error: expected hex character and not :32
at org.apache.fontbox.cmap.CMapParser.parseNextToken(CMapParser.java:316)
at org.apache.fontbox.cmap.CMapParser.parse(CMapParser.java:138)
at org.apache.pdfbox.pdmodel.font.PDFont.parseCmap(PDFont.java:549)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:383)
at org.apache.pdfbox.util.PDFStreamEngine.processEncodedText(PDFStreamEngine.java:372)
at org.apache.pdfbox.util.operator.ShowText.process(ShowText.java:45)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.operator.Invoke.process(Invoke.java:74)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
One more from another user:
2010-11-19 10:43:37,464 [Lucene Index Updater] WARN util.PDFStreamEngine - java.io.IOException:
Error: expected hex character and not :32
java.io.IOException: Error: expected hex character and not :32
at org.apache.fontbox.cmap.CMapParser.parseNextToken(CMapParser.java:316)
at org.apache.fontbox.cmap.CMapParser.parse(CMapParser.java:138)
at org.apache.pdfbox.pdmodel.font.PDFont.parseCmap(PDFont.java:549)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:383)
at org.apache.pdfbox.util.PDFStreamEngine.processEncodedText(PDFStreamEngine.java:372)
at org.apache.pdfbox.util.operator.ShowTextGlyph.process(ShowTextGlyph.java:61)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.operator.Invoke.process(Invoke.java:74)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:142)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
2010-11-19 11:32:39,900 [Lucene Index Updater] WARN lucene.AttachmentData - error getting content
of attachment [2008BEinGRIDdesignconceptdiagramdoneinVisio.vsd] for document [xwiki:Documents.Diagrams]
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.microsoft.OfficeParser@54ad9fa4
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:134)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalArgumentException: Found a chunk with a negative length, which isn't allowed
at org.apache.poi.hdgf.chunks.ChunkFactory.createChunk(ChunkFactory.java:120)
at org.apache.poi.hdgf.streams.ChunkStream.findChunks(ChunkStream.java:59)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:93)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:100)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:100)
at org.apache.poi.hdgf.HDGFDiagram.<init>(HDGFDiagram.java:95)
at org.apache.poi.hdgf.extractor.VisioTextExtractor.<init>(VisioTextExtractor.java:52)
at org.apache.poi.hdgf.extractor.VisioTextExtractor.<init>(VisioTextExtractor.java:49)
at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:127)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
... 13 more
I'm counting on your help, since I don't know whether this is an XWiki issue or whether I have misconfigured something.
Regards,
Piotr
hi -
I'm one of the stranded former users of the free hosted service wik.is,
which MindTouch precipitously cancelled, forcing all users either to move
to one of their paid plans (which don't match my usage) or to leave.
I'm going to host it myself this time, and have been looking through
the alternatives. I'm attracted to xwiki for several reasons:
full ACL support; ldap authentication; good open source license without
a bunch of proprietary features; Balsamiq and Word integrations;
wysiwyg editing; and strong REST api.
From my first glance, I do still have some reservations:
1. One concern is convenient maintenance of sorted children.
I see that in xwiki's own documentation wiki, this isn't done:
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/
is just maintained manually as an index page.
As far as I can tell, by default children are just sorted by creation order.
With the http://code.xwiki.org/xwiki/bin/view/Plugins/DocumentTreePlugin
they can instead be sorted by name.
Lastly, there is http://code.xwiki.org/xwiki/bin/view/Plugins/SortedDocumentTreePlugin
But I really don't understand the instructions for "importing" a class to get
an additional sortable attribute.
Also I don't know if these plugins support sort order of spaces too, or not.
Ideally, however it is done, when a page is created the form would have not
only the page title and parent title, but also a place for an optional sort value.
2. I'd like to be able to export an entire space as a big PDF.
I can't tell if this plugin will do that:
http://code.xwiki.org/xwiki/bin/view/Applications/PDFExportPanelApplication
For example, suppose I wanted the whole XWiki AdminGuide as a single PDF;
what would I do?
3. I'd like a real bare bones look -- even slimmer than Confluence or MediaWiki,
and both of those are a little less cluttered than the Toucan skin.
I'm not finding any example skins that are like that.
4. I like the idea of supporting office app clients.
But it seems there is some clumsiness with XOffice specifying a parent:
http://jira.xwiki.org/jira/browse/XOFFICE-243
And I can't find any documentation on editing/creating content from OpenOffice
and/or via any webdav client. Things like URL conventions, etc.
This page http://platform.xwiki.org/xwiki/bin/view/DevGuide/Architecture
mentions XOO (Xwiki OpenOffice) but there is nothing else about it:
http://www.xwiki.org/xwiki/bin/view/Main/Search?text=XOO&wikinames=&space=
5. I might be blind but I can't find any documentation for the blogging support.
6. The page http://code.xwiki.org/xwiki/bin/view/Plugins/
says "Components" are preferred over plugins, but the link to "Components" goes nowhere.
In general the documentation wiki is pretty confusing. The Enterprise docs link to basically
all the same places as the platform docs. There seems to be a mixture of information that is
years out of date and information about ideas not yet implemented. Sadly it is an example
of the dangers of using a wiki :(.
-mda
Hi, I created a page with macros, then included it on another page
(#includeMacros("Sprava.Macros"), or {{include
document="SpaceOf.DocumentToInclude"/}}). The problem is that when the page is
rendered, the macros are sometimes processed, but sometimes, after I change the
page where I use the macros, I get only the macro's name (e.g. #getType($type))
instead of the rendered macro. The page containing the macros has not changed.
This seems like a bug to me.
Another problem: when I leave an empty line in the macro page, I sometimes get
this text in the rendered page:
<div class="wikimodel-emptyline"></div><div
class="wikimodel-emptyline"></div><div class="wikimodel-emptyline"></div>
Some empty lines affect the result and some do not.
Any idea?
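As a cosmetic workaround (not a fix for the underlying rendering issue), the stray empty-line markers can be hidden with a CSS rule in a skin stylesheet; this assumes the class name shown in the output above:

```css
/* Hide the empty-line placeholders emitted by the renderer.
   Workaround only; the underlying include/rendering issue remains. */
.wikimodel-emptyline { display: none; }
```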
--
Hi!
I'm using the following:
$xwiki.jsfx.use("js/xwiki/table/tablefilterNsort.js")
$xwiki.ssfx.use("js/xwiki/table/table.css")
When I define rules="all" and export the table to PDF, it doesn't come out
with the lines it was supposed to have... The only thing that works is
to define border="1".
How can I get all the lines when exporting my table?
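One avenue worth trying: the PDF export is styled with its own CSS, so explicit border rules there may restore the lines. This is only a sketch; the selectors are generic and may need to be narrowed to your table's actual class:

```css
/* Sketch: force visible cell borders in the PDF export output.
   Selectors are deliberately broad; narrow them to your table if needed. */
table { border-collapse: collapse; }
table, th, td { border: 1px solid #000; }
```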
--
Atenciosamente,
Erica Usui.
What are the recommended specs for a server to run XWiki? I understand this
is quite a vague question, as there are many variables to consider. I would
like to run this on a VPS, so a general spec would be nice.
Thanks
- Eric