I see in the administration documentation:
Encrypt cookies using IP address
Even if the password cannot be extracted from the cookie, the cookies might
be stolen (see: XSS) and used as they are.
By setting the xwiki.cfg parameter xwiki.authentication.useip to true, you
can prevent the cookies from being used by any IP address other than the
one that obtained them.
But when I look in xwiki.cfg, there is no mention of useip. Is this option
still recommended for use?
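In case it helps to show what I mean: if the option still works, I would expect the line in xwiki.cfg to look roughly like this (this is my guess based on the documentation wording, since the shipped file does not contain it):

```properties
# Tie authentication cookies to the IP address that obtained them
# (parameter name taken from the documentation quoted above; not
# present in the shipped xwiki.cfg, so this placement is a guess)
xwiki.authentication.useip=true
```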
thanks
Paul
Hi all,
I see in the "Access Rights" documentation that there are 3 ways it can be
set up:
Open Wiki
Public Wiki
Public Wiki with confirmed registration
All of those options allow the user to register without forcing the admin to
confirm the registration.
I don't want users to be able to register themselves. I have a small set of
special users and I want to be able to register them manually, or at least
have to confirm their registration before an account is created for them.
Any normal visitor should not be able to modify anything on the website, and
that includes registering themselves.
Is this possible?
thanks
Paul
Hi
Our XWiki is multi-language.
When we translate a document into other languages, how are these documents stored?
The issue I have is that when we update the original, it is not easy to remove the incorrect translations.
How do I remove only a translation without removing the default language (or all the translations at once, without the original)?
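To make the question concrete, this is the kind of URL I would have expected to work for removing just one translation; the language parameter here is a guess on my part, as I have not found it documented:

```
http://<server>/xwiki/bin/delete/Main/MyPage?language=fr
```

If the delete action does not accept a language parameter, what is the supported way?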
Gerritjan
Hi,
today I've noticed that something bad had happened to some of the attachments in my XWiki; here is a
screenshot from one of the affected pages:
http://i.imgur.com/p6Xs7.png
Take a look: a couple of attachments were uploaded, but only one is displayed in the attachments tab.
The person who uploaded them claims that they were fine yesterday, but today they have somehow disappeared.
It's odd that there is no trace of any operation on them after the upload.
I'm using XWiki Enterprise 2.5.32127 with a MySQL database (server version 5.1.47).
For more context, in recent days my users have started adding more attachments to their pages. A dump of
the database is currently around 200 MB.
I also looked at the logs and found several interesting fragments (all of the log snippets are from around
the time this was noticed):
2010-11-18 09:03:09,355
[http://apps.man.poznan.pl:28181/xwiki/bin/download/Documents/Proposals/2009…]
ERROR web.XWikiAction - Connection aborted
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
2010-11-18 13:23:53,118 [http://localhost:28181/xwiki/bin/view/Projects/Opinion+Mining] WARN
xwiki.MyPersistentLoginManager - Login cookie validation hash mismatch! Cookies have been tampered with
2010-11-18 13:23:53,119 [http://localhost:28181/xwiki/bin/view/Projects/Opinion+Mining] WARN
xwiki.MyPersistentLoginManager - Login cookie validation hash mismatch! Cookies have been tampered with
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
2010-11-18 13:57:55,471 [Lucene Index Updater] WARN lucene.AttachmentData - error getting content
of attachment [2009BEinGRIDwow2greenCONTEXTREVIEW.PPT] for document [xwiki:Documents.Presentations]
org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from
org.apache.tika.parser.microsoft.OfficeParser@72be25d1
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:138)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Cannot remove block[ 4209 ]; out of range[ 0 - 3804 ]
at org.apache.poi.poifs.storage.BlockListImpl.remove(BlockListImpl.java:98)
at org.apache.poi.poifs.storage.RawDataBlockList.remove(RawDataBlockList.java:32)
at org.apache.poi.poifs.storage.BlockAllocationTableReader.<init>(BlockAllocationTableReader.java:99)
at org.apache.poi.poifs.filesystem.POIFSFileSystem.<init>(POIFSFileSystem.java:164)
at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:74)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
... 13 more
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 3999
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 4006
Found a TextHeaderAtom not followed by a TextBytesAtom or TextCharsAtom: Followed by 4006
2010-11-18 15:05:10,412
[http://apps.man.poznan.pl:28181/xwiki/bin/download/Documents/Presentations/…]
ERROR web.XWikiAction - Connection aborted
Unfortunately, today the situation repeated itself with another group of users, following the same
scenario: after the attachments were submitted and the page edited a few times, they were gone. A snippet
from the log from that period (there were a lot of these warnings):
2010-11-19 10:43:37,199 [Lucene Index Updater] WARN util.PDFStreamEngine - java.io.IOException:
Error: expected hex character and not :32
java.io.IOException: Error: expected hex character and not :32
at org.apache.fontbox.cmap.CMapParser.parseNextToken(CMapParser.java:316)
at org.apache.fontbox.cmap.CMapParser.parse(CMapParser.java:138)
at org.apache.pdfbox.pdmodel.font.PDFont.parseCmap(PDFont.java:549)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:383)
at org.apache.pdfbox.util.PDFStreamEngine.processEncodedText(PDFStreamEngine.java:372)
at org.apache.pdfbox.util.operator.ShowText.process(ShowText.java:45)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.operator.Invoke.process(Invoke.java:74)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
One more from another user:
2010-11-19 10:43:37,464 [Lucene Index Updater] WARN util.PDFStreamEngine - java.io.IOException:
Error: expected hex character and not :32
java.io.IOException: Error: expected hex character and not :32
at org.apache.fontbox.cmap.CMapParser.parseNextToken(CMapParser.java:316)
at org.apache.fontbox.cmap.CMapParser.parse(CMapParser.java:138)
at org.apache.pdfbox.pdmodel.font.PDFont.parseCmap(PDFont.java:549)
at org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:383)
at org.apache.pdfbox.util.PDFStreamEngine.processEncodedText(PDFStreamEngine.java:372)
at org.apache.pdfbox.util.operator.ShowTextGlyph.process(ShowTextGlyph.java:61)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.operator.Invoke.process(Invoke.java:74)
at org.apache.pdfbox.util.PDFStreamEngine.processOperator(PDFStreamEngine.java:552)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:248)
at org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:142)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
2010-11-19 11:32:39,900 [Lucene Index Updater] WARN lucene.AttachmentData - error getting content
of attachment [2008BEinGRIDdesignconceptdiagramdoneinVisio.vsd] for document [xwiki:Documents.Diagrams]
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.microsoft.OfficeParser@54ad9fa4
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:134)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.Tika.parseToString(Tika.java:267)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getContentAsText(AttachmentData.java:161)
at com.xpn.xwiki.plugin.lucene.AttachmentData.getFullText(AttachmentData.java:136)
at com.xpn.xwiki.plugin.lucene.IndexData.getFullText(IndexData.java:190)
at com.xpn.xwiki.plugin.lucene.IndexData.addDataToLuceneDocument(IndexData.java:146)
at com.xpn.xwiki.plugin.lucene.AttachmentData.addDataToLuceneDocument(AttachmentData.java:65)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.addToIndex(IndexUpdater.java:296)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.updateIndex(IndexUpdater.java:237)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runMainLoop(IndexUpdater.java:171)
at com.xpn.xwiki.plugin.lucene.IndexUpdater.runInternal(IndexUpdater.java:153)
at com.xpn.xwiki.util.AbstractXWikiRunnable.run(AbstractXWikiRunnable.java:99)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalArgumentException: Found a chunk with a negative length, which isn't allowed
at org.apache.poi.hdgf.chunks.ChunkFactory.createChunk(ChunkFactory.java:120)
at org.apache.poi.hdgf.streams.ChunkStream.findChunks(ChunkStream.java:59)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:93)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:100)
at org.apache.poi.hdgf.streams.PointerContainingStream.findChildren(PointerContainingStream.java:100)
at org.apache.poi.hdgf.HDGFDiagram.<init>(HDGFDiagram.java:95)
at org.apache.poi.hdgf.extractor.VisioTextExtractor.<init>(VisioTextExtractor.java:52)
at org.apache.poi.hdgf.extractor.VisioTextExtractor.<init>(VisioTextExtractor.java:49)
at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:127)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:132)
... 13 more
I'm counting on your help, since I don't know whether this is an XWiki issue or whether I have
misconfigured something.
Regards,
Piotr
hi -
I'm one of the stranded former users of the free hosted service wik.is,
which MindTouch precipitously cancelled, forcing all users either to move
to one of their paid plans (which don't match my usage) or to leave.
I'm going to host it myself this time, and have been looking through
the alternatives. I'm attracted to xwiki for several reasons:
full ACL support; ldap authentication; good open source license without
a bunch of proprietary features; Balsamiq and Word integrations;
wysiwyg editing; and strong REST api.
From my first glance, I do still have some reservations:
1. One concern is convenient maintenance of sorted children.
I see that in xwiki's own documentation wiki, this isn't done:
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/
is just maintained manually as an index page.
As far as I can tell, by default children are just sorted by creation order.
With the http://code.xwiki.org/xwiki/bin/view/Plugins/DocumentTreePlugin
they can instead be sorted by name.
Lastly, there is http://code.xwiki.org/xwiki/bin/view/Plugins/SortedDocumentTreePlugin
But I really don't understand the instructions for "importing" a class to get
an additional sortable attribute.
Also I don't know if these plugins support sort order of spaces too, or not.
Ideally, however it is done, when a page is created, the form would have not
only the page title and parent title, but a place for an optional sort value.
2. I'd like to be able to export an entire space as a big PDF.
I can't tell if this plugin will do that:
http://code.xwiki.org/xwiki/bin/view/Applications/PDFExportPanelApplication
For example, suppose I wanted the whole XWiki AdminGuide as a single PDF:
what would I do?
3. I'd like a real bare bones look -- even slimmer than Confluence or MediaWiki,
and both of those are a little less cluttered than the Toucan skin.
I'm not finding any example skins that are like that.
4. I like the idea of supporting office app clients.
But it seems there is some clumsiness with XOffice specifying a parent:
http://jira.xwiki.org/jira/browse/XOFFICE-243
And I can't find any documentation on editing/creating content from OpenOffice
and/or via any webdav client. Things like URL conventions, etc.
This page http://platform.xwiki.org/xwiki/bin/view/DevGuide/Architecture
mentions XOO (Xwiki OpenOffice) but there is nothing else about it:
http://www.xwiki.org/xwiki/bin/view/Main/Search?text=XOO&wikinames=&space=
5. I might be blind but I can't find any documentation for the blogging support.
6. The page http://code.xwiki.org/xwiki/bin/view/Plugins/
says "Components" are preferred over plugins but the link to "Components" goes nowhere.
In general the documentation wiki is pretty confusing. The Enterprise docs link to basically
all the same places as the platform docs. There seems to be a mixture of information that is
years out of date and information about ideas not yet implemented. Sadly it is an example
of the dangers of using a wiki :(.
-mda
Hi, I created a page with macros, then included it on another page
(#includeMacros("Sprava.Macros"), or {{include
document="SpaceOf.DocumentToInclude"/}}). The problem is that when the page is
rendered, the macros are sometimes processed, but sometimes, after I change the
page that uses the macros, I get only the name of the macro (e.g.
#getType($type)) instead of the rendered macro. The page containing the macros has not changed. This seems like a bug to me.
Another problem: when I leave an empty line in the macro page, I sometimes get
this text in the rendered page:
<div class="wikimodel-emptyline"></div><div
class="wikimodel-emptyline"></div><div class="wikimodel-emptyline"></div>
Some empty lines affect the result and some do not.
Any idea?
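For reference, a minimal version of the setup that shows the pattern (the macro below is a simplified placeholder, not my real one; Sprava.Macros is written in XWiki 1.0 syntax):

```velocity
## Content of the page Sprava.Macros (a plain Velocity macro):
#macro(getType $type)
Type is: $type
#end

## Content of the page that uses it:
#includeMacros("Sprava.Macros")
#getType("document")
```

Sometimes this renders the macro body, and sometimes the literal text #getType($type) appears instead.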
Hi!
I'm using the following:
$xwiki.jsfx.use("js/xwiki/table/tablefilterNsort.js")
$xwiki.ssfx.use("js/xwiki/table/table.css")
When I define rules="all" and export the table to PDF, it doesn't come
out with the gridlines it was supposed to have... The only thing that works is
to define border="1".
How can I get all the gridlines when exporting my table?
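To show what I mean, this is the sort of explicit border rule I am considering as a workaround, on the assumption that the PDF renderer honors CSS borders even where it ignores the HTML rules="all" attribute (the .grid class name here is just an example, not a standard one):

```css
/* Draw cell borders explicitly instead of relying on rules="all",
   which the PDF export appears to ignore */
table.grid td,
table.grid th {
    border: 1px solid #000;
}
```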
--
Atenciosamente,
Erica Usui.
What are the recommended specs for a server to run XWiki? I understand this
is quite a vague question as there are many variables to consider. I would
like to run this on a VPS so a general spec would be nice.
Thanks
- Eric
Hello xwiki users,
I just installed XWiki on a Windows server. I unpacked the standalone
stable version, xwiki-enterprise-jetty-hsqldb-2.6.zip, into a directory
called XWikiHome, set up a MySQL database, and created a new user. When I am
on the server itself, I can log in. But when I try to log in from another
computer, I cannot, and XWiki logs this message:
WARN xwiki.MyPersistentLoginManager - Login cookie validation hash mismatch!
Cookies have been tampered with
I have seen a few other emails regarding the same issue, but I don't see a
solution. One suggestion was to change the cookie version
with cookie.setVersion(1), but I am not sure where this change needs to be
made. My XWikiHome folder contains the following folders and files:
XWikiHome:
database (folder)
jetty (folder)
META-INF (folder)
webapps (folder)
start_xwiki
start_xwiki
start_xwiki_debug
start_xwiki_debug
stop_xwiki
stop_xwiki
xwiki
Within those folders, I don't see any configuration file, or any file like a
"..LoginManager..". Did I install this incorrectly? I can actually
open the pages from Internet Explorer when I am on the server, but from
another computer I get the login page and cannot log in. As a side note, I
tried version 2.5 last week and did not have this problem. Then I
wanted to change the location of the wiki, so I deleted that directory and
reinstalled version 2.6 on another drive.
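One thing I still plan to check, in case it is related: the documentation mentions an option that ties authentication cookies to the IP address that obtained them, which would explain why login works on the server itself but not from another computer. Assuming the parameter name from the documentation and the usual location of the config file in the standalone zip, the line to look for would be:

```properties
# In webapps/xwiki/WEB-INF/xwiki.cfg (assumed location in the
# standalone zip); if this option is enabled, cookies are bound
# to the client IP and logins from other machines could fail
# the validation hash check
xwiki.authentication.useip=false
```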
I appreciate your suggestions. Thanks, - Nevzat
Hi,
lately a user reported another issue to me, this time regarding sections.
The scenario: there are two users and a document like this:
== section 1 ==
sect content
== section 2 ==
sect content
== section 3 ==
sect content
1. UserA opens section 2 for editing.
2. UserB opens the whole document for editing (there is no other way but to force the edit).
3. UserB adds section 1.5, so that the document becomes:
== section 1 ==
sect content
== section 1.5 ==
sect content
== section 2 ==
sect content
== section 3 ==
sect content
4. UserB saves the document.
5. UserA changes the content of section 2 and saves the document.
The result is that UserA's save overwrites 'section 1.5' instead of 'section 2'.
I can see that you index sections by number and add this as a parameter to the edit link, and that is
probably the reason the section change shifts. But I've just checked the MediaWiki sandbox, and the same
scenario causes no errors there: they also add a 'section' parameter, but somehow some extra identification
is performed as well.
Is there any chance of getting this to work correctly? Maybe there is a configuration switch?
Thanks,
Piotr