The standalone .zip package is not designed to hold many pages. It uses
an in-memory database that requires as much heap space as the amount of
data that you have (plus all the other memory that XWiki normally
requires). I thought there was a bigger warning on the download page
that clarified that the standalone package is only supposed to be used
for small tests...
The pgsql package should behave better, though, since it separates the
database from the live objects, but you do need to make sure Tomcat's
default memory limit is increased.
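On the Debian packages the usual place to raise Tomcat's heap is the
JAVA_OPTS line in /etc/default/tomcat7; the path and the sizes below are
only an illustration, adjust them to your own setup:

    # /etc/default/tomcat7 (Debian) -- sizes are examples only.
    JAVA_OPTS="-Djava.awt.headless=true -Xmx1024m -XX:MaxPermSize=192m"

followed by a Tomcat restart (service tomcat7 restart).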
On 05/28/2014 12:02 PM, Karel Gardas wrote:
Thomas,
thanks for your fast response. My comments are below.
On 05/28/14 05:27 PM, Thomas Mortagne wrote:
You are mixing several things here. Hitting OutOfMemory errors does not
necessarily mean you have a memory leak; it can simply mean, for
example, that the document cache is too big for the memory you
allocated. You can modify the document cache size in xwiki.cfg.
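For example, the knob for that is the document cache capacity, assuming
the property is still named the same way in 6.0.1 (the value below is
only an illustration, not the shipped default):

    # xwiki.cfg -- keep the document cache enabled but cap how many
    # documents it may hold; 100 is just an example value.
    xwiki.store.cache=1
    xwiki.store.cache.capacity=100

The lower the capacity, the less heap the cache can pin, at the cost of
more database round trips.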
I see two cache settings in that file:
xwiki.store.cache
xwiki.render.cache
Both seem to be commented out.
How much memory did you allocate? XWiki is not a small beast and it
requires a minimum amount to work. See
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances#HMemory.
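For the standalone package the heap is set through XWIKI_OPTS in
start_xwiki.sh, so a rough sketch of giving it more memory would be
(the 1024m figure is purely an example, size it to your data set):

    # In start_xwiki.sh -- 1024m is only an example value.
    XWIKI_OPTS="$XWIKI_OPTS -Xmx1024m"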
To prevent any misunderstanding: I've not set any memory limit myself.
I simply use xwiki-enterprise-jetty-hsqldb-6.0.1.zip as distributed on
xwiki.org. This distro has a 512MB RAM cap, which according to your
link above should be good for medium installs.
The question is: if the cache settings above are commented out in
xwiki.cfg, what actual default values does the
xwiki-enterprise-jetty-hsqldb-6.0.1.zip distro use? Just so I know from
which value I should go lower...
Thanks!
Karel
>
> On Wed, May 28, 2014 at 4:58 PM, Karel Gardas
> <karel.gardas(a)centrum.cz> wrote:
>>
>> Folks,
>>
>> I'm testing the scalability of XWiki with a simple benchmark that
>> creates N pages in a loop (one page at a time, it's not a parallel
>> run!) and then, when this loop finishes, gets all the pages back from
>> the server in another loop (again serially, one page at a time). For
>> page creation we're using the REST API; for getting the pages we're
>> using the common browsable URL (/xwiki/bin/view/...).
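>> The creation loop is essentially the following shape (the space and
>> page names, credentials and N below are placeholders rather than the
>> exact values from my run, and the endpoint follows the documented
>> /rest/wikis/.../spaces/.../pages/... form):
>>
>>     #!/bin/sh
>>     # Create N pages one at a time via REST, then fetch each one back
>>     # through the normal view URL. Names/credentials are examples.
>>     N=100000
>>     for i in $(seq 1 $N); do
>>       curl -s -u Admin:admin -X PUT \
>>            -H 'Content-Type: application/xml' \
>>            -d "<page xmlns=\"http://www.xwiki.org/rest\"><title>Bench $i</title><content>page $i</content></page>" \
>>            "http://localhost:8080/xwiki/rest/wikis/xwiki/spaces/Bench/pages/Page$i" \
>>            > /dev/null
>>     done
>>     for i in $(seq 1 $N); do
>>       curl -s "http://localhost:8080/xwiki/bin/view/Bench/Page$i" > /dev/null
>>     done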
>> Now the problem is that if I try to create 100k pages, I hit Java's
>> out of memory errors and the server is unresponsive from that point
>> on.
>> I've tested this on:
>>
>> - xwiki-jetty-hsql-6.0.0
>> - xwiki-jetty-hsql-6.0.1
>> - xwiki-tomcat7-pgsql -- debian xwiki packages running on top of
>> debian 7.5
>>
>> Of course I know how to increase Java's heap space. The problem is
>> that this will not help here: if I do so and then create 100 million
>> pages in one run, I will still hit the same issue, it will just take
>> a lot longer.
>>
>> I've googled a bit for memory leak issues in Java and found an
>> interesting recommendation to use the parallel GC, so I've changed
>> start_xwiki.sh to include -XX:+UseParallelGC in XWIKI_OPTS.
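>> Concretely the change looks roughly like this in start_xwiki.sh
>> (adding GC logging and an OOM heap dump as well would make the
>> behaviour easier to diagnose; the dump path is arbitrary):
>>
>>     # -XX:+UseParallelGC is the change described above; the remaining
>>     # flags only add GC logging and a heap dump on OOM.
>>     XWIKI_OPTS="$XWIKI_OPTS -XX:+UseParallelGC \
>>       -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
>>       -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/xwiki-oom.hprof"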
>>
>> Anyway, the situation still looks suspicious. I've connected JConsole
>> to the XWiki Java process and the overall view looks like this:
>>
>> https://app.box.com/s/udndu96pl2fvuz3igvor
>>
>> This is the whole-run overview, but perhaps it's even clearer on the
>> last-2-hours view here:
>>
>> https://app.box.com/s/deuix33fzejra4uur941
>>
>> Side note: this is all from debugging the xwiki-jetty-hsql-6.0.1
>> distro.
>>
>> Now, what worries me a lot is the bottom cap (the heap usage floor),
>> which keeps growing. You can see it clearly in Heap Memory Usage from
>> 15:15. In CPU Usage you can also see that around the same time the
>> CPU consumption went up from ~15% to ~45%.
>>
>> When I switch to the Memory tab in JConsole and click the "Perform
>> GC" button several times, that bottom cap is still there and I cannot
>> get memory usage any lower. With this going on I can also see the
>> server failing after some time with an OOM error.
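>> If needed, a heap histogram or dump should show what keeps this
>> memory reachable (standard JDK tools; <pid> and the file name are
>> placeholders):
>>
>>     # Replace <pid> with the XWiki Java process id.
>>     jmap -histo:live <pid> | head -n 30
>>     jmap -dump:live,format=b,file=/tmp/xwiki.hprof <pid>
>>     # Open the .hprof file in jvisualvm or Eclipse MAT to see which
>>     # objects hold the memory.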
>>
>> Any help with this is highly appreciated here.
>>
>> Thanks!
>> Karel