Folks,
I'm testing XWiki's scalability with a simple benchmark that creates N
pages in a loop (one page at a time, it's not a parallel run!) and then,
when that loop finishes, fetches all the pages back from the server in a
second loop (again serially, one page at a time). For page creation we're
using the REST API; for getting the pages we're using the normal
browsable URL (/xwiki/bin/view/...). A rough sketch of the two loops is
below.
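To make it concrete, the benchmark does roughly the following (a minimal
sketch only; the base URL, space name, page names and credentials are
made up, and the XML body is just the usual REST page representation):

  BASE=http://localhost:8080/xwiki
  N=100000

  # Phase 1: create N pages, one REST PUT at a time (no parallelism)
  for i in $(seq 1 $N); do
    curl -s -u Admin:admin -X PUT \
         -H "Content-Type: application/xml" \
         -d "<page xmlns=\"http://www.xwiki.org\"><title>Bench $i</title><content>page $i</content></page>" \
         "$BASE/rest/wikis/xwiki/spaces/Bench/pages/Page$i" > /dev/null
  done

  # Phase 2: read every page back through the normal view URL, again serially
  for i in $(seq 1 $N); do
    curl -s "$BASE/bin/view/Bench/Page$i" > /dev/null
  done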
Now the problem is that if I attempt to create 100k pages, I hit Java
out-of-memory errors and the server is unresponsive from that point on.
I've tested this on:
- xwiki-jetty-hsql-6.0.0
- xwiki-jetty-hsql-6.0.1
- xwiki-tomcat7-pgsql -- the Debian XWiki packages running on top of Debian 7.5
Of course I know how to increase Java's heap space. The problem is that
this will not help here: if I do so and then create 100 million pages in
one run, I will still hit the same issue, it will just take a lot longer.
I've googled a bit for memory-leak issues on Java and found an
interesting recommendation to use the parallel GC. So I've changed
start_xwiki.sh to include -XX:+UseParallelGC in XWIKI_OPTS.
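The change in start_xwiki.sh looks roughly like this (the rest of
XWIKI_OPTS is whatever the distribution already sets):

  # append the parallel collector flag to the existing startup options
  XWIKI_OPTS="$XWIKI_OPTS -XX:+UseParallelGC"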
Anyway, the situation still looks suspicious. I've connected JConsole to
the XWiki Java process; the overview of the whole run is here:
https://app.box.com/s/udndu96pl2fvuz3igvor
and it is perhaps even clearer in the last-two-hours view, which is here:
https://app.box.com/s/deuix33fzejra4uur941
As a side note, all of this is from debugging the xwiki-jetty-hsql-6.0.1
distribution.
Now, what worries me a lot is the bottom cap, i.e. the floor of the heap
usage graph, which keeps growing. You can see it clearly in the Heap
Memory Usage chart from 15:15 onwards. In the CPU Usage chart you can
also see that around the same time the CPU consumption went up from ~15%
to ~45%.
When I switch to the Memory tab in JConsole and click the "Perform GC"
button several times, the bottom cap is still there and memory usage
will not drop any lower. With this going on, after some time I also see
the server fail with an OOM error.
Any help with this is highly appreciated.
Thanks!
Karel