Hi developers,
It would be great to see some developers take an interest in this thread.
We need to better understand XWiki's memory usage in order to achieve
higher throughput with controlled memory usage.
I've found an additional interesting tool to use, the Eclipse Memory
Analyzer, which works with a heap dump retrieved using the command "jmap
-heap:format=b <processid>".
(This is practical because we can get such a dump from any running VM, and
we can even configure the VM to produce one when it hits OutOfMemory.)
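For reference, on a Java 6 VM the dump syntax is slightly different, and
there are flags to have the VM write a dump by itself on OutOfMemory.
Something like this (file paths are just examples):

  # take a binary heap dump from a running VM (Java 6 syntax)
  jmap -dump:format=b,file=/tmp/xwiki-heap.hprof <processid>

  # or have the VM write one automatically when it hits OutOfMemoryError
  java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp ...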
The Memory Analyzer gives some interesting results. I retrieved a dump from
myxwiki.org and analyzed it a bit:
http://www.zapnews.tv/xwiki/bin/download/Admin/MemoryUsage/myxwikiorgmem.png
As the image shows, we have a significant amount of memory in the velocity
package, in a structure meant to store all the Velocity macros.
It's 170 MB, which represents 37% of the heap and is more than what the
documents themselves take.
I suspect that if it can grow to this size it can grow further, and we
could reach OutOfMemory from this module alone.
There is a chance that this is linked to multi-wiki usage, where macros are
kept in a separate context for each wiki, but it could also be
something that grows every time a macro is encountered in a page.
Even if it only grows with the number of wikis, it is still potentially a
scalability issue. I analyzed memory a long time ago and did not see
Velocity storing a lot of information, so this could be linked to the new
component-based implementation.
Velocity + JBoss Cache seem to hold at least 70% of the consumed heap.
This is clearly the area to focus on, to verify that we can keep it under
control.
Ludovic
On 07/05/10 16:50, Ludovic Dubost wrote:
Hi developers,
A while ago I was looking for some ways to track how much memory is
used by our internal cache and was not able to find anything.
I've tried it again and this time I found the following code:
http://www.javamex.com/classmexer/
It requires a simple instrumentation agent to work, but I was able to get
some results out of it and measure the size of our documents in cache.
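In case it helps someone reproduce the measurement, here is a rough sketch
of how classmexer can be called (the JVM is started with
-javaagent:classmexer.jar; the class and the way the cached document is
obtained are placeholders, not the actual code behind the MemoryUsage page):

  import com.javamex.classmexer.MemoryUtil;

  public class CacheSizeProbe
  {
      public static long measure(Object cachedDocument)
      {
          // Shallow size: the object header and its direct fields only.
          long shallow = MemoryUtil.memoryUsageOf(cachedDocument);

          // Deep size: follows references, so it also counts the
          // attachment content/archive when they are loaded.
          long deep = MemoryUtil.deepMemoryUsageOf(cachedDocument);

          System.out.println("shallow=" + shallow + " bytes, deep=" + deep + " bytes");
          return deep;
      }
  }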
You can see the result on a personal server:
Measuring one page:
http://www.zapnews.tv/xwiki/bin/view/Admin/MemoryUsage
Measuring all pages in cache:
http://www.zapnews.tv/xwiki/bin/view/Admin/MemoryUsage?page=all
The first result I can see is that, unsurprisingly, the items taking the
most memory are:
- attachment content
- attachment archive
- archive
What I was able to see is that, as expected, these fields don't consume
memory until we ask for the data.
And after a while the memory is indeed discarded for these fields, so
the use of SoftReferences for them seems to work.
Now, what I can also see is that the attachment archive can be extremely
costly in memory.
It is also not clear how the memory held by these fields is garbage
collected (an explicit GC did not recover it).
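For context, my understanding is that a soft reference is only cleared when
the VM starts running low on memory, not on every collection, which would
explain why a manual GC does not free these fields. The pattern for such a
field looks roughly like this (names are illustrative, not the actual
XWikiAttachment code):

  import java.lang.ref.SoftReference;

  public class AttachmentArchiveHolder
  {
      // The archive is held through a SoftReference so the GC is
      // allowed to drop it when the heap gets tight.
      private SoftReference<byte[]> archive;

      public byte[] getArchive()
      {
          byte[] data = (this.archive == null) ? null : this.archive.get();
          if (data == null) {
              // Not loaded yet, or already cleared by the GC: reload
              // from the store and cache it softly again.
              data = loadArchiveFromStore();
              this.archive = new SoftReference<byte[]>(data);
          }
          return data;
      }

      private byte[] loadArchiveFromStore()
      {
          // Placeholder for the real database load.
          return new byte[0];
      }
  }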
From experience with massive loading of attachments that led to
OutOfMemory errors on the server, I do suspect that the SoftReferences
are not necessarily discarded fast enough to avoid the OutOfMemory. I
also believe that a search engine walking all our pages, including the
archive pages, can generate significant memory usage that could lead to
problems. But this is only an intuition that needs to be verified.
I believe we need to run some stress testing to see whether the cache
and memory usage behave properly and whether the cache(s) can ever push
memory usage beyond what is available.
We should also try classmexer on heavily used servers, look at the
memory usage, and see whether we are "controlling" it.
I'm not 100% sure how intrusive the instrumentation agent is, but I
believe it's quite light.
We could try it on xwiki.org or on myxwiki.org.
WDYT ?
Ludovic
--
Ludovic Dubost
Blog: http://blog.ludovic.org/
XWiki: http://www.xwiki.com
Skype: ldubost  GTalk: ldubost