Hi everyone,
Thanks for the tips and clarification about the PermGen out-of-memory error. I did a bit
of research on that myself and found out that the problem is bigger than I initially
thought. The source of the problem is not XWiki's own code, but the
supporting libraries: Hibernate, Tomcat, and Sun's JDK 1.6.
Here is a link that explains a bit about the problem:
Regarding the solution, I could not pinpoint the exact library that causes the error,
but I suspect it is related to Tomcat's class loader (see the sketch below). Raising
the permanent generation size (-XX:MaxPermSize) will not fix the leak; it only delays
the out-of-memory error. So I have temporarily moved to a Jetty server and hope it
stays up longer and runs more stably.
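
To make the suspected mechanism concrete, here is a minimal, self-contained Java
sketch of the usual class-loader leak pattern (hypothetical names, my own
illustration; this is not XWiki's or Tomcat's actual code):

import java.util.ArrayList;
import java.util.List;

public class LeakDemo {

    // Pretend this list lives in a library on Tomcat's shared classpath
    // (java.sql.DriverManager's driver registry is the classic real case).
    static final List<Object> CONTAINER_REGISTRY = new ArrayList<Object>();

    // Pretend this class was loaded by the webapp's own class loader.
    static class WebappComponent { }

    public static void main(String[] args) {
        // The webapp registers one of its objects with the container...
        CONTAINER_REGISTRY.add(new WebappComponent());
        // ...and never removes it on shutdown. The object keeps its Class
        // alive, the Class keeps its ClassLoader alive, and the class loader
        // keeps every class it ever loaded alive in PermGen. Each redeploy
        // then loads a fresh copy of those classes, so -XX:MaxPermSize only
        // raises the ceiling on memory that can never be reclaimed.
        System.out.println("Pinned by: "
                + CONTAINER_REGISTRY.get(0).getClass().getClassLoader());
    }
}
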
Richard
Date: Thu, 29 May 2008 21:21:58 +0200
From: sergiu(a)xwiki.com
To: users(a)xwiki.org
Subject: Re: [xwiki-users] Xwiki memory leak?
Pavel wrote:
> Notice that PermGen space is not the Java heap.
> Therefore I suppose it is unlikely to have anything to do with attachments.
Not in this case.
> You may want to check whether other web applications are deployed to your Tomcat
> instance - and if they use many classes/libs.
> Then you may want to increase "-XX:MaxPermSize" or undeploy some
> webapps.
You are right, PermGen has nothing to do with objects used by the XWiki
platform. So, either the default size is not enough to hold many
applications, or there is a bug in one of the libraries we're using that
creates classes at runtime (Groovy? Hibernate+CGLib?). For the moment, try
increasing -XX:MaxPermSize to 128M and see if the problem persists.
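
If you want to see what the permanent generation is actually doing before and after
the change, here is a quick check using the standard java.lang.management API (the
class name is my own; it has to run inside the same JVM as Tomcat, e.g. from a
scriptlet, to show Tomcat's pools):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints the current and maximum size of every memory pool; on Sun's
// JDK 6 one of them is the "Perm Gen" pool, so you can watch it grow.
public class PermGenWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-25s used=%,12d max=%,12d%n",
                    pool.getName(), u.getUsed(), u.getMax());
        }
    }
}

Attaching jconsole to the Tomcat process shows the same numbers graphically.
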
> Pavel
On Thu, May 29, 2008 at 9:16 PM, Richard V. <xgcom(a)hotmail.com> wrote:
> Thanks for the prompt response.
>
> The answer regarding large attachments is no. I don't have any large
> attachments in any of my pages.
>
> The "out-of-memory" error happens without any page in common, but when
that
> happens, any other page that I want to access will result with the same
> error. It is like once memory has run out, it is unrecoverable.
>
> You may be right regarding the DB connection threads not receiving the
> shutdown message, but that does not explain why I can't properly shut down
> Tomcat with XWiki deployed. Again, Tomcat shuts down smoothly when XWiki is
> not present. I don't mind not being able to gracefully shut down Tomcat,
> since I can always do "kill -9 <pid>", but what annoys me is the
> out-of-memory error after a certain time.
>
> What could be the problem? This is happening on my development machine, but
> I am going to build another Linux machine right now and see if I can
> replicate the problem.
>
> Richard
>
>> Date: Thu, 29 May 2008 19:46:20 +0200
>> From: sergiu(a)xwiki.com
>> To: users(a)xwiki.org
>> Subject: Re: [xwiki-users] Xwiki memory leak?
>>
>> Richard V. wrote:
>>> Hi XWiki users,
>>>
>>> I have encountered a problem where, after running XWiki on my Tomcat
>>> server for a week, I get a java.lang.OutOfMemoryError: PermGen space
>>> error. It appears that XWiki consumed all the Java heap space. I believe
>>> the problem may be related to unreleased DB connections because I noticed
>>> that every time I try to shut down Tomcat, it hangs with XWiki deployed;
>>> also, by running "ps aux" I see the PostgreSQL connection processes
>>> belonging to XWiki still running. I tried deploying another application
>>> that uses Hibernate + PostgreSQL on the same Tomcat running XWiki, and
>>> upon shutting down the server, all DB connection processes from the other
>>> application gracefully terminate, but not the ones from XWiki.
>>> My question is: has anyone had this problem before? If so, what is the
>>> solution?
>>> Solutions that I have tried but that did NOT work:
>>> 1- Increase the Java heap with -Xmx512m
>>> 2- Reduce the maximum and minimum idle DB connections
>>>
>>> My system specs:
>>>
>>> OS: Linux Ubuntu, kernel 2.6.20
>>> Java: 1.6.0_05-13
>>> XWiki: 1.4 (binary distribution)
>>> PostgreSQL: 8.2.6
>>> Total RAM: 1 GB
>>>
>> AFAIK, there isn't a leak, but a somewhat "normal" behavior caused by
>> the cache settings. The cache is configured as a fixed-size LRU cache,
>> so if you store large entries in the cache, they will be kept there.
>> The OOM error appears mostly when there are large attachments in the
>> database. You should either reduce the size of the attachment cache or
>> increase the memory.
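>>
>> To see why a fixed-size cache can still eat the heap, here is a generic
>> sketch of such a cache built on LinkedHashMap (just an illustration, not
>> the actual cache implementation XWiki ships):
>>
>> import java.util.LinkedHashMap;
>> import java.util.Map;
>>
>> // A fixed-size LRU cache whose capacity counts entries, not bytes: 50
>> // cached 10 MB attachments cost about 500 MB of heap even though the
>> // cache never holds more than 50 entries.
>> public class LruCache<K, V> extends LinkedHashMap<K, V> {
>>     private final int capacity;
>>
>>     public LruCache(int capacity) {
>>         super(16, 0.75f, true); // accessOrder=true gives LRU ordering
>>         this.capacity = capacity;
>>     }
>>
>>     @Override
>>     protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
>>         return size() > capacity; // evict the least-recently-used entry
>>     }
>> }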
>>
>> The shutdown problem is a different thing; it doesn't have anything to
>> do with the database connections. I think it is because of some threads
>> we're spawning that don't receive the shutdown message (the Quartz
>> scheduler, for example). A memory leak involving DB connections would
>> mean there are thousands of open connections to the database, but I am
>> pretty sure that's not the case, right?
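>>
>> For illustration, this is all it takes for a spawned thread to block JVM
>> exit (a minimal sketch of the effect, not the scheduler's actual code):
>>
>> // A non-daemon thread that never learns about shutdown keeps the whole
>> // process alive, which is exactly what a hanging "catalina.sh stop"
>> // looks like from the outside.
>> public class HangDemo {
>>     public static void main(String[] args) {
>>         Thread worker = new Thread(new Runnable() {
>>             public void run() {
>>                 while (true) { // no shutdown flag is ever checked
>>                     try { Thread.sleep(1000); }
>>                     catch (InterruptedException e) { return; }
>>                 }
>>             }
>>         });
>>         // worker.setDaemon(true); // uncommented, the JVM would exit
>>         worker.start();
>>         System.out.println("main() returns, but the process stays up");
>>     }
>> }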
>>
>> So, the main question is: Are there many & pretty large attachments in
>> the wiki?
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
_______________________________________________
users mailing list
users(a)xwiki.org
http://lists.xwiki.org/mailman/listinfo/users