Thanks for the prompt response.
The answer regarding large attachments is no: I don't have any large attachments on
any of my pages.
The "out-of-memory" error is not tied to any particular page, but once it happens,
every other page I try to access fails with the same error. It is as if, once memory
has run out, the server cannot recover.
You may be right that the DB connection threads are not receiving the shutdown
message, but that does not explain why I can't shut down Tomcat properly with XWiki
deployed. Again, Tomcat shuts down smoothly when XWiki is not present. I don't mind
being unable to shut down Tomcat gracefully, since I can always run "kill -9
<pid>", but what annoys me is the out-of-memory error after a certain time.
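For what it's worth, one way to rule the connection theory in or out is to count the live backends on the PostgreSQL side rather than eyeballing ps. A sketch, assuming the wiki's database is named xwiki and you can connect as the postgres superuser (adjust both to your setup):

```shell
# Count open connections to the wiki database. The database name "xwiki"
# and the postgres superuser are assumptions; adjust to your setup.
psql -U postgres -d postgres -c \
  "SELECT count(*) FROM pg_stat_activity WHERE datname = 'xwiki';"
```

A genuine connection leak would show this number climbing steadily; a stable count would point elsewhere.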
What could be the problem? This is happening on my development machine, but I am going to
set up another Linux machine right now and see if I can replicate the problem.
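While replicating it, it may help to confirm that it is really the permanent generation, not the main heap, that fills up. Assuming a Sun/HotSpot JDK, whose jstat and jmap tools live in $JAVA_HOME/bin:

```shell
# Find the Tomcat JVM's pid (pattern is Tomcat's standard bootstrap class).
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)

# Sample GC utilisation every 10 s; the "P" column is PermGen usage (%).
jstat -gcutil "$PID" 10000

# Histogram of live objects, to see which classes are piling up.
jmap -histo "$PID" | head -n 30
```

If the P column creeps toward 100% while the old generation (O) stays healthy, it is a PermGen problem rather than an ordinary heap leak.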
Richard
Date: Thu, 29 May 2008 19:46:20 +0200
From: sergiu(a)xwiki.com
To: users(a)xwiki.org
Subject: Re: [xwiki-users] Xwiki memory leak?
Richard V. wrote:
Hi XWiki users,
I have encountered a problem where, after running XWiki on my Tomcat server for a week, I
get a java.lang.OutOfMemoryError: PermGen space error. It appears that XWiki has consumed
all of the Java heap space. I believe the problem may be related to unreleased DB
connections: I noticed that every time I try to shut down Tomcat it hangs with XWiki
deployed, and by running "ps aux" I can see the PostgreSQL connection processes belonging
to XWiki still running. I tried deploying another application that uses Hibernate +
PostgreSQL on the same Tomcat instance, and upon shutting down the server, all DB
connection processes from the other application terminate gracefully, but not the ones
from XWiki.
My question is: has anyone had this problem before? If so, what is the solution?
Solutions that I have tried but did NOT work:
1- Increase java heap with -Xmx512m
2- Reduce the maximum and minimum idle DB connections
My system specs:
OS: Linux ubuntu kernel 2.6.20
Java: 1.6.0_05-13
XWiki: 1.4 (binary distribution)
PostgreSQL: 8.2.6
Total RAM: 1 GB
AFAIK, there isn't a leak, but a somewhat "normal" behavior caused by
the cache settings. The cache is configured to be a fixed size LRU
cache, so if you store large entries in the cache, they will be kept
there. The OOM error appears mostly when there are large attachments in
the database. You should either reduce the size of the attachment cache
or increase the memory.
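A side note on "increase the memory": on Java 6 HotSpot the permanent generation is sized separately from the main heap, so -Xmx alone (solution 1 above) does not help against a PermGen OOM. Raising the PermGen limit and shrinking the cache would look roughly like this; the xwiki.cfg property name is an assumption and may differ between versions:

```shell
# In $CATALINA_HOME/bin/setenv.sh (create it if absent):
# -Xmx sizes the main heap; PermGen has its own limit on HotSpot.
export CATALINA_OPTS="-Xmx512m -XX:MaxPermSize=192m"

# In WEB-INF/xwiki.cfg -- property name assumed, check your version:
# xwiki.store.cache.capacity=50
```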
The shutdown problem is a different thing; it doesn't have anything to
do with the database connections. I think it is because of some threads
we're spawning that don't receive the shutdown message (the quartz
scheduler for example). A memory leak regarding DB connections would
mean there are thousands of open connections to the database, but I am
pretty sure that's not the case, right?
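The hang itself is typical of a non-daemon worker thread that nobody interrupts: the JVM will not exit while one is still alive. A minimal sketch of the pattern (the names are illustrative, not XWiki's actual classes):

```java
// Sketch of why a leftover non-daemon thread keeps the JVM from exiting,
// and the explicit interrupt that a shutdown listener has to deliver.
public class ShutdownSketch {

    // Starts a worker that loops until interrupted (like a scheduler poller).
    static Thread startWorker() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Thread.sleep(100); // pretend to poll a job queue
                    } catch (InterruptedException e) {
                        // restore the flag so the loop condition sees it
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }, "scheduler-worker");
        worker.start();
        return worker;
    }

    // Returns true if the worker dies once someone actually interrupts it.
    static boolean stopsCleanly() throws InterruptedException {
        Thread worker = startWorker();
        worker.interrupt();  // without this, the JVM would hang on exit
        worker.join(5000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + stopsCleanly());
    }
}
```

In a webapp the interrupt would typically be delivered from a ServletContextListener's contextDestroyed method; without it, such a thread survives the webapp stop and Tomcat's shutdown blocks exactly as described.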
So, the main question is: Are there many & pretty large attachments in
the wiki?
--
Sergiu Dumitriu
http://purl.org/net/sergiu/
_______________________________________________
users mailing list
users(a)xwiki.org
http://lists.xwiki.org/mailman/listinfo/users