18

We have a Java webapp that we upgraded from Java 1.5.0_19 to Java 1.6.0_21. The JVM is launched with:

/usr/java/jdk1.6.0_21/bin/java -server -Xms2000m -Xmx3000m -XX:MaxPermSize=256m -Djava.awt.headless=true -Dwg.environment=production -Djava.io.tmpdir=/var/cache/jetty -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=31377 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/webapp -Dprogram.name=run.sh -Djava.endorsed.dirs=/opt/3p/jboss/lib/endorsed -classpath /opt/3p/jboss/bin/run.jar:/usr/java/jdk1.6.0_21/lib/tools.jar org.jboss.Main -c default

As you can see, it should preallocate 2GB of heap and max out at 3GB (we preallocate so much because this app is ancient and poorly designed, so it has a ton of things to load up). The issue we have seen recently, after upgrading to 1.6, is that on occasion memory goes through the roof. While the memory usage is probably an application issue, the JVM is exceeding the 3GB maximum set for the heap. Using top I see:

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND    
8449 apache    18   0 19.6g 6.9g 5648 S  4.0 84.8  80:42.27 java             

So how could a JVM with a 3GB heap, 256MB of permgen, and even some overhead consume 6.9GB? Is it a bug in the JVM that would be fixed by upgrading to build #35? Am I missing something about what in Java could be using the extra memory? Just trying to see if anyone has seen this before.
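For reference, something like this minimal sketch (the class name is made up) reports what the JVM itself accounts for; neither figure includes thread stacks, mapped files, direct buffers, or other native allocations, so a big gap between these numbers and the RES column in top points at off-heap memory:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryReport {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();        // bounded by -Xms/-Xmx
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();  // permgen, code cache, etc.

            System.out.printf("heap:     used=%dM committed=%dM max=%dM%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            System.out.printf("non-heap: used=%dM committed=%dM max=%dM%n",
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20, nonHeap.getMax() >> 20);
        }
    }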

5
  • Native code libraries are not subject to either the heap or permgen limits (for memory allocated with malloc, say). Do you use any significant native libraries? Commented Sep 6, 2012 at 3:00
  • Which OS distribution are you using? Commented Sep 6, 2012 at 3:01
  • Nope. We use a bunch of Java libs but no native libs. Commented Sep 6, 2012 at 3:19
  • What is the actual Java heap size when the memory shoots up like this? Commented Sep 6, 2012 at 3:43
  • Since you're on Linux, use pmap to find out where the memory is actually going. Commented Sep 6, 2012 at 13:48

3 Answers

15

So how could a JVM with 3GB heap, 256MB permgen, and even some overhead consume 6.9GB?

Possible explanations include:

  • lots and lots of thread stacks,
  • memory-mapped files that are not being closed when they should be (see the sketch below),
  • some native code library using (possibly leaking) out-of-heap memory.

I would be inclined to blame the application before blaming the JVM.
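To make the second bullet concrete, here is a minimal sketch (the file path and sizes are arbitrary) of how a mapped file adds to the process's resident memory without counting against -Xmx; the mapping is only released when the buffer is eventually garbage collected or the process exits:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedExample {
        public static void main(String[] args) throws Exception {
            RandomAccessFile file = new RandomAccessFile("/tmp/big.dat", "rw");
            try {
                long size = 512L * 1024 * 1024;   // 512MB backing file
                file.setLength(size);
                FileChannel channel = file.getChannel();
                // The mapping consumes address space (and resident memory once the
                // pages are touched) entirely outside the Java heap; -Xmx does not
                // limit it in any way.
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
                for (int i = 0; i < buf.capacity(); i += 4096) {
                    buf.put(i, (byte) 1);         // touch each page so it becomes resident
                }
            } finally {
                file.close();
            }
        }
    }

Run something like this a few times without letting the buffers go, and RES in top climbs while the heap figures stay flat, which is exactly the pattern described in the question.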


5 Comments

That was my inclination as well, but I have never seen a Java process consume this much. We are not using any native libraries, so I'm not sure if that could be an issue. Don't memory-mapped files count against the heap? I will try to investigate the thread stacks and see how many there are.
"Don't memory mapped files count against heap?" No they don't. The mapped memory segments are allocated outside of the Java heap.
OK. It looks like thread stack size is controlled via -Xss, which we do not explicitly set. The docs say the default should be 1MB. Am I wrong, or does this mean we can rule this out as a possible issue? We have JMX configured, so I am going to see if that helps at all.
You can only rule thread stacks out if you know that you don't have a large number of threads. (And spawning a huge number of threads is something that an old, badly designed webapp might do. ~4000 x 1MB is ~4GB ...)
Just looking through the JMX stats for that time frame. Seeing ~65 threads, ~1GB heap used, ~100MB non-heap, but OS memory goes very high. So it's not the heap; now looking for hidden cases of memory-mapped files or native libs.
12

So, long story short, my initial reaction was correct: it was a bug in the JVM. We were using 1.6.0_21, and it turns out that we were experiencing the exact same error as outlined in https://confluence.atlassian.com/pages/viewpage.action?pageId=219023686. Upgrading to 1.6.0_37 fixed the problem, and we went from daily crashes to 2 weeks without a crash.

So while the sentiment of not immediately blaming the JVM is good policy, one should also not assume the JVM is bug free; like all software, it has the occasional bug. It also seems like good policy to keep things up to date.

Thanks for all the help on this one!

1 Comment

Yes, this shows the importance of keeping your JVM installs up to date. You were having problems with a Java install that was ~2 years (and 7 releases with security patches!) out of date.
2

http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/garbage_collect.html

Note that the JVM uses more memory than just the heap. For example Java methods, thread stacks and native handles are allocated in memory separate from the heap, as well as JVM internal data structures.

So if you have a lot of threads and a lot of native handles, the memory can exceed the heap limit. Are you sure this didn't happen before as well?
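If a large thread count is the suspect, a rough sketch like this (the 1MB-per-thread figure is only the documented default for -Xss, not a measurement) gives an order-of-magnitude estimate of stack memory, all of which lives outside the heap:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadFootprint {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            int live = threads.getThreadCount();
            long started = threads.getTotalStartedThreadCount();

            // Each live thread reserves its stack outside the Java heap, so the
            // estimate below is memory that -Xmx never sees.
            System.out.println("live threads:          " + live);
            System.out.println("threads started total: " + started);
            System.out.println("approx. stack memory:  " + live + " MB (at ~1MB per thread)");
        }
    }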

Also check out this: Java using more memory than the allocated memory

2 Comments

It did not happen before the 1.6 upgrade. However, there were other issues and other app changes as well, so I suspect there is indeed an app error, and we are going to look at the heap dump we just got. While I understand there could be some extra overhead, >3GB of overhead seems excessive. Could threads, headless mode, etc. really use that much?
This depends on the threads and especially their call stacks. If you use a lot of recursion in a lot of threads, this might very well add up to a lot of memory (which can't be cleared by the GC).
