We encountered the dreaded
`java.lang.OutOfMemoryError: Java heap space` error.
How often do we hit this error and stare at it in bewilderment?
What does it mean?
It means a huge number of objects are being created and not all of them are becoming eligible for GC. These stray objects clutter the heap space until the JVM can no longer allocate memory, and the error is thrown. This is basically a memory leak.
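To make that concrete, here is a minimal sketch (class and field names are hypothetical, not from any real application) of the classic pattern behind such leaks: objects parked in a long-lived static collection that is never cleared, so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A static collection lives as long as the class is loaded,
    // so everything added here is never eligible for GC.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void remember(byte[] data) {
        CACHE.add(data); // added, but never removed
    }

    public static void main(String[] args) {
        // Each iteration pins another 1 MB on the heap; in a real
        // application an unbounded loop like this eventually ends in
        // java.lang.OutOfMemoryError: Java heap space.
        for (int i = 0; i < 10; i++) {
            remember(new byte[1024 * 1024]);
        }
        System.out.println("objects pinned: " + CACHE.size());
    }
}
```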
Temporary Solution
Pass a larger maximum heap size to the JVM, e.g. in an Ant build file: `<jvmarg value="-Xmx892M"/>`. You can keep increasing this value (256 MB – 1024 MB or more), but that will only delay the point at which the error is thrown.
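After raising `-Xmx`, you can confirm from inside the application that the setting actually took effect (a quick sanity check, not part of the original troubleshooting steps):

```java
public class HeapSettings {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reports the -Xmx limit the JVM is running with;
        // totalMemory() is what the JVM has claimed so far.
        long maxMb = rt.maxMemory() / (1024 * 1024);
        long totalMb = rt.totalMemory() / (1024 * 1024);
        System.out.println("max heap (MB): " + maxMb);
        System.out.println("current heap (MB): " + totalMb);
    }
}
```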
Tools
A few tools I used for analysing this issue:
- MAT – Eclipse Memory Analyzer Tool (also recommended – has some great features to zero in on the suspect / culprit)
- IBM Heap Analyser
- JConsole
- YourKit – trial version for 14 days (you had better solve your issue within 14 days :)) [Highly recommended – it allows monitoring and a huge number of other features]
- JMAP
Analysis and Solution
- I used `jmap` on the Unix machine and dumped the heap to a file called heap.bin.
- Then I opened heap.bin using the free tool IBM Heap Analyser (ha36.jar). Let me know if you need it.
- Now you can view different details of the heap, such as duplicate classes, which objects are being created in large numbers, or objects ranked by size.
- There is also an option to get the leak suspects; it will list a few of them. It will almost certainly list some open source frameworks like ActiveMQ or Hibernate, which means you may need to upgrade these to the latest production release from the framework providers.
- Concentrate on any leak suspect in your own classes or packages, or on any JDK class used heavily in your code.
- I was able to find a wrongly declared and used variable related to a `ThreadPoolExecutor` which was causing a lot of hanging objects. (The key is to find the link between a leak suspect and the way it relates to your code.)
- After fixing this I took regular heap snapshots, analysed them, and found that the leak suspect was gone.
- After this fix my application ran for almost double the time, but I soon hit the dreaded `java.lang.OutOfMemoryError: PermGen space`, about which I shall write in the next post.
- Another way of confirming the success of your fix is to take regular heap snapshots; in a healthy application the sizes eventually level off, say:
heap1 = 10 MB, heap2 = 50 MB, heap3 = 100 MB, heap4 = 80 MB, heap5 ≈ 80 MB.
In an application plagued by a memory leak, the pattern keeps climbing:
heap1 = 10 MB, heap2 = 50 MB, heap3 = 100 MB, heap4 = 120 MB, heap5 = 140 MB...
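The post doesn't show the offending `ThreadPoolExecutor` code, but a common mistake that produces exactly this kind of hanging-object buildup (a hypothetical reconstruction, not the author's actual variable) is creating a new executor per request and never shutting it down:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorLeakDemo {

    // LEAKY: a new pool per call. Each pool keeps alive non-daemon
    // worker threads and an internal work queue, and none of it is
    // released because shutdown() is never called.
    static void handleRequestLeaky(Runnable task) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.submit(task);
        // missing: pool.shutdown();
    }

    // FIXED: one shared pool for the whole application, shut down
    // once when the application stops.
    static final ExecutorService SHARED = Executors.newFixedThreadPool(4);

    static void handleRequest(Runnable task) {
        SHARED.submit(task);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            handleRequest(() -> { /* work */ });
        }
        SHARED.shutdown();
        System.out.println("done");
    }
}
```

With the leaky version, a heap analyser would show ever-growing counts of worker threads and queue nodes, which is the kind of link between a suspect and your own code the post describes.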
PS: JConsole can be used to watch heap growth, PermGen growth, class loading, thread counts etc. at run time. All the graphs can have peaks and dips but should even out in due course; if they keep going north then you know for sure you are going to have trouble.
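If you'd rather sample the same numbers JConsole shows from inside the JVM, the standard `java.lang.management` API exposes them (a small sketch; the sample count and output format are arbitrary choices):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatch {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Take a few samples; in a real application you would log these
        // periodically and look for a curve that never flattens out.
        for (int i = 0; i < 3; i++) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.println("used=" + heap.getUsed() / (1024 * 1024)
                    + " MB, committed=" + heap.getCommitted() / (1024 * 1024)
                    + " MB, max=" + heap.getMax() / (1024 * 1024) + " MB");
        }
    }
}
```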
A few more useful jmap commands and JVM flags
- jmap -permstat 12345 > permstats_2.txt
- jmap -histo:live 12345 > histo_live.txt
- jmap -histo 12345 > histo.txt
- -XX:-TraceClassLoading: trace loading of classes.
- -XX:-TraceClassLoadingPreorder: trace all classes loaded in order referenced (not loaded). (Introduced in 1.4.2.)
- -XX:-TraceClassResolution: trace constant pool resolutions. (Introduced in 1.4.2.)
- -XX:-TraceClassUnloading: trace unloading of classes.