Oracle JRockit Book

This book is the result of an amazing series of events. In high school, back in the pre-Internet era, the authors used to hang out at the same bulletin board systems and found each other in a particularly geeky thread about math problems. Bulletin board friendship led to friendship in real life, as well as several collaborative software projects. Eventually, both authors went on to study at the Royal Institute of Technology (KTH) in Stockholm.
More friends were made at KTH, and a course in database systems in our third year brought enough people with a similar mindset together to achieve critical mass. The decision was made to form a consulting business named Appeal Software Solutions (the acronym A.S.S. seemed like a perfectly valid choice at the time). Several of us started to work alongside our studies and a certain percentage of our earnings was put away so that the business could be bootstrapped into a full-time occupation when everyone was out of university. Our long-term goal was always to work with product development, not consulting. However, at the time we did not know what the products would turn out to be.
In 1997, Joakim Dahlstedt, Fredrik Stridsman and Mattias Joelson won a trip to one of the first JavaOne conferences by out-coding everyone in a Sun sponsored competition for university students. For fun, they did it again the next year with the same result.
It all started when our three heroes noticed that between the two JavaOne conferences in 1997 and 1998, the presentation of Sun’s adaptive virtual machine HotSpot remained virtually unchanged. HotSpot, it seemed at the time, was the answer to the Java performance problem. Java back then was mostly an interpreted language and several static compilers for Java were on the market, producing code that ran faster than bytecode, but that usually violated the language semantics in some fundamental way.
As this book will stress again and again, the potential power of an adaptive runtime approach exceeds, by far, that of any ahead-of-time solution, but is harder to achieve. Since there was no news about HotSpot in 1998, youthful hubris caused us to ask ourselves “How hard can it be? Let’s make a better adaptive VM, and faster!” We had the right academic backgrounds and thought we knew in which direction to go. Even though it definitely was more of a challenge than we expected, we would still like to remind the reader that in 1998, Java on the server side was only just beginning to take off, J2EE hardly existed, and no one had ever heard of a JSP. The problem domain was indeed a lot smaller in 1998.
The original plan was to have a proof of concept implementation of our own JVM finished in a year, while running the consulting business at the same time to finance the JVM development. The JVM was originally christened “RockIT”, being both rock ‘n’ roll, rock solid and IT. A leading “J” was later added for trademark reasons.
Naturally, after a few false starts, we needed to bring in venture capital. Explaining how to capitalize on an adaptive runtime (of which the competitors gave away their own free versions) proved quite a challenge, not least because this was 1998, and investors had trouble understanding any venture not ultimately designed to either (1) send text messages with advertisements to cell phones or (2) start up a web-based mail order company.
Eventually, venture capital was secured and in early 2000, the first prototype of JRockit 1.0 went public. JRockit 1.0, besides being, as someone on the Internet put it “very 1.0”, made some headlines by being extremely fast at things like multi-threaded server applications. Further venture capital was acquired using this as leverage. The consulting business was broken out into a separate corporation and Appeal Software Solutions was renamed Appeal Virtual Machines. Sales people were hired and we started negotiations with Sun for a Java license.
Thus, JRockit started taking up more and more of our time. In 2001, the remaining engineers working in the consulting business, which had also grown, were all finally absorbed into the full-time JVM project and the consulting company was mothballed. At this time we realized that we both knew exactly how to take JRockit to the next level and that our burn rate was too high. Management started looking for a suitor in the form of a larger company to marry.
In February 2002, BEA Systems acquired Appeal Virtual Machines, letting nervous venture capitalists sleep at night, and finally securing us the resources that we needed for a proper research and development lab. A good-sized server hall for testing was built, requiring reinforced floors and more electricity than was available in our building. For quite a while, there was a huge cable from a junction box on the street outside coming in through the server room window. After some time, we outgrew that lab as well and had to rent another site to host some of our servers. As part of the BEA platform, JRockit matured considerably.
During the first two years at BEA, plenty of the value-adds and key differentiators between JRockit and other Java solutions were invented, for example the framework that was later to become JRockit Mission Control. Several press releases, world-beating benchmark scores, and a virtualization platform quickly followed. With JRockit, BEA turned into one of the “big three” JVM vendors on the market, along with Sun and IBM, and a customer base of thousands of users developed. A celebration was in order when JRockit started generating revenue, first from the tools suite and later from the unparalleled GC performance provided by the JRockit Real Time product.
In 2008, BEA was acquired by Oracle, which caused some initial concerns, but JRockit and the JRockit team ended up getting a lot of attention and appreciation.
For many years now, JRockit has been running mission-critical applications all over the world. We are proud to have been part of the making of a piece of software with that kind of market penetration and importance. We are equally proud to have gone from a pre-alpha designed by six guys in a cramped office in the Old Town of Stockholm to a world-class product with a world-class product organization.
The contents of this book stem from more than a decade of our experience with adaptive runtimes in general, and with JRockit in particular. Plenty of the information in this book has, to our knowledge, never been published anywhere before.
We hope you will find it both useful and educational!
What This Book Covers
Chapter 1: Getting Started. This chapter introduces the JRockit JVM and JRockit Mission Control. It explains how to obtain the software and what the support matrix is for different platforms. We point out things to watch out for when migrating between JVMs from different vendors, and explain the versioning scheme for JRockit and JRockit Mission Control. We also give pointers to resources where further information and assistance can be found.
Chapter 2: Adaptive Code Generation. Code generation in an adaptive runtime is introduced. We explain why adaptive code generation is both harder to do in a JVM than in a static environment as well as why it is potentially much more powerful. The concept of “gambling” for performance is introduced. We examine the JRockit code generation and optimization pipeline and walk through it with an example. Adaptive and classic code optimizations are discussed. Finally, we introduce various flags and directive files that can be used to control code generation in JRockit.
Chapter 3: Adaptive Memory Management. Memory management in an adaptive runtime is introduced. We explain how a garbage collector works, both by looking at the concept of automatic memory management as well as at specific algorithms. Object allocation in a JVM is covered in some detail, as well as the meta-info needed for a garbage collector to do its work. The latter part of the chapter is dedicated to the most important Java APIs for controlling memory management. We also introduce the JRockit Real Time product, which can produce deterministic latencies in a Java application. Finally, flags for controlling the JRockit JVM memory management system are introduced.
Chapter 4: Threads and Synchronization. Threads and synchronization are very important building blocks in Java and a JVM. We explain how these concepts work in the Java language and how they are implemented in the JVM. We talk about the need for a Java Memory Model and the intrinsic complexity it brings. Adaptive optimization based on runtime feedback is done here as well as in all other areas of the JVM. A few important anti-patterns such as double-checked locking are introduced, along with common pitfalls in parallel programming. Finally we discuss how to do lock profiling in JRockit and introduce flags that control the thread system.
Chapter 5: Benchmarking and Tuning. The relevance of benchmarking and the importance of performance goals and metrics is discussed. We explain how to create an appropriate benchmark for a particular problem set. Some industrial benchmarks for Java are introduced. Finally, we discuss in detail how to modify application and JVM behavior based on benchmark feedback. Extensive examples of useful command-line flags for the JRockit JVM are given.
Chapter 6: JRockit Mission Control. The JRockit Mission Control tools suite is introduced. Startup and configuration details for different setups are given. We explain how to run JRockit Mission Control in Eclipse, along with tips on how to configure JRockit to run Eclipse itself. The different tools are introduced and common terminology is established. Various ways to enable JRockit Mission Control to access a remotely running JRockit, together with trouble-shooting tips, are provided.
Chapter 7: The Management Console. This chapter is about the Management Console component in JRockit Mission Control. We introduce the concept of diagnostic commands and online monitoring of a JVM instance. We explain how trigger rules can be set, so that notifications can be given upon certain events. Finally, we show how to extend the Management Console with custom components.
Chapter 8: The Runtime Analyzer. The JRockit Runtime Analyzer (JRA) is introduced. The JRockit Runtime Analyzer is an on-demand profiling framework that produces detailed recordings about the JVM and the application it is running. The recorded profile can later be analyzed offline, using the JRA Mission Control plugin. Recorded data includes profiling of methods and locks, as well as garbage collection information, optimization decisions, object statistics, and latency events. You will learn how to detect some common problems in a JRA recording and how the latency analyzer works.
Chapter 9: The Flight Recorder. The JRockit Flight Recorder has superseded JRA in newer versions of the JRockit Mission Control suite. This chapter explains the features that have been added that facilitate even more verbose runtime recordings. Differences in functionality and GUI are covered.
Chapter 10: The Memory Leak Detector. This chapter introduces the JRockit Memory Leak Detector, the final tool in the JRockit Mission Control tools suite. We explain the concept of a memory leak in a garbage collected language and discuss several use cases for the Memory Leak Detector. Not only can it be used to find unintentional object retention in a Java application, but it also works as a generic heap analyzer. Some of the internal implementation details are given, explaining why this tool also runs with a very low overhead.
Chapter 11: JRCMD. The command-line tool JRCMD is introduced. JRCMD enables a user to interact with all JVMs that are running on a particular machine and to issue them diagnostic commands. The chapter has the form of a reference guide and explains the most important available diagnostic commands. A diagnostic command can be used to examine or modify the state of a running JRockit JVM.
Chapter 12: Using the JRockit Management APIs. This chapter explains how to programmatically access some of the functionality in the JRockit JVM. This is the way the JRockit Mission Control suite does it. The APIs JMAPI and JMXMAPI are introduced. While they are not fully officially supported, several insights can be gained about the inner mechanisms of the JVM by understanding how they work. We encourage you to experiment with your own setup.
Chapter 13: JRockit Virtual Edition. We explain virtualization in a modern “cloud-based” environment. We introduce the product JRockit Virtual Edition. Removing the OS layer from a virtualized Java setup is less problematic than one might think. It can also help get rid of some of the runtime overhead that is typically associated with virtualization. We go on to explain how this can potentially reduce Java virtualization overhead to levels not achievable even on physical hardware.
The Memory Leak Detector
As described in the chapter on memory management, the Java runtime provides a simplified memory model for the programmer. The developer does not need to reserve memory from the operating system for storing data, nor does he need to worry about returning the memory once the data is no longer in use.
Working with a garbage collected language could easily lead to the hasty conclusion that resource management is a thing of the past, and that memory leaks are impossible. Nothing could be further from the truth. In fact, memory leaks are so common in Java production systems that many IT departments have surrendered. Recurring scheduled restarts of Java production systems are now all too common.
In this chapter, you will learn:
- What we mean by a Java memory leak
- How to detect a memory leak
- How to find the cause of a memory leak using the JRockit Memory Leak Detector
A Java memory leak
Whenever allocated memory is no longer in use in a program, it should be returned to the system. In a garbage collected language such as Java, quite contrary to static languages such as C, the developer is free from the burden of doing this explicitly. However, regardless of paradigm, whenever allocated memory that is no longer in use is not returned to the system, we get the dreaded memory leak. Eventually, enough memory leaks in a program will cause it to run out of memory and break.
Memory leaks in static languages
In static languages, memory management may be even more complex than just recognizing the need to explicitly free allocated memory. We must also know when it is possible to deallocate memory without breaking other parts of the application. In the absence of automatic memory management, this can sometimes be difficult. For example, let’s say there is a service from which address records can be retrieved. An address is stored as a data structure in memory for easy access. If modules A, B, and C use this address service, they may all concurrently reference the same address structure for a record.
If one of the modules decides to free the memory of the record once it is done, all the other modules will fail and the program will crash. Consequently, we need a firm allocation and deallocation discipline, possibly combined with some mechanism to let the service know once every module is done with the address record. Until this is the case, it cannot be explicitly freed. As has been previously discussed, one approach is to manually implement some sort of reference counting in the record itself to ensure that it can be reclaimed once all modules are finished with it. This may in turn require synchronization and will add complexity to the program. To put it simply, sometimes, in order to achieve proper memory hygiene in static languages, the programmer may have to implement code that behaves almost like a garbage collector.
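To make this concrete, a minimal sketch of such manual reference counting might look as follows. We write it in Java purely for notational familiarity; the AddressRecord class and its methods are hypothetical, and in a static language like C, release would actually free the memory rather than merely dropping data:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical shared record with manual reference counting.
class AddressRecord {
    private final AtomicInteger refCount = new AtomicInteger(1);
    String street;

    // A module calls acquire() before keeping a reference to the record.
    void acquire() {
        refCount.incrementAndGet();
    }

    // ...and release() when it is done with it. The module performing
    // the last release is responsible for reclaiming the record.
    void release() {
        if (refCount.decrementAndGet() == 0) {
            // In C, this is where free() would be called. In Java, we
            // can only drop our data and let the garbage collector work.
            street = null;
        }
    }
}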
Memory leaks in garbage collected languages
In Java, or any garbage collected language, this complexity goes away. The programmer is free to create objects and the garbage collector is responsible for reclaiming them. In our hypothetical program, once the address record is no longer in use, the garbage collector can reclaim its memory. However, even with automatic memory management, there can still be memory leaks. This is the case if references to objects that are no longer used in the program are still kept alive.
The authors once heard of a memory leak in Java being referred to as an unintentional object retention. This is a pretty good name. The program is keeping a reference to an object that should not be referenced anymore. There are many different situations where this can occur.
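A minimal, contrived sketch of unintentional object retention (our own toy program, not from the book's code bundle): objects added to the static list are never used again, but they remain strongly reachable, so the garbage collector can never reclaim them.

import java.util.ArrayList;
import java.util.List;

public class RetentionDemo {
    // Static root: everything reachable from here survives every GC.
    private static final List<byte[]> RETAINED = new ArrayList<byte[]>();

    public static void main(String[] args) {
        while (true) {
            // Allocated, never used again, but never removed either.
            RETAINED.add(new byte[1024]);
        }
    }
}

Run with a small heap, for example with -Xmx16m, and the program quickly terminates with an OutOfMemoryError.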
Perhaps the leaked object has been put in a cache, but never removed from the cache when the object is no longer in use. If you, as a developer, do not have full control over an object’s life cycle, you should probably use a weak reference-based approach. As has previously been discussed, the java.util.WeakHashMap class is ideal for caches.
Be aware that weak references are not a one-size-fits-all answer to getting rid of memory leaks in caches. Sometimes, developers misuse weak collections, for instance, by putting values in a WeakHashMap that indirectly reference their keys.
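The following sketch (with class names invented for the example) shows the misuse: because each value holds a strong reference back to its key, the keys never become weakly reachable, and the “weak” cache retains every entry.

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    static final class Key {
        final String id;
        Key(String id) { this.id = id; }
    }

    static final class CachedValue {
        final Key key; // Strong reference back to the key: this is the bug.
        CachedValue(Key key) { this.key = key; }
    }

    public static void main(String[] args) {
        Map<Key, CachedValue> cache = new WeakHashMap<Key, CachedValue>();
        for (int i = 0; i < 100000; i++) {
            Key key = new Key("entry-" + i);
            // The map holds the value strongly, and the value holds the
            // key strongly, so the weak reference to the key is moot.
            cache.put(key, new CachedValue(key));
        }
        System.gc();
        // Prints 100000: nothing was ever eligible for collection.
        System.out.println("Entries still in cache: " + cache.size());
    }
}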
In application containers, such as a J2EE server, where multiple classloaders are used, special care must be taken so that classes are not dependency injected into some framework and then forgotten about. The symptom would typically show up as every re-deployment of the application leaking memory.
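A hedged sketch of how this can happen, with made-up class names: a registry in the framework, loaded by the server classloader and surviving redeploys, holds on to objects from the application, which are loaded by a per-deployment classloader. Each retained object pins its class, its classloader, and thereby every class that classloader loaded.

import java.util.ArrayList;
import java.util.List;

// Loaded by the server classloader; survives application redeploys.
public class FrameworkRegistry {
    private static final List<Object> LISTENERS = new ArrayList<Object>();

    public static void register(Object listener) {
        LISTENERS.add(listener);
    }

    // Applications must call this on undeploy. If they forget, each
    // redeploy leaks another listener together with its classloader
    // and all the classes that classloader has loaded.
    public static void unregister(Object listener) {
        LISTENERS.remove(listener);
    }
}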
Detecting a Java memory leak
It is all too common to find out about a memory leak by the JVM stopping due to an OutOfMemoryError. Before releasing a Java-based product, it should generally be tested for memory leaks. The standard use cases should be run for some duration, and the live set should be measured to see that no memory is leaking. In a good test setup, this is automated and tests are performed at regular intervals.
We got overconfident and failed to heed our own advice in JRockit Mission Control 4.0.0. Normally, we use the Memory Leak Detector to check that editors are reclaimed properly in JRockit Mission Control during end testing. This testing was previously done by the developers themselves, and had failed to find its way into the formal test specifications. As a consequence, we would leak an editor each time a console or a Memleak editor was opened. The problem was resolved, of course, using the Memory Leak Detector.
A memory leak in Java can typically be detected by using the Management Console to look at the live set attribute. It is important to know that a live set increase over a shorter period of time does not necessarily have to be indicative of a memory leak. It could be the case that the load of the Java application has changed, that the application is serving more users than before, or any other reason that may trigger the need to use more memory. However, if the trend is consistent, there is very likely a problem that should be investigated.
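In an automated test setup, the same trend can be sampled programmatically through the standard java.lang.management API. The following is a rough sketch only; heap usage sampled this way is a coarser signal than the live set attribute JRockit exposes, and the explicit GC request is a simplification:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class LiveSetSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long previous = -1;
        for (int sample = 0; sample < 60; sample++) {
            memory.gc(); // Request a collection so that used heap ~ live set.
            long used = memory.getHeapMemoryUsage().getUsed();
            if (previous >= 0) {
                // A consistently positive delta over many samples is the
                // kind of trend that warrants further investigation.
                System.out.println("used=" + used + " delta=" + (used - previous));
            }
            previous = used;
            Thread.sleep(10000); // One data point every ten seconds.
        }
    }
}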
There are primarily two different ways of doing detailed heap analysis:
- Online heap analysis, using the JRockit Memory Leak Detector
- Offline heap analysis from a heap dump
For online analysis, trend analysis data is collected by piggybacking on the garbage collector. This is virtually without overhead since the mark phase of a GC already needs to traverse all live objects on the heap. The resulting heap graph is all the data we need to do a proper trend analysis for object allocation.
The heap dump format used by JRockit is the same as produced by the Java Virtual Machine Tool Interface (JVMTI) based heap profiler HPROF, included with the JDK. Consequently, the dumps produced by JRockit can be analyzed in all tools supporting the HPROF format.
Memleak technology
The JRockit Mission Control Memory Leak Detector, or Memleak for short, is a dynamic tool that can be attached to a running JRockit instance. Memleak can be used to track how heap memory usage in the Java runtime changes over time for each type (class) in the system. It can also find out which types have instances pointing to a certain other type, or to find out which instances are referring a certain other instance. Allocation tracing can be enabled to track allocations of a certain type of object. This all sounds complicated, but it is actually quite easy to use and supported by a rich graphical user interface. Before we show how to use it to resolve memory leaks, we need to discuss some of the architectural consequences of how Memleak is designed.
- Trend analysis is very cheap: Data is collected as part of the normal garbage collection mark phase. As mentioned, this is a surprisingly fast operation. When the tool is running, every normal garbage collection will collect the necessary data. In order to ensure timely data collection, the tool will also, by default, trigger a garbage collection every ten seconds if no normal garbage collection has taken place. To make the tool even less intrusive, this setting can be changed in the preferences.
- Regardless of client hardware, you will be able to do the analysis: Connecting to a server with a multi-gigabyte heap from a puny laptop is not a problem.
- Events and changes to the heap can be observed as they happen: This is both a strength and a weakness. It is very powerful to be able to interact with the application whilst observing it, for example to see which operation is responsible for certain behavior, or to introspect some object at the same time as performing operations on it. It also means that objects can become eligible for garbage collection as they are being studied. Then further operations involving the instances are impossible.
- No offline analysis is possible: This can be a problem if you want to get a second opinion on a memory leak from someone who can’t be readily given access to your production system. Fortunately, the R28 version of JRockit can do standard HPROF heap dumps that can be analyzed in other tools, such as Eclipse MAT, if required.
Note that HPROF dumps contain the contents of the heap. If the system from which the HPROF dump was generated contains sensitive data, that data will be readily accessible by anyone getting access to the dump. Be careful when sharing dumps.
Tracking down the leak
Finding the cause of memory leaks can be very tricky, and tracking down complex leaks frequently involves using several tools in conjunction. The application is somehow keeping references to objects that should no longer be in use. What’s worse, the place in the code where the leaked instance was allocated does not necessarily have to be co-located with the place in the code pertaining to the leak. We need to analyze the heap to find out what is going on.
To start Memleak, simply select the JVM to connect to in the JVM Browser and choose Memleak from the context menu.
Only one Memleak instance can be connected to any given JVM at a time.
In Memleak, the trend table can help detect even slow leaks. It does this by building a histogram by type (class), and by collecting data points about the number of instances of every type over time. A least squares approximation on the sizes over time is then calculated, and the corresponding growth rate in bytes per second is displayed.
In JRockit Mission Control 4.1.0, this algorithm will be a little bit more sophisticated, as it will also incorporate the correlation to the size of the live set over time. The types that have the highest tendency to grow as the live set is growing are more likely to be the ones causing a leak.
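For reference, the ordinary least squares slope over (time, occupied size) data points can be computed as in this sketch. It shows the textbook formula only; it is not JRockit's actual implementation:

// Least squares growth rate in bytes per second for one type, given
// sample timestamps (in seconds) and the heap space occupied by the
// type at those times (in bytes). Assumes at least two samples.
static double growthRate(double[] times, double[] sizes) {
    int n = times.length;
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (int i = 0; i < n; i++) {
        sumX += times[i];
        sumY += sizes[i];
        sumXY += times[i] * sizes[i];
        sumXX += times[i] * times[i];
    }
    // Slope b of the least squares line y = a + b * x.
    return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
}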
The trend table can usually be helpful in finding good candidates for memory leaks. In the trend table, classes with a high growth rate are colored red; higher color intensity means a higher growth rate. We can also see how many instances of the class there are, and how much memory they occupy.
In the program being analyzed in the following example, it would seem that char arrays are leaking. Not only are they colored deep red and at the top of the trend analysis table, signifying a suspected memory leak, but they also have one of the highest growth rates of any type in the system.
It would also seem, to a lesser extent, that classes related to the types Leak$DemoObject and Hashtable are leaking.
In total, we seem to be leaking about 7.5 KB per second.
(6.57 * 1,024 + 512 + 307 + 71 + 53 + 11) / 1,024 ≈ 7.5
The JVM was started with a maximum heap size of 256 MB, and the used live set was about 20 MB (the current size of the live set was checked with the Management Console).
(256 - 20) * 1,024 / 7.5 ≈ 32,222 seconds ≈ 537 minutes ≈ 9 hours

If left unchecked, this memory leak would, in about nine hours, result in an OutOfMemoryError that would take down the JVM and the application it is running.
This gives us plenty of time to find out who is holding on to references to the suspected leaking objects. To find out what is pointing to leaking char arrays, right click on the type in the trend table and click on Add to Type Graph, as shown in the following screenshot:
This will add the selected class to the Type Graph tab and automatically switch to that tab. The tab is not a type graph in the sense of an inheritance hierarchy, but rather a graph showing how instances of classes point to other classes. The Type Graph will appear with the selected class, as shown in the following screenshot:
Clicking on the little plus sign to the left of the class name will help us find out what other types are referring to this type. We call this expanding the node. Every click will expand another five classes, starting with the ones that leak the most memory first.
In the Type Graph, just like in the trend table, types that are growing over time will be colored red; the redder a node, the higher the leak rate.
As we, in this example, want to find out what is ultimately holding on to references to the character arrays, we expand the char[] node.
Expanding the char[] node reveals that there is only one other type (or rather, instances of that type) that also seems to be leaking and has references to char arrays: the inner class DemoObject of the conspicuously named Leak class. Expanding the Leak$DemoObject node until we don’t seem to be finding any more leaking types reveals that the application seems to be abusing some sort of Hashtable, as shown in the next screenshot:
The next step would be to find the particular instance of Hashtable that is being misused. This can be done in different ways. In this example, it would seem that the leaking of the char arrays is due to the leaking of the Leak$DemoObjects. We would therefore like to start by listing the Hashtable$Entry instances that point to Leak$DemoObject.
Classes declared inside other classes in Java, for example the Entry class in Hashtable, have the naming format OuterClass$InnerClass in the bytecode, and this is the way they show up in our profiling tools; in our example, Hashtable$Entry and Leak$DemoObject. This is because when inner (nested) classes were introduced in the Java language, Sun Microsystems didn’t want to change the JVM specification as well. To list instances that are part of a particular relationship, simply right click on the relation and select List Referring Instances, as shown in the following screenshot:
This brings up the instances view, to the left of the Memleak editor, where the instances pointing from Hashtable entries to demo objects are listed. An instance can be added to the instance graph by right clicking on the instance, and selecting Add to Instance Graph from the context menu. This will bring up a graph similar to the Type Graph, but this time showing the reference relationships between instances.
Once the Instance Graph is up, we need to find out what is keeping the instance alive. In other words, who is referring the instance, keeping it from being garbage collected? In previous versions of Memleak, this was sometimes a daunting task, especially when searching in large object hierarchies. As of JRockit Mission Control 4.0.0, there is a menu alternative for letting JRockit automatically look for the path back to the root referrer. Simply right click on the instance and click on Expand to Root, as shown in the next screenshot. This will expand the graph all the way back to the root.
As shown in the following screenshot, expanding to root for our example reveals that there is a thread named Thread-2 that holds on to an instance of the inner class DemoThread of the class Leak. In the DemoThread instance, there is a field named table that refers to a Hashtable containing our leaked DemoObject.
When running in Eclipse, it is possible to view the code that manipulates the table field, by selecting View Type Source from the context menu on the Leak$DemoThread nodes. In this example, we’d find a programming error:
for (int i = 0; i <= 100; i++) { // 101 iterations: 0 through 100 inclusive.
    put(total + i);
}
for (int i = 0; i < 100; i++) { // Only 100 iterations: 0 through 99.
    remove(total + i);
}
As an equals sign is missing from the second loop header, more objects are placed in the Hashtable than are removed from it. If we make sure that we call remove as many times as we call put, the memory leak would go away.
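The corrected loops, balancing puts and removes, would look as follows:

for (int i = 0; i <= 100; i++) {
    put(total + i);
}
for (int i = 0; i <= 100; i++) { // Now removes exactly as many entries as were put.
    remove(total + i);
}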
The complete examples for this chapter can be found in the code bundle that comes with this book.
To summarize, the textbook recipe for hunting down memory leaks is:
1. Find one of the leaking instances.
2. Find a path to the root referrer from the leaking instance.
3. Eliminate whatever is causing the reference to be kept alive.
4. If there still is a leak, start over from step 1.
Of course, finding an instance that is unnecessarily kept alive can be quite tricky. One way to home in on unwanted instances is to only look at instances participating in a certain reference relationship. In the previous example, we chose to look at char arrays that were only being pointed to by DemoObjects. Also, the most interesting relationships to look for are usually found where leaking types and non-leaking types meet. In the Type Graph for the example, we can see that once we expand beyond the Hashtable$Entry array, object growth rates are quite neutral. Thus, the leak is quite likely due to someone misusing a Hashtable.
It is common for collection types to be misused, thereby causing memory leaks. Many collections are implemented using arrays. If not dealt with, the memory leak will typically cause these arrays to grow larger and larger. Therefore, another way of quickly homing in on the offending instance is to list the largest arrays in the system. In the example, we can easily find the Hashtable holding on to the DemoObjects by running the leaking application for a while. Use the List Largest Arrays operation on the array of Hashtable entries, as shown in the next screenshot.
If all else fails, statistics will be on your side the longer you wait, as more and more heap space will be occupied by the leaking objects.
Both of the largest Hashtable$Entry arrays are leaking. Adding any one of them to the Instance Graph and expanding it to the root referrer will yield the same result, implicating the instance field table in the Leak$DemoThread class. This is illustrated in the following screenshot:
A look at classloader-related information
In our next example, there are actually three different classloaders running almost the same code: two with the memory leak and one that actually behaves well. This is to illustrate how things can look in an application server, where different versions of the same application can be running. In Memleak, just like with the other tools in JRockit Mission Control, the tables can be configured to show more information. To see classloader-related information in the table, edit the Table Settings as shown in the following screenshot:
Memleak will, by default, aggregate classes with the same name in the same row. To make Memleak differentiate between classes loaded by different classloaders, click on the Individually show each loaded class button.
In the next screenshot, the trend table is shown for all classes with names containing the string Demo. As can be seen, there are three classloaders involved, but only two of them are leaking instances of Leak$DemoObject.
The option of splitting the classes per classloader is also available in the Type Graph. The Type Graph can be configured to use a separate node for each loaded class, when expanding a node. Simply click on the Use a separate node for each loaded class icon in the Type Graph. Following is a screenshot showing the first expansion of the char[] node when using separate nodes for each class. The bracket after the class name contains the classloader ID.
It is possible to switch back to aggregating the nodes again by clicking on the Combine classes with same class name button. Note that the setting will not change the state of the currently visible nodes. Only nodes that are expanded after changing the setting are affected.
Interactive memory leak hunting
Another way of using the Memleak tool is to validate a hypothesis about memory management in an application. Such a hypothesis could for example be “when I remove all contacts from my contact list, no Contact objects should be left in the system”. Because of the interactive nature of the Memleak tool, this is a very powerful way of finding leaks, especially in an interactive application. A huge number of such scenarios can be tested without interruptions caused by, for example, dumping heaps to files. If done well and with enough systems knowledge, finding the leaks can be a very quick business.
For example, consider a simple address book application. The application is a self-contained Swing application implemented in a single class named AddressBook. The class contains a few inner classes, of which one is the representation of a contact: AddressBook$Contact. In the application, we can add and remove contacts in the address book. One hypothesis we may want to test is that we do not leak contacts.
The Memleak tool normally only shows types that occupy more than 0.1 percent of the heap, as the amount of data would otherwise, in the general case, be overwhelming. We are normally not interested in types not heavily involved in leaks, and as time passes, the interesting ones tend to occupy quite a lot of the heap anyway. However, most leaks usually only occupy a tiny fraction of the heap until the leaking application has run for quite some time. In order to detect memory leaks earlier, this setting can be changed to 0 so that all types are shown, regardless of their used heap space. This can be done in the preferences, as shown in the following screenshot:
We then filter out the classes related to the hypothesis that we want to test and watch how they behave while we run the application.
Remember from Chapter 7, The Management Console, that the filter boxes in JRockit Mission Control can use regular expressions by entering the prefix regexp.
In the following screenshot, three addresses have been removed from the AddressBook, but the number of Contact instances remain at the original eight:
Removing all of them will still leave all eight of the original AddressBook$Contact instances in the system. There is indeed a memory leak.
To get the Memleak tool to react faster to the changes on the heap, the trend refresh interval (shown in the preference screenshot earlier) can be lowered.
Now, as all the remaining instances are unintentionally retained, drilling down into any of them will be sufficient for tracking down the leak. Simply click on List all instances from the context menu in the trend table and then add any of the instances to the Instance Graph. The path to root referrer in the example reveals that the contacts are retained in some sort of index map named numberToContact. The developer of the application should be familiar with this structure and know where to look for it in the code. If we ensure that we remove the Contact objects from the index map as well as from the contact list, the leak will go away.
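A hedged reconstruction of what the offending code might look like (the AddressBook application is real, but this exact shape is our guess, for illustration only):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AddressBook {
    static final class Contact {
        final String name;
        final String number;
        Contact(String name, String number) {
            this.name = name;
            this.number = number;
        }
    }

    private final List<Contact> contacts = new ArrayList<Contact>();
    private final Map<String, Contact> numberToContact =
            new HashMap<String, Contact>();

    void add(Contact c) {
        contacts.add(c);
        numberToContact.put(c.number, c);
    }

    void remove(Contact c) {
        contacts.remove(c);
        // The leak: without the line below, removed contacts stay
        // strongly reachable through the index map.
        numberToContact.remove(c.number);
    }
}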
The recipe for interactively testing for memory leaks is:
- Formulate a hypothesis, such as “When I close my Eclipse PHP Editor, I expect the editor instance and the instances associated with it to go away”.
- Filter out the classes of interest in the trend table.
- See how they are freed and allocated as the hypothesis is tested.
- If a memory leak is found, it is usually quite easy to find a leaking instance and locate the problem by tracing the path to the root referrer.
The general purpose heap analyzer
Yet another way to use the Memleak tool is as a general purpose heap analyzer. The Types panel shows relationships between the types (classes) on the Java heap. It can also list the specific instances in such a relationship. In the next example, we’ve found a peculiar cycle in our Type Graph. We can see that there are instances of Hashtable entries that are actually pointing back to their Hashtable. To list just the instances of Hashtable$Entry pointing to Hashtable, we simply right click on the number in the reference relation (see the following screenshot), and select List referring instances.
We have now, with a few clicks, been able to list all the Hashtable instances in the system that contain Hashtables. It is also easy to determine exactly where they are located in the system. Simply select an instance, add it to the Instance Graph and trace the shortest path back to the root referrer. Doing this for the first instance will reveal that it is located in the com.sun.jmx.mbeanserver.RepositorySupport. Of course, having Hashtables that contain Hashtables is not a crime; this merely serves as an example of the versatility of the Memleak tool.
You need a 1.5-based JDK to see the Hashtables containing Hashtables for this example. In a 1.6-based JDK, the design has changed.
Any instance can be inspected in Memleak. Next, we inspect the instance of com.sun.jmx.mbeanserver.RepositorySupport to verify that it indeed contains Hashtable instances.
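To experiment with this kind of query yourself, on any JDK, a toy program that creates Hashtables containing (and even referring back to) other Hashtables is enough; attach Memleak to it and list the referring instances as described above. The class name here is made up:

import java.util.Hashtable;

public class NestedHashtables {
    public static void main(String[] args) throws InterruptedException {
        Hashtable<String, Object> outer = new Hashtable<String, Object>();
        outer.put("inner", new Hashtable<String, String>());
        outer.put("self", outer); // An entry pointing back to its own table.
        Thread.sleep(Long.MAX_VALUE); // Keep the JVM alive for inspection.
    }
}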
Allocation traces
The last major feature in Memleak to be discussed in this book is the ability to turn on allocation tracing for any given type. To, for instance, find out where the Leak$DemoObjects are being allocated in our previous example, simply right click on the type and then click on Trace Allocations. The example has been tailored to do allocations in the vicinity of the code that causes the actual leak (note that this is normally not the case).
As can be readily seen from the screenshot, we are invoking put more often than remove. If we are running Memleak from inside Eclipse, we can jump directly to the corresponding line in the Leak class by right clicking on the stack frame and then clicking on Open Method from the context menu.
Allocation traces can only be enabled for one type (class) at a time.
A word of caution: Enabling allocation traces for types with a high allocation pressure can introduce significant overhead. For example, it is, in general, a very bad idea to enable allocation traces for java.lang.Strings.
Troubleshooting Memleak
If you have trouble connecting to your JVM with Memleak, it is probably because Memleak requires an extra port. Unlike the other tools in the JRockit Mission Control suite, Memleak only initiates communication over JMX; it requires the internal Memory Leak Server (MLS) to be running in the JVM.
When starting Memleak, a run request is sent over JMX. The MLS will then be started and a communication port is returned. The client stops communicating over JMX after startup and instead uses the proprietary Memory Leak Protocol (MLP) over the returned communication port.
The MLS was built as a native server in JRockit, as the original idea was to be able to run the MLS when running out of Java heap, similar to the way that a heap dump can be triggered when running out of memory. We wanted to introduce a flag that would suspend the JVM on OutOfMemoryErrors and then launch the MLS. This was unfortunately never implemented.
It is possible to specify which port to use for MLS in the initial request over JMX. This can be set in the preferences, as shown in the following screenshot:
- There can only be one client connected to MLS at any given time
- When a client disconnects, the MLS will automatically shut down
Summary
In this chapter, we have shown how to use the JRockit Mission Control Memory Leak Detector to detect and find the root cause for Java memory leaks. We have also discussed the advantages and disadvantages of the Memory Leak Detector in various use cases.
It has been demonstrated how the Memory Leak Detector can be used to detect even quite slow memory leaks. We have also shown how the Memory Leak Detector can be used in an interactive manner to quickly test particular operations in an application that may be prone to memory leaks.
We have explained how the Memory Leak Detector can also be used as an interactive general purpose heap analyzer to both find relationships between different types on the heap, as well as for inspecting the contents of any instance on the heap.
Finally, we showed how to troubleshoot the most common problems associated with using the tool.