Preface

This book is the result of an amazing series of events.

In high school, back in the pre-Internet era, the authors used to hang out at the same bulletin board systems and found each other in a particularly geeky thread about math problems. Bulletin board friendship led to friendship in real life, as well as several collaborative software projects. Eventually, both authors went on to study at the Royal Institute of Technology (KTH) in Stockholm.

More friends were made at KTH, and a course in database systems in our third year brought enough people with a similar mindset together to achieve critical mass. The decision was made to form a consulting business named Appeal Software Solutions (the acronym A.S.S. seemed like a perfectly valid choice at the time). Several of us started to work alongside our studies and a certain percentage of our earnings was put away so that the business could be bootstrapped into a full-time occupation when everyone was out of university. Our long-term goal was always to work with product development, not consulting. However, at the time we did not know what the products would turn out to be.

In 1997, Joakim Dahlstedt, Fredrik Stridsman and Mattias Joëlson won a trip to one of the first JavaOne conferences by out-coding everyone in a Sun-sponsored competition for university students. For fun, they did it again the next year with the same result.

It all started when our three heroes noticed that between the two JavaOne conferences in 1997 and 1998, the presentation of Sun's adaptive virtual machine HotSpot remained virtually unchanged. HotSpot, it seemed at the time, was the answer to the Java performance problem. Java back then was mostly an interpreted language and several static compilers for Java were on the market, producing code that ran faster than bytecode, but that usually violated the language semantics in some fundamental way. As this book will stress again and again, the potential power of an adaptive runtime approach exceeds, by far, that of any ahead-of-time solution, but is harder to achieve.

Since there was no news about HotSpot in 1998, youthful hubris caused us to ask ourselves "How hard can it be? Let's make a better adaptive VM, and faster!" We had the right academic backgrounds and thought we knew in which direction to go. Even though it definitely was more of a challenge than we expected, we would still like to remind the reader that in 1998, Java on the server side was only just beginning to take off, J2EE hardly existed, and no one had ever heard of a JSP. The problem domain was indeed a lot smaller in 1998.

The original plan was to have a proof-of-concept implementation of our own JVM finished in a year, while running the consulting business at the same time to finance the JVM development. The JVM was originally christened "RockIT", a name that was rock 'n' roll, rock solid, and IT all at once. A leading "J" was later added for trademark reasons.

Naturally, after a few false starts, we needed to bring in venture capital. Explaining how to capitalize on an adaptive runtime, something the competition gave away their own free versions of, provided quite a challenge, not least because this was 1998 and investors had trouble understanding any venture not ultimately designed to either (1) send text messages with advertisements to cell phones or (2) start up a web-based mail-order company.

Eventually, venture capital was secured, and in early 2000, the first prototype of JRockit 1.0 went public. JRockit 1.0, besides being, as someone on the Internet put it, "very 1.0", made some headlines by being extremely fast at things like multi-threaded server applications. Further venture capital was acquired using this as leverage. The consulting business was broken out into a separate corporation, and Appeal Software Solutions was renamed Appeal Virtual Machines. Sales people were hired and we started negotiations with Sun for a Java license.

Thus, JRockit started taking up more and more of our time. In 2001, the remaining engineers working in the consulting business, which had also grown, were all finally absorbed into the full-time JVM project and the consulting company was mothballed. At this time we realized that we both knew exactly how to take JRockit to the next level and that our burn rate was too high. Management started looking for a suitor in the form of a larger company to marry.

In February 2002, BEA Systems acquired Appeal Virtual Machines, letting nervous venture capitalists sleep at night, and finally securing us the resources that we needed for a proper research and development lab. A good-sized server hall for testing was built, requiring reinforced floors and more electricity than was available in our building. For quite a while, there was a huge cable from a junction box on the street outside coming in through the server room window. After some time, we outgrew that lab as well and had to rent another site to host some of our servers.

As part of the BEA platform, JRockit matured considerably. During the first two years at BEA, many of the value-adds and key differentiators between JRockit and other Java solutions were invented, for example the framework that was later to become JRockit Mission Control. Several press releases, world-beating benchmark scores, and a virtualization platform quickly followed. With JRockit, BEA turned into one of the "big three" JVM vendors on the market, along with Sun and IBM, and a customer base of thousands of users developed. A celebration was in order when JRockit started generating revenue, first from the tools suite and later from the unparalleled GC performance provided by the JRockit Real Time product.

In 2008, BEA was acquired by Oracle, which caused some initial concerns, but JRockit and the JRockit team ended up getting a lot of attention and appreciation.

For many years now, JRockit has been running mission-critical applications all over the world. We are proud to have been part of the making of a piece of software with that kind of market penetration and importance. We are equally proud to have gone from a pre-alpha designed by six guys in a cramped office in the Old Town of Stockholm to a world-class product with a world-class product organization.

The contents of this book stem from more than a decade of our experience with adaptive runtimes in general, and with JRockit in particular. Much of the information in this book has, to our knowledge, never been published anywhere before.

We hope you will find it both useful and educational!

What this book covers

Chapter 1, Getting Started, This chapter introduces the JRockit JVM and JRockit Mission Control. It explains how to obtain the software and what the support matrix is for different platforms. We point out things to watch out for when migrating between JVMs from different vendors, and explain the versioning scheme for JRockit and JRockit Mission Control. We also give pointers to resources where further information and assistance can be found.

Chapter 2, Adaptive Code Generation, Code generation in an adaptive runtime is introduced. We explain why adaptive code generation is harder to do in a JVM than in a static environment, and also why it is potentially much more powerful. The concept of "gambling" for performance is introduced. We examine the JRockit code generation and optimization pipeline and walk through it with an example. Adaptive and classic code optimizations are discussed. Finally, we introduce various flags and directive files that can be used to control code generation in JRockit.
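
To give a flavor of the "gambling" idea before Chapter 2 develops it properly, here is a schematic Java sketch of our own (it is not taken from the book's examples): an adaptive runtime may bet that a virtual call site only ever sees a single implementation, inline it into the hot loop, and undo the optimization if a second implementation is loaded later.

    // Schematic example: an adaptive JIT may "gamble" that only one implementation
    // of Shape is ever loaded, inline area() directly into the loop, and fall back
    // (deoptimize) if another implementation later appears.
    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class HotLoop {
        static double totalArea(Shape[] shapes) {
            double sum = 0.0;
            for (Shape s : shapes) {
                sum += s.area();   // virtual call site, candidate for speculative inlining
            }
            return sum;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[100000];
            for (int i = 0; i < shapes.length; i++) {
                shapes[i] = new Circle(i % 10);
            }
            System.out.println(totalArea(shapes));
        }
    }

A static compiler cannot safely make this bet, since it can never know which classes will be loaded at runtime; an adaptive runtime can, because it is also able to revise its decision.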

Chapter 3, Adaptive Memory Management, Memory management in an adaptive runtime is introduced. We explain how a garbage collector works, both by looking at the concept of automatic memory management and by examining specific algorithms. Object allocation in a JVM is covered in some detail, as well as the meta-info needed for a garbage collector to do its work. The latter part of the chapter is dedicated to the most important Java APIs for controlling memory management. We also introduce the JRockit Real Time product, which can produce deterministic latencies in a Java application. Finally, flags for controlling the JRockit JVM memory management system are introduced.
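
As a taste of the kind of standard, non-JRockit-specific Java APIs touched on in the chapter, the following minimal sketch of ours queries heap usage through java.lang.Runtime and holds data through a SoftReference that the garbage collector is free to clear under memory pressure.

    import java.lang.ref.SoftReference;

    // Minimal sketch of two standard memory-related APIs: querying heap usage via
    // java.lang.Runtime, and holding data via a SoftReference that the garbage
    // collector may clear when memory runs low.
    public class MemoryApiSketch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap:   " + rt.maxMemory());
            System.out.println("total heap: " + rt.totalMemory());
            System.out.println("free heap:  " + rt.freeMemory());

            // A softly reachable cache entry: get() returns null if the GC reclaimed it.
            SoftReference<byte[]> cacheEntry = new SoftReference<byte[]>(new byte[1024 * 1024]);
            byte[] data = cacheEntry.get();
            System.out.println("entry still cached: " + (data != null));
        }
    }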

Chapter 4, Threads and Synchronization, Threads and synchronization are very important building blocks in Java and a JVM. We explain how these concepts work in the Java language and how they are implemented in the JVM. We talk about the need for a Java Memory Model and the intrinsic complexity it brings. Adaptive optimization based on runtime feedback is done here, as in all other areas of the JVM. A few important anti-patterns, such as double-checked locking, are introduced, along with common pitfalls in parallel programming. Finally, we discuss how to do lock profiling in JRockit and introduce flags that control the thread system.
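
As a preview of the double-checked locking anti-pattern discussed in the chapter, here is the classic broken form in plain Java (a standard illustration, not code from the book):

    // Broken double-checked locking: without volatile, a thread may observe a
    // non-null 'instance' reference before the constructor's writes are visible,
    // because the Java Memory Model permits such reordering.
    public class Singleton {
        private static Singleton instance;          // note: missing 'volatile'

        private Singleton() { }

        public static Singleton getInstance() {
            if (instance == null) {                 // first (unsynchronized) check
                synchronized (Singleton.class) {
                    if (instance == null) {         // second (synchronized) check
                        instance = new Singleton(); // publication may be reordered
                    }
                }
            }
            return instance;
        }
    }

Declaring the instance field volatile, or using an initialization-on-demand holder class, is the usual remedy under the revised Java 5 memory model.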

Chapter 5, Benchmarking and Tuning, The relevance of benchmarking and the importance of performance goals and metrics are discussed. We explain how to create an appropriate benchmark for a particular problem set. Some industrial benchmarks for Java are introduced. Finally, we discuss in detail how to modify application and JVM behavior based on benchmark feedback. Extensive examples of useful command-line flags for the JRockit JVM are given.
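
As a small and deliberately naive illustration of why warmup matters on an adaptive runtime (the chapter treats proper benchmarking methodology in far more depth), consider the following sketch of ours:

    // Minimal (and naive) microbenchmark sketch: the warmup loop gives an adaptive
    // JVM time to profile and optimize workload() before the timed run, one of the
    // pitfalls discussed in the chapter.
    public class WarmupBenchmark {
        static long workload(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += (long) i * i;
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10000; i++) {   // warmup: let the JIT do its work
                workload(10000);
            }
            long start = System.nanoTime();
            long result = workload(10000000);   // timed run on (hopefully) optimized code
            long elapsed = System.nanoTime() - start;
            System.out.println("result=" + result + ", elapsed ns=" + elapsed);
        }
    }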

Chapter 6, JRockit Mission Control, The JRockit Mission Control tools suite is introduced. Startup and configuration details for different setups are given. We explain how to run JRockit Mission Control in Eclipse, along with tips on how to configure JRockit to run Eclipse itself. The different tools are introduced and common terminology is established. Various ways to enable JRockit Mission Control to access a remotely running JRockit, together with troubleshooting tips, are provided.

Chapter 7, The Management Console, This chapter is about the Management Console component in JRockit Mission Control. We introduce the concept of diagnostic commands and online monitoring of a JVM instance. We explain how trigger rules can be set, so that notifications can be given upon certain events. Finally, we show how to extend the Management Console with custom components.

Chapter 8, The Runtime Analyzer, The JRockit Runtime Analyzer (JRA) is introduced. The JRockit Runtime Analyzer is an on-demand profiling framework that produces detailed recordings about the JVM and the application it is running. The recorded profile can later be analyzed offline, using the JRA Mission Control plugin. Recorded data includes profiling of methods and locks, as well as garbage collection information, optimization decisions, object statistics, and latency events. You will learn how to detect some common problems in a JRA recording and how the latency analyzer works.

Chapter 9, The Flight Recorder, The JRockit Flight Recorder has superseded JRA in newer versions of the JRockit Mission Control suite. This chapter explains the features that have been added to facilitate even more verbose runtime recordings. Differences in functionality and GUI are covered.

Chapter 10, The Memory Leak Detector, This chapter introduces the JRockit Memory Leak Detector, the final tool in the JRockit Mission Control tools suite. We explain the concept of a memory leak in a garbage collected language and discuss several use cases for the Memory Leak Detector. Not only can it be used to find unintentional object retention in a Java application, but it also works as a generic heap analyzer. Some of the internal implementation details are given, explaining why this tool also runs with a very low overhead.

Chapter 11, JRCMD, The command-line tool JRCMD is introduced. JRCMD enables a user to interact with all JVMs that are running on a particular machine and to issue them diagnostic commands. The chapter has the form of a reference guide and explains the most important available diagnostic commands. A diagnostic command can be used to examine or modify the state of a running JRockit JVM.

Chapter 12, Using the JRockit Management APIs, This chapter explains how to programmatically access some of the functionality in the JRockit JVM. This is the way the JRockit Mission Control suite does it. The APIs JMAPI and JMXMAPI are introduced. While they are not fully officially supported, understanding how they work can provide several insights into the inner mechanisms of the JVM. We encourage you to experiment with your own setup.
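
JMAPI and JMXMAPI themselves are covered in the chapter; as a neutral starting point, the following sketch of ours uses only the standard java.lang.management platform MBeans, which are available on any compliant JVM, including JRockit.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Programmatic JVM monitoring through the standard platform MBeans
    // (not the JRockit-specific JMAPI/JMXMAPI).
    public class PlatformMBeanSketch {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            System.out.println("heap used:      " + heap.getUsed());
            System.out.println("heap committed: " + heap.getCommitted());
            System.out.println("heap max:       " + heap.getMax());
        }
    }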

Chapter 13, JRockit Virtual Edition, We explain virtualization in a modern "cloud-based" environment. We introduce the product JRockit Virtual Edition. Removing the OS layer from a virtualized Java setup is less problematic than one might think. It can also help get rid of some of the runtime overhead that is typically associated with virtualization. We go on to explain how this can potentially reduce Java virtualization overhead to levels not achievable even on physical hardware.
