Performance Tuning - Other JVM Options
2.5 Other JVM Options

Please be aware, first and foremost, that as JVM updates come out, the behavior of the options we will discuss can change quite a bit. In one release, a particular option improves performance substantially; then, in an update release, that same option degrades performance. Do not take these as concrete recommendations so much as options to be aware of and to try out individually in your environment. The options are:

-XX:+CompressedOops
-XX:+AggressiveOpts
-XX:+DoEscapeAnalysis
-XX:+UseBiasedLocking
-XX:+EliminateLocks

First is -XX:+CompressedOops (compressed ordinary object pointers). If you read the section comparing 32-bit and 64-bit JVMs, you will find a brief description of what this option does. With JDK 1.6.0_20 and above (possibly even earlier updates), compressed oops is on by default, so it is no longer something you have to specify. We bring it up here because, if you are on a lower revision of the JDK, you can and should specify this option for heap sizes of 32GB and lower; it does not work for heaps larger than 32GB. One word of caution: with some earlier revisions of the JVM, we have seen the JVM core dump with a segmentation fault when combining compressed oops with large page memory. This is no longer the case, but there were bugs with this combination, so if you are on a lower JDK revision, either upgrade or check the bug listings to see whether your version is affected by this problem.

Next up is aggressive optimizations, -XX:+AggressiveOpts. This option turns on additional HotSpot JVM optimizations that have not yet been made defaults (this is why the option's behavior changes between releases: new optimizations may be added, and old ones may become defaults). If you use this option, you will need to retest your application with and without it any time you upgrade the JVM.

Finally, there are three locking-related options: -XX:+DoEscapeAnalysis, -XX:+UseBiasedLocking, and -XX:+EliminateLocks.
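Because these defaults shift between JDK updates, it can help to check what your particular JVM actually enables before experimenting. The sketch below (class name is ours) uses HotSpot's diagnostic MXBean to query a couple of the flags above; flags that have been removed from your JDK release simply report as unrecognized.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;
import java.lang.management.ManagementFactory;

public class CheckVmOptions {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Flags whose defaults (or very existence) have shifted across JDK releases
        for (String flag : new String[] {"UseCompressedOops", "UseBiasedLocking"}) {
            try {
                VMOption opt = hotspot.getVMOption(flag);
                // Origin tells you whether the value is a default or was set explicitly
                System.out.println(flag + " = " + opt.getValue()
                        + " (origin: " + opt.getOrigin() + ")");
            } catch (IllegalArgumentException e) {
                // The flag has been removed in this JDK release
                System.out.println(flag + " is not recognized by this JVM");
            }
        }
    }
}
```

The same information is available from the command line with java -XX:+PrintFlagsFinal -version, which dumps every flag and its effective value.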
In theory, these options are best used all together: they should work in tandem to eliminate locking overhead altogether (under certain circumstances), but in practice it's just not there yet. We have had success at times with both escape analysis and biased locking in isolation, but not together. In one test, we saw a 40% reduction in response times when using escape analysis; in another, we saw a significant throughput improvement of some 15+% when using biased locking. Your results will certainly vary, but these options are worth a try. We have never had any success with the lock-elimination option. But again, the purpose of these options is to eliminate locking overhead, which should improve concurrency; on today's multi-core hardware, that should improve throughput. These options, if they stay around, should be watched carefully.
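To see why escape analysis and lock elimination pair naturally, consider code that synchronizes on an object that never leaves the method. The class below is our own illustrative sketch (names are hypothetical): StringBuffer's methods are synchronized, but because the buffer is purely local, a JVM running with -XX:+DoEscapeAnalysis can prove the lock can never be contended and, with -XX:+EliminateLocks, remove it entirely. Whether the JIT actually elides the locks depends on your JVM release and flags; the code behaves the same either way.

```java
public class LockElisionSketch {

    // StringBuffer's append() is synchronized, but this buffer never
    // escapes the method, so escape analysis lets the JIT elide each lock.
    static String join(String[] parts) {
        StringBuffer sb = new StringBuffer(); // thread-local, never escapes
        for (String p : parts) {
            sb.append(p);                     // synchronized, but elidable
            sb.append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(join(new String[] {"locks", "elided", "here"}));
    }
}
```

In application code the simpler fix is to use the unsynchronized StringBuilder; the point here is that these JVM options aim to recover that cost automatically when the code still uses synchronized types.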