c1 and c2 compiler threads are created by the Java Virtual Machine to optimize your application's performance. Occasionally, these threads can consume a high amount of CPU. In this post, let's learn a little more about c1 and c2 compiler threads and how to address their high CPU consumption.
After reading this post, terminologies like HotSpot JIT, c1 compiler threads, c2 compiler threads, and code cache may not terrify you (as they used to terrify me in the past).
What is the HotSpot JIT compiler?
Your application may have millions of lines of code. However, only a small subset of that code gets executed again and again, and this small subset (also known as the 'hotspot') largely determines your application's performance. At runtime, the JVM uses its JIT (Just-In-Time) compiler to optimize this hotspot code. Most of the time, the code written by application developers is not optimal, so the JVM's JIT compiler optimizes the developers' code for better performance. To do this optimization, the JIT compiler uses c1 and c2 compiler threads.
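To make 'hotspot' concrete, here is a minimal sketch (all class and method names are illustrative). The tiny method below is invoked millions of times, so it crosses the JIT's invocation thresholds and gets compiled to optimized native code, while the rest of the program stays interpreted or lightly compiled:

```java
public class HotspotDemo {
    // This small method becomes "hot" because the loop below calls it
    // repeatedly; HotSpot's JIT compiles it after enough invocations.
    static long addDigits(long n) {
        long sum = 0;
        while (n > 0) {
            sum += n % 10;
            n /= 10;
        }
        return sum;
    }

    public static void main(String[] args) {
        long total = 0;
        // Millions of calls push addDigits() past the JIT compilation
        // thresholds, so c1 (and later c2) will compile it.
        for (long i = 0; i < 2_000_000; i++) {
            total += addDigits(i);
        }
        System.out.println("total = " + total);
    }
}
```

Run it with `-XX:+PrintCompilation` (discussed later in this post) and you will see `HotspotDemo::addDigits` appear in the compilation log.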
What is the Code Cache?
The memory area that the JIT compiler uses for this code compilation is called the 'Code Cache'. This area resides outside of the JVM heap and metaspace. To learn about the different JVM memory regions, you may refer to this video clip.
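You can observe the code cache at runtime through the standard `java.lang.management` API. The sketch below (class name is mine) lists the JVM's code-related memory pools; on JDK 9+ with a segmented code cache you will typically see pools such as "CodeHeap 'non-nmethods'" and "CodeHeap 'non-profiled nmethods'", while JDK 8 reports a single "Code Cache" pool:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCachePools {
    public static void main(String[] args) {
        boolean foundCodePool = false;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            // Code cache pools contain "Code" in their names, unlike
            // the heap and metaspace pools.
            if (name.contains("Code")) {
                foundCodePool = true;
                System.out.println(name
                        + " used=" + pool.getUsage().getUsed()
                        + " max=" + pool.getUsage().getMax());
            }
        }
        System.out.println("foundCodePool = " + foundCodePool);
    }
}
```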
What is the difference between c1 & c2 compiler threads?
During the early days of Java, there were two types of JIT compilers: the Client JIT compiler and the Server JIT compiler.
Based on the type of JIT compiler you wanted to use, the appropriate JDK had to be downloaded and installed. If you were building a desktop application, you downloaded a JDK with the 'client' JIT compiler; if you were building a server application, you downloaded a JDK with the 'server' JIT compiler.
The Client JIT compiler starts compiling the code as soon as the application starts. The Server JIT compiler, on the other hand, observes the code execution for quite some time and, based on the execution knowledge it gains, then starts doing the JIT compilation. Even though server JIT compilation is slower, the code it produces is far superior and more performant than the code produced by the Client JIT compiler.
Today, modern JDKs ship with both the Client and Server JIT compilers, and both try to optimize the application code. During application startup, code is compiled using the Client JIT compiler. Later, as more knowledge is gained, code is recompiled using the Server JIT compiler. This is called tiered compilation in the JVM.
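You can confirm that tiered compilation is active from inside your application using the standard `CompilationMXBean` (class name below is mine). On a tiered HotSpot JVM the compiler name typically reads something like "HotSpot 64-Bit Tiered Compilers":

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitInfo {
    public static void main(String[] args) {
        // The platform MXBean describing the JIT compiler in use.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            // Approximate cumulative time the compiler threads have
            // spent compiling so far.
            System.out.println("Total compilation time (ms): "
                    + jit.getTotalCompilationTime());
        }
    }
}
```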
JDK developers internally called the Client and Server JIT compilers the c1 and c2 compilers. Thus, the threads used by the Client JIT compiler are called c1 compiler threads, and the threads used by the Server JIT compiler are called c2 compiler threads.
c1, c2 compiler threads default count
The default number of c1 and c2 compiler threads is determined by the number of CPUs available to the container/device on which your application is running. Here is the table that summarizes the default number of c1 and c2 compiler threads:
| CPUs | c1 threads | c2 threads |
|------|------------|------------|
Fig: Default c1, c2 compiler thread count
You can change the compiler thread count by passing the '-XX:CICompilerCount=N' JVM argument to your application. One third of the count you specify in '-XX:CICompilerCount' will be allocated to c1 compiler threads; the remaining threads will be allocated to c2 compiler threads. Say you configure 6 threads (i.e., '-XX:CICompilerCount=6'): then 2 threads will be allocated to c1 compiler threads and 4 threads will be allocated to c2 compiler threads.
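The split described above can be sketched as simple arithmetic (method names here are illustrative, not a JVM API): one third of the configured count goes to c1, and the remainder goes to c2.

```java
public class CompilerCountSplit {
    // One third of -XX:CICompilerCount goes to c1 compiler threads.
    static int c1Threads(int ciCompilerCount) {
        return ciCompilerCount / 3;
    }

    // The remaining threads go to c2 compiler threads.
    static int c2Threads(int ciCompilerCount) {
        return ciCompilerCount - c1Threads(ciCompilerCount);
    }

    public static void main(String[] args) {
        int count = 6; // e.g. -XX:CICompilerCount=6
        System.out.println("c1 = " + c1Threads(count)
                + ", c2 = " + c2Threads(count)); // prints "c1 = 2, c2 = 4"
    }
}
```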
c1, c2 compiler thread High CPU consumption – potential solutions
Sometimes you might see the c1 and c2 compiler threads consuming a high amount of CPU. When this type of problem surfaces, below are the potential solutions to address it:
1. Do Nothing (if intermittent)
If the compiler threads' CPU consumption is only intermittently high (not continuously high) and it doesn't hurt your application's performance, then you can consider ignoring the problem.
2. Turn off tiered compilation
Pass the '-XX:-TieredCompilation' JVM argument to your application. This argument turns off tiered compilation, which reduces the amount of JIT compilation work and thus the compiler threads' CPU consumption. However, as a side effect, your application's performance can degrade.
3. Turn off c2 compilation alone
If the CPU spike is caused by the c2 compiler threads alone, you can turn off c2 compilation by itself: pass '-XX:TieredStopAtLevel=3'. When you pass the '-XX:TieredStopAtLevel' argument with a value of 3, only c1 compilation is enabled and c2 compilation is disabled.
There are four tiers of compilation (level 0 is interpreted code):
| Level | Description |
|-------|-------------|
| 1 | Simple c1 compiled code |
| 2 | Limited c1 compiled code |
| 3 | Full c1 compiled code |
| 4 | c2 compiled code |
When you specify '-XX:TieredStopAtLevel=3', code will be compiled only up to the 'Full c1 compiled code' level; c2 compilation will not happen.
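You can verify which level your JVM is actually stopping at via the HotSpot-specific diagnostic MXBean (`com.sun.management`, so this works on HotSpot JVMs only; class name below is mine). With the default tiered setup this prints 4; when you run with '-XX:TieredStopAtLevel=3' it prints 3, meaning level-4 (c2) compilation is disabled:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class TieredLevelCheck {
    public static void main(String[] args) {
        // HotSpot-only diagnostic bean: lets us read the effective
        // value of any JVM flag at runtime.
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        String level = bean.getVMOption("TieredStopAtLevel").getValue();
        System.out.println("TieredStopAtLevel = " + level);
    }
}
```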
4. Study the compilation process
You can pass the '-XX:+PrintCompilation' JVM argument to your application. It will print details about your application's compilation process, which will help you tune the compilation process further.
5. Increase the code cache size
The code that the HotSpot JIT compiler compiles/optimizes is stored in the code cache area of JVM memory. The default size of this code cache area is 240 MB. You can increase it by passing '-XX:ReservedCodeCacheSize=N' to your application. Say you want to make it 512 MB; you can specify it like this: '-XX:ReservedCodeCacheSize=512m'. Increasing the code cache size has the potential to reduce the compiler threads' CPU consumption.
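Before tuning, it helps to know the code cache size your JVM is actually running with. A small sketch using the same HotSpot-specific diagnostic bean (class name is mine; HotSpot JVMs only):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class CodeCacheSizeCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Effective ReservedCodeCacheSize in bytes; raise it with e.g.
        // -XX:ReservedCodeCacheSize=512m if the cache is filling up.
        long bytes = Long.parseLong(
                bean.getVMOption("ReservedCodeCacheSize").getValue());
        System.out.println("ReservedCodeCacheSize = "
                + (bytes / (1024 * 1024)) + " MB");
    }
}
```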
6. Increase the compiler thread count
You can consider increasing the number of compiler threads using the '-XX:CICompilerCount' argument. You can capture a thread dump and upload it to tools like fastThread, where you can see the number of c2 compiler threads. If you see only a few c2 compiler threads and you have more CPU processors/cores available, you can increase the compiler thread count by specifying, say, the '-XX:CICompilerCount=8' argument.
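A quick sanity check before raising the count: compare the CPUs visible to the JVM with the effective CICompilerCount. The sketch below (class name is mine) uses the HotSpot-specific diagnostic bean, so it works on HotSpot JVMs only; raising the count well beyond the available cores is unlikely to help.

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class CompilerThreadBudget {
    public static void main(String[] args) {
        // CPUs the JVM can see (respects container CPU limits on
        // recent JDKs).
        int cpus = Runtime.getRuntime().availableProcessors();

        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        int ciCount = Integer.parseInt(
                bean.getVMOption("CICompilerCount").getValue());

        System.out.println("CPUs = " + cpus
                + ", CICompilerCount = " + ciCount);
    }
}
```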