Linux – Understanding Java memory behavior in Docker

docker, google-kubernetes-engine, java, linux, memory

Our current Kubernetes cluster running in Google Container Engine (GKE) consists of n1-standard-1 machine types (1 virtual CPU and 3.75 GB of memory) running Debian GNU/Linux 7.9 (wheezy) which is maintained by Google. Due to increased load and memory usage of our services we need to upgrade our nodes to a larger machine type. While trying this out in our test cluster we've experienced something that (to us) seems quite strange.

The memory consumed by the JVM application when deployed to a Google node seems to be proportional to the number of cores available on the node. Even though we set the JVM max heap (-Xmx) to 128 MB, it consumes about 250 MB on a 1-core machine (this is understandable, since the JVM consumes more memory than the heap limit due to GC, the JVM itself, etc.), but it consumes about 700 MB on a 2-core machine (n1-standard-2) and about 1.4 GB on a 4-core machine (n1-standard-4). The only thing that differs is the machine type; the very same Docker image and configuration are used.
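As background (worth verifying on a live process with java -XX:+PrintFlagsFinal rather than taking on faith): when no -Xmx setting actually reaches the JVM, Java 8's ergonomics pick a default max heap of roughly a quarter of physical RAM, so the ceiling grows with the machine. A quick sketch for the n1-standard RAM sizes:

```shell
# Java 8 heap ergonomics: with no effective -Xmx, the default max heap is
# roughly physical RAM / 4. RAM sizes (GB) for n1-standard-1/2/4:
for ram in 3.75 7.5 15; do
  awk -v r="$ram" 'BEGIN { printf "RAM %5.2f GB -> default max heap ~ %.2f GB\n", r, r/4 }'
done
```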

For example if I SSH into a machine using a n1-standard-4 machine type and run sudo docker stats <container_name> I get this:

CONTAINER CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O
k8s.name  3.84%               1.11 GB / 1.611 GB   68.91%              0 B / 0 B 

When I run the same Docker image with the exact same (application) configuration locally (Mac OS X and docker-machine) I see:

CONTAINER CPU %               MEM USAGE / LIMIT    MEM %               NET I/O               BLOCK I/O
name      1.49%               236.6 MB / 1.044 GB  22.66%              25.86 kB / 14.84 kB   0 B / 0 B         

Which is much more in line with what I would expect given the -Xmx setting (for the record, I have 8 cores and 16 GB of memory). The same thing is confirmed when I run top -p <pid> on the GKE instance, which gives me a RES/RSS memory allocation of 1.1 to 1.4 GB.

The Docker image is defined like this:

FROM java:8u91-jre
EXPOSE 8080
EXPOSE 8081

ENV JAVA_TOOL_OPTIONS -Dfile.encoding=UTF-8

# Add jar
ADD uberjar.jar /data/uberjar.jar

CMD java -jar /data/uberjar.jar -Xmx128m -server

I've also tried adding:

ENV MALLOC_ARENA_MAX 4

which I've seen recommended in several threads such as this one, but it doesn't seem to make any difference whatsoever. I've also tried changing to another Java base image as well as using Alpine Linux, but this doesn't seem to change things either.

My local Docker version is 1.11.1 and the Docker version in Kubernetes/GKE is 1.9.1. The Kubernetes version (if it matters) is v1.2.4.

What's also interesting is that if we deploy multiple instances of the pod/container to the same machine/node, some instances will allocate much less memory. For example, the first three might allocate 1.1-1.4 GB of memory each, but the ten succeeding containers only allocate about 250 MB each, which is approximately what I would expect every instance to allocate. The problem is that if we reach the memory limit of the machine, the first three instances (those allocating 1.1 GB of memory) never seem to release their allocated memory. If they released memory when the machine came under increased pressure I wouldn't worry about this, but since they retain the memory allocation even when the machine is loaded, it becomes an issue: it prevents other containers from being scheduled on this machine, and resources are thus wasted.

Questions:

  1. What could be causing this behavior? Is it a JVM issue? Docker issue? VM issue? Linux issue? Configuration issue? Or maybe a combination?
  2. What can I try to do to limit the memory allocation of the JVM in this case?

Best Answer

When you specify

CMD java -jar /data/uberjar.jar -Xmx128m -server

then the values that you intend to be JVM arguments (-Xmx128m -server) are passed as command-line arguments to the Main class in the .jar file. They're available in the args parameter of your public static void main(String... args) method.
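The ordering rule can be illustrated with any launcher that separates its own options from the program's. Here is a small shell sketch (using sh -c rather than Java, purely as an analogy): everything after the program name lands in the program's argument list, not in the interpreter's options.

```shell
# sh -c '<script>' <name> <args...>: tokens after <name> become the script's
# positional parameters ($@), just as tokens after the jar name become
# main()'s args rather than JVM options.
sh -c 'echo "program args: $@"' prog -Xmx128m -server
# prints: program args: -Xmx128m -server
```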

Note that this is also true if you're running a main class by name, rather than specifying an executable jar file.

To pass them as JVM arguments, rather than arguments to your program, you need to specify them before the -jar arg.
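So the CMD line from the Dockerfile in the question, with the flags moved ahead of -jar, would become:

```shell
# JVM options now precede -jar, so the JVM actually sees -Xmx128m and -server.
CMD java -Xmx128m -server -jar /data/uberjar.jar
```

The exec form (CMD ["java", "-Xmx128m", "-server", "-jar", "/data/uberjar.jar"]) also avoids the extra shell process, but the option ordering is the important part here.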

See https://stackoverflow.com/questions/5536476/passing-arguments-to-jar-which-is-required-by-java-interpreter