Put the Brakes on Docker Containers

Deploying new containers has become ridiculously easy (in a good way). Once containers are running, it is even easier to scale them. Doubling or even tripling the number of running containers can happen with a single keystroke, but can your container infrastructure handle it?

Think of cars driving on a freeway. The cars all have different shapes, sizes, and performance characteristics, but they are limited by the capacity of the road. Freeways are static in size and cannot shrink or grow based on demand (yet), so adding more vehicles eventually leads to a bottleneck. That is why we have laws in place to help the flow of traffic: no stopping, speed limits, and so on.

Container infrastructure is typically not as dynamic as the containers themselves, so its compute capacity can be stretched quite quickly. According to the Sysdig annual container survey, the industry averages ten containers per host, with some installations running up to 95 containers on a single host. Wow!

Even if the number of containers remains static, it is quite common to have some troublemakers in the mix as well. For example, some applications start using a bit more memory and CPU over time. Usually this is fine, but what if all the containers across your entire infrastructure start using more resources?

Limit your containers

Referring back to the freeway analogy: freeways have speed limits and rules in place to help maintain the flow of traffic. We can do the same with containers by setting resource limits. It is recommended to set limits for both CPU and memory, which ensures that a container is bound to the values we configure. If no limits are set, containers have free run of your host resources: by default, they have unlimited access to host memory and CPU and can consume as much as they can grab. This is dangerous, because if containers run out of control, the host can eventually run out of resources and start killing off processes, containers, and finally the Docker daemon itself.

Limit Docker Container Memory

Let’s have a look at how we can configure memory for a container. We will cover the two most important memory settings for a container: the memory limit and the memory reservation. Docker offers more memory configuration options as well.

The memory limit is set with the -m or --memory= flag and is the maximum amount of memory assigned to a single container. By setting this flag, we ensure that the container will not use more than the configured value.

The memory reservation is set with the --memory-reservation flag and is considered a soft limit. A soft limit acts more as a baseline: it is the amount of memory your container normally operates with, and it must always be smaller than the -m or --memory value. Docker tries to maintain the reservation while allowing memory to burst up to the hard memory limit.

docker run -m 1G --memory-reservation 750M nginx

In the above example, we have set a hard memory limit of 1GB and a reservation of 750MB. NGINX can use the 750MB and burst up to the hard limit of 1GB if needed. Note that the reservation is always smaller than the limit.
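To confirm the limits were actually applied, you can read them back from the container metadata. This is a sketch assuming a local Docker daemon; the container name limited-nginx is illustrative, and the inspected values are reported in bytes:

```shell
# Start NGINX detached with a 1GB hard limit and a 750MB reservation.
docker run -d --name limited-nginx -m 1G --memory-reservation 750M nginx

# Read the configured limits back from the container's HostConfig (bytes).
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemoryReservation}}' limited-nginx

# Clean up the test container.
docker rm -f limited-nginx
```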

Limit Docker Container CPU

Just like memory, the CPU is a traffic jam waiting to happen. That’s why it is essential to set some boundaries on containers to help control our highway. Docker offers a lot of CPU configuration options, but we will focus on the CPU limit and reservation, just as we did with memory.

The --cpus=<value> flag sets a hard limit on how much of the available CPU resources a container can use, across all of the host's CPUs. For example, if the host machine has two CPUs, setting --cpus="1" guarantees the container at most one CPU's worth of cycles.
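As a quick sketch (the container name and image are illustrative), capping a container at one and a half CPUs looks like this:

```shell
# Cap the container at 1.5 CPUs' worth of cycles across all host cores.
docker run -d --name capped-web --cpus="1.5" nginx
```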

We can also set a CPU reservation, but only when running Docker services or using a compose file. Otherwise, Docker recommends sticking to the --cpus= flag as the main flag to set.
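In a compose file deployed as a service (docker stack/swarm), a CPU reservation can be expressed alongside the limit under deploy → resources. This is a minimal sketch; the service name web is illustrative:

```yaml
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "1.0"       # hard cap: at most one CPU
        reservations:
          cpus: "0.5"       # soft baseline: half a CPU reserved
```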

Memory and CPU Limits in a Docker Compose File

How would resource limits look inside a docker-compose file? I have been running a big monitoring stack for quite some time now and have adjusted cAdvisor to the values below to help deal with the load of my servers. In a compose file, resource limits are set in the deploy → resources section.

Here is an excerpt from my Prometheus stack in which I limit cAdvisor:

  cadvisor:
    image: google/cadvisor
    networks:
      - prom_net
    restart: always
    deploy:
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

What we now know about Docker limits

Before configuring limits on your containers, it is wise to test and understand how much your containers actually need. Don’t push limits to production without testing. Docker limits can be set everywhere: docker run, compose files, and services/swarm. In each case you can both limit and reserve CPU and memory.

Where to start:

  • Start with small workloads which you can easily measure
  • Monitor your changes and adjust accordingly
  • Rinse and Repeat for other containers/services
  • Find out more about 56K.Cloud

    We love Cloud, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting, connect with us on Twitter or drop us an email at info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute or you have questions, please let us know!


    Brian Christner

    Brian Christner hails from Arizona but now resides in the Alps of Switzerland. Brian is a nominated member of the Docker Captain’s program and specializes in converting coffee into containers.
