
cgroups v2 (control groups version 2) is a powerful Linux kernel mechanism that allows you to isolate and limit system resources such as CPU, memory, and disk I/O for groups of processes.
Using cgroups v2, it is possible to achieve stable and predictable application behavior — especially in containerized environments like Docker and Kubernetes.
In this article, we take a deep dive into how cgroups v2 works and how it can be used to control resources at the process level, enabling more efficient and isolated workloads.
cgroups v2 is the second generation of the Linux control group mechanism, which was originally introduced in 2007; v2 was declared stable in kernel 4.5 (2016).
Compared to version 1, cgroups v2 provides a cleaner architecture, a unified hierarchy, and more consistent resource management semantics.
The v2 model improves how CPU, memory, disk, and other resources are controlled by exposing a simpler and more expressive kernel API.
cgroups v2 is especially important in containerized environments, where each container effectively runs as its own group of processes. By isolating resources at this level, the system can ensure that workloads do not interfere with one another.
cgroups v2 is organized as a directory hierarchy mounted at:
/sys/fs/cgroup
Each directory represents a group of processes, and resource limits are applied by writing values to specific files within that directory.
Resource management is handled by controllers, each responsible for a specific type of resource:
CPU — controls processor time allocation
Memory — controls RAM usage
IO — controls disk read/write throughput
Pids — limits the number of processes in a group
Once a process is placed inside a cgroup, the kernel automatically enforces the limits defined by these controllers.
Each cgroup is represented as a directory inside /sys/fs/cgroup.
Example: creating a new cgroup and enabling controllers:
cd /sys/fs/cgroup
mkdir my_group
echo "+cpu +memory +io +pids" > cgroup.subtree_control
cd my_group
echo $$ > cgroup.procs
In this example, the current shell process is added to my_group, and all child processes will inherit the group’s limits.
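The same setup can be scripted. Below is a minimal Python sketch of those steps; the helper names (subtree_control_line, create_cgroup) are illustrative rather than any standard API, and actually writing under /sys/fs/cgroup requires root on a cgroup v2 system:

```python
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def subtree_control_line(controllers):
    """Build the string written to cgroup.subtree_control
    to enable controllers for child cgroups."""
    return " ".join(f"+{c}" for c in controllers)

def create_cgroup(name, controllers, pid):
    """Enable controllers in the root cgroup, create a child
    cgroup, and move `pid` into it (requires root)."""
    (CGROUP_ROOT / "cgroup.subtree_control").write_text(
        subtree_control_line(controllers))
    group = CGROUP_ROOT / name
    group.mkdir(exist_ok=True)
    (group / "cgroup.procs").write_text(str(pid))

# The control line matches what the shell example echoes:
print(subtree_control_line(["cpu", "memory", "io", "pids"]))
# → +cpu +memory +io +pids
```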
CPU usage in cgroups v2 is controlled primarily via two parameters:
cpu.max — defines a hard CPU quota
cpu.weight — defines relative CPU priority
Example: limit a group to 50% of a single CPU core:
echo "50000 100000" > /sys/fs/cgroup/my_group/cpu.max
This configuration means the group can use 50 ms of CPU time per 100 ms period.
To remove the quota entirely, write max:
echo "max" > /sys/fs/cgroup/my_group/cpu.max
Note that "max" is the default and lifts the limit rather than blocking CPU usage; for very low-priority background work, lower the group's cpu.weight instead.
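The "<quota> <period>" format is easy to get wrong by hand. A small sketch that derives the cpu.max value from a percentage of one core (cpu_max is a hypothetical helper, not a kernel or library function):

```python
def cpu_max(percent=None, period_us=100_000):
    """Value for cpu.max: '<quota> <period>' in microseconds,
    or 'max' for no limit. percent is relative to one core,
    so 200 means two full cores."""
    if percent is None:
        return "max"
    quota = int(period_us * percent / 100)
    return f"{quota} {period_us}"

print(cpu_max(50))   # 50% of one core → "50000 100000"
print(cpu_max())     # no limit → "max"
```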
The memory controller limits the maximum amount of RAM a group can consume.
Example: set a hard limit of 1 GiB:
echo $((1 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/my_group/memory.max
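memory.max takes a plain byte count, so a small converter for human-readable sizes is handy (parse_size is an illustrative helper; it assumes binary units, matching the shell arithmetic above):

```python
def parse_size(text):
    """Convert a human-readable size ('512M', '1G', '42')
    to bytes, using binary units (K = 1024)."""
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    text = text.strip().upper()
    if text[-1] in units:
        return int(float(text[:-1]) * units[text[-1]])
    return int(text)

print(parse_size("1G"))  # → 1073741824, same as $((1 * 1024 * 1024 * 1024))
```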
Memory usage and events can be observed via:
cat /sys/fs/cgroup/my_group/memory.current
cat /sys/fs/cgroup/my_group/memory.events
If a group exceeds its memory limit, memory reclaim is attempted. If that fails, OOM is triggered inside the cgroup, protecting the rest of the system.
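memory.events is a flat "key value" file with counters such as low, high, max, oom, and oom_kill, so detecting in-cgroup OOM kills is a matter of parsing it. A sketch (parse_kv is an illustrative helper; the sample string mimics the file's documented format):

```python
def parse_kv(text):
    """Parse a flat 'key value' cgroup file such as memory.events."""
    out = {}
    for line in text.splitlines():
        key, value = line.split()
        out[key] = int(value)
    return out

# In practice the text comes from /sys/fs/cgroup/my_group/memory.events.
sample = "low 0\nhigh 37\nmax 5\noom 1\noom_kill 1\n"
events = parse_kv(sample)
if events.get("oom_kill", 0) > 0:
    print("the kernel OOM-killed a task inside this cgroup")
```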
The io controller allows fine-grained control over disk throughput.
Example: limit read bandwidth on /dev/sda to 10 MiB/s:
echo "8:0 rbps=10485760" > /sys/fs/cgroup/my_group/io.max
Here:
8:0 is the device’s major:minor identifier
rbps limits read bandwidth in bytes per second
wbps can be used to limit write bandwidth
To list devices and their identifiers:
lsblk --output NAME,MAJ:MIN
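Since an io.max line packs several optional limits into one string, building it programmatically avoids format mistakes. A sketch (io_max_line is an illustrative helper; riops/wiops are the kernel's per-operation counterparts of rbps/wbps):

```python
def io_max_line(device, rbps=None, wbps=None, riops=None, wiops=None):
    """Build a line for io.max: '<major>:<minor> key=value ...'.
    Omitted limits stay unset, which the kernel treats as 'max'."""
    parts = [device]
    for key, value in (("rbps", rbps), ("wbps", wbps),
                       ("riops", riops), ("wiops", wiops)):
        if value is not None:
            parts.append(f"{key}={value}")
    return " ".join(parts)

print(io_max_line("8:0", rbps=10 * 1024 * 1024))
# → 8:0 rbps=10485760
```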
The pids controller prevents excessive process creation, protecting system stability.
Example: limit a group to 100 processes:
echo 100 > /sys/fs/cgroup/my_group/pids.max
This is especially useful for preventing fork bombs or runaway workloads.
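Before raising pids.max, it helps to know how close a group is to the limit. A sketch computing the headroom from the contents of pids.current and pids.max (pids_headroom is an illustrative helper; pids.max contains either a number or the literal string "max"):

```python
def pids_headroom(current_text, max_text):
    """Remaining process slots given the contents of pids.current
    and pids.max; returns None when pids.max is 'max' (unlimited)."""
    max_text = max_text.strip()
    if max_text == "max":
        return None
    return int(max_text) - int(current_text.strip())

print(pids_headroom("42\n", "100\n"))  # → 58
```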
Once limits are applied, monitoring behavior is critical.
Useful files include:
cat /sys/fs/cgroup/my_group/cpu.stat
cat /sys/fs/cgroup/my_group/memory.current
cat /sys/fs/cgroup/my_group/io.stat
These provide insight into how resources are being consumed and whether limits need adjustment.
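One practical signal in cpu.stat is throttling: nr_throttled counts the scheduling periods in which the group hit its cpu.max quota, out of nr_periods total. A sketch computing the throttle ratio from the file's contents (throttle_ratio is an illustrative helper; the sample string mimics the cpu.stat format):

```python
def throttle_ratio(cpu_stat_text):
    """Fraction of scheduling periods in which the group was
    throttled by its cpu.max quota, from cpu.stat contents."""
    stats = dict(line.split() for line in cpu_stat_text.splitlines())
    periods = int(stats.get("nr_periods", 0))
    throttled = int(stats.get("nr_throttled", 0))
    return throttled / periods if periods else 0.0

# In practice the text comes from /sys/fs/cgroup/my_group/cpu.stat.
sample = ("usage_usec 5000000\nuser_usec 4000000\nsystem_usec 1000000\n"
          "nr_periods 1000\nnr_throttled 250\nthrottled_usec 300000\n")
print(throttle_ratio(sample))  # → 0.25, i.e. throttled in 25% of periods
```

A persistently high ratio suggests the quota is too tight for the workload and cpu.max should be raised.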
cgroups v2 is a foundational Linux mechanism that provides precise and reliable resource control at the kernel level.
By limiting CPU time, memory usage, disk I/O, and process counts, cgroups v2 enables predictable performance and strong isolation — which is essential for modern multi-tenant and containerized systems.
In this article, we covered core concepts and practical configuration examples. However, cgroups v2 offers even deeper customization possibilities that become crucial in advanced production environments.
Future articles will explore real-world optimization strategies and tuning patterns using cgroups v2.
SysOpsMaster // Aleksandr M.