Limiting process memory/CPU usage on Linux
I know we can adjust scheduling priority by using the nice command.
However, the man page doesn't say whether it will limit both CPU and memory or only CPU (and in any case, it can't be used to specify absolute limits).
Is there a way to run a process and limit its memory usage to say "X" MB and CPU usage to say "Y" MHz in Linux?
You might want to investigate cgroups as well as the older (obsolete) ulimit.
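As a quick orientation sketch, you can inspect what is already in place: ulimit -a prints the current per-process limits, and listing /sys/fs/cgroup/ shows which cgroup controllers (v1) or unified-hierarchy files (v2) your system has mounted:
$ ulimit -a
$ ls /sys/fs/cgroup/
On a cgroup v2 system the second command shows files such as cgroup.controllers instead of one directory per controller.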
Linux-specific answer:
For historical systems, running ulimit -m $LIMIT_IN_KB would have been the correct answer. Nowadays you have to use cgroups and cgexec or systemd-run.
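For the cgexec route, a minimal sketch (assuming cgroup v1 and the cgroup-tools / libcgroup package; "mygroup" and "./myprogram" are placeholder names) looks like this:
$ sudo cgcreate -g memory,cpu:/mygroup
$ sudo cgset -r memory.limit_in_bytes=100M mygroup
$ sudo cgset -r cpu.cfs_quota_us=10000 -r cpu.cfs_period_us=100000 mygroup
$ cgexec -g memory,cpu:mygroup ./myprogram
The quota/period pair above caps the group at roughly 10% of one core; note that memory.limit_in_bytes and the cpu.cfs_* knobs are cgroup v1 names and do not exist under the unified (v2) hierarchy.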
However, for systems that are still transitioning to systemd, there does not seem to be any solution that does not require setting up pre-made configuration for each limit you wish to use. This is because such systems (e.g. Debian/Ubuntu) still use "hybrid hierarchy cgroups", and systemd supports setting memory limits only with the newer "unified hierarchy cgroups". If your Linux distribution is already running systemd with unified hierarchy cgroups, then running a given user-mode process with specific limits should work like this:
systemd-run --user --pipe -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]
or
systemd-run --user --scope -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]
For possible parameters, see man systemd.resource-control.
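To double-check which limits actually got applied, you can give the transient unit a name and query it (a sketch; "limit-test" is an arbitrary unit name and sleep 60 is just a stand-in workload):
$ systemd-run --user --scope --unit=limit-test -p MemoryMax=42M -p CPUWeight=10 sleep 60
$ systemctl --user show limit-test.scope -p MemoryMax -p CPUWeight
Run the second command from another terminal while the scope is alive; if the limit was accepted, MemoryMax should report the value in bytes rather than infinity.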
If I've understood correctly, setting CPUWeight instructs the kernel how much CPU time to give the process when the CPU is fully loaded. It defaults to 100, and lower values mean less CPU time when multiple processes compete for it. It will not limit CPU usage while the CPU is utilized less than 100%, which is usually a good thing. If you truly want to force the process to use less than a single core even when the machine is otherwise idle, you can set e.g. CPUQuota=10% to cap the process at 10% of a single core. If you set CPUQuota=200%, the process can use up to 2 cores on average (but without CPU pinning it can spread that time over more CPUs).
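For example, to see CPUQuota in action you could run something like this (stress has to be installed; -c 1 starts one busy-loop worker and -t 30 stops it after 30 seconds):
$ systemd-run --user --scope -p CPUQuota=10% stress -c 1 -t 30
While it runs, top should show the stress worker held at roughly 10% of one core instead of 100%.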
Additional information:
- https://wiki.debian.org/LXC/CGroupV2
- https://github.com/systemd/systemd/issues/6895
- https://github.com/systemd/systemd/issues/3388
- https://www.grant.pizza/blog/understanding-cgroups/
- https://unix.stackexchange.com/questions/44985/limit-memory-usage-for-a-single-linux-process
Update (year 2021): It seems that
systemd-run --user --pty -p MemoryMax=42M -p CPUWeight=10 ...
should work if you're running a version of systemd that includes a fix for bug https://github.com/systemd/systemd/issues/9512 – in practice, you need the fixes listed here: https://github.com/systemd/systemd/pull/10894
If your system is lacking those fixes, the command
systemd-run --user --pty -p MemoryMax=42M -p CPUWeight=10 ...
appears to work but the memory limits are not actually enforced.
In reality, it seems that Ubuntu 20.04 LTS does not contain the required fixes. The following should fail:
$ systemd-run --user --pty -p MemoryMax=42M -p MemorySwapMax=50M -p CPUWeight=10 stress -t 10 --vm-keep --vm-bytes 10m -m 20
because the stress command is expected to require slightly more than 200 MB of RAM while the memory limits are set lower. According to bug https://github.com/systemd/systemd/issues/10581, Poettering says this should work if the distro is using cgroups v2, whatever that means in practice.
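One way to check which hierarchy your system is running (my understanding, not an authoritative test) is to look at the filesystem type mounted at /sys/fs/cgroup:
$ stat -fc %T /sys/fs/cgroup/
It prints cgroup2fs on a pure unified-hierarchy (cgroup v2) system, and tmpfs on legacy or hybrid setups.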
I don't know of a distro that implements user-mode cgroup limits correctly. As a result, you need root to configure the limits.
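As a root-level counterpart, the same properties can be handed to the system manager instead of the user manager (a sketch; whether the limits are enforced still depends on the cgroup hierarchy in use, as discussed above):
$ sudo systemd-run --scope -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]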
In theory, you should be able to use ulimit for this purpose. However, I've personally never gotten it to work.
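For completeness, the usual attempt looks like this (a sketch; ulimit applies to the shell and its children, so it is set inside a subshell, and ./myprogram is a placeholder):
$ bash -c 'ulimit -v 102400; exec ./myprogram'
Note that ulimit -m (resident set size) is silently ignored by modern Linux kernels, which may be why it never seemed to work. ulimit -v (102400 KB here, i.e. about 100 MB) is enforced, but it limits virtual address space rather than actual RAM, and there is no ulimit equivalent for CPU speed (ulimit -t only caps total CPU seconds).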