Linux – How to prevent Linux from freezing when out of memory


Today I (accidentally) ran some program on my Linux box that quickly used a lot of memory. My system froze and became unresponsive, so I was unable to kill the offender.

How can I prevent this in the future? Can't it at least keep a responsive core or something running?

Best Answer

I'll bet that the system didn't actually "freeze" (in the sense that the kernel hung), but rather was just very unresponsive. Chances are it was just swapping very hard, causing interactive performance and system throughput to drop like a stone.

You could turn off swap, but that just changes the problem from poor performance to OOM-killed processes (and all the fun that causes), along with decreased performance due to less available disk cache.

Alternatively, you could use per-process resource limits (commonly referred to as rlimit and/or ulimit) to prevent a single process from taking a ridiculous amount of memory and causing swapping. But that just pushes you into entertaining territory: processes dying at inconvenient moments because they wanted a little more memory than the system was willing to give them.

If you knew you were going to do something likely to cause massive memory usage, you could keep a shell pinned in RAM with mlockall(); that'd be the closest thing to "keep a responsive core" you're likely to get, because the problem isn't that the CPU is overutilised, it's that everything you need has been paged out. One caveat: memory locks are dropped across an execve(), so a simple wrapper that calls mlockall() and then exec's your shell won't work as-is; the lock has to be established inside the shell process itself.

Personally, I subscribe to the "don't do stupid things" method of resource control. If you've got root, you can do all sorts of damage to a system, and so doing anything that you don't know the likely results of is a risky business.