Linux – How do ulimit settings impact Linux

linux, ulimit

Lately, I had an EAGAIN error with some async code, which made me take a closer look at ulimit settings. While I clearly understand certain limits, such as nofile, others are still quite confusing to me.

It's quite easy to find resources on how to set those, but I couldn't find any article explaining precisely what each setting is about and how that could impact the system.

The definitions taken from /etc/security/limits.conf are not really self-explanatory (see the sketch right after this list):

- core - limits the core file size (KB)
- data - max data size (KB)
- fsize - maximum filesize (KB)
- memlock - max locked-in-memory address space (KB)
- nofile - max number of open files
- rss - max resident set size (KB)
- stack - max stack size (KB)
- cpu - max CPU time (MIN)
- nproc - max number of processes
- as - address space limit (KB)
- maxlogins - max number of logins for this user
- maxsyslogins - max number of logins on the system
- priority - the priority to run user process with
- locks - max number of file locks the user can hold
- sigpending - max number of pending signals
- msgqueue - max memory used by POSIX message queues (bytes)
- nice - max nice priority allowed to raise to values: [-20, 19]
- rtprio - max realtime priority
- chroot - change root to directory (Debian-specific)
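
As far as I can tell, each of these entries maps onto one of the kernel's per-process resource limits, i.e. the RLIMIT_* constants described in getrlimit(2)/setrlimit(2). Here is a minimal C sketch that prints a few of them from within a process (the selection and the labels are mine):

/* rlim.c - print a few of the rlimits behind /etc/security/limits.conf
   build with: cc rlim.c -o rlim */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource) {
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%-8s soft=unlimited", name);
    else
        printf("%-8s soft=%llu", name, (unsigned long long)rl.rlim_cur);
    if (rl.rlim_max == RLIM_INFINITY)
        printf(" hard=unlimited\n");
    else
        printf(" hard=%llu\n", (unsigned long long)rl.rlim_max);
}

int main(void) {
    show("core",    RLIMIT_CORE);    /* core    -> core file size          */
    show("data",    RLIMIT_DATA);    /* data    -> data segment size       */
    show("fsize",   RLIMIT_FSIZE);   /* fsize   -> max file size           */
    show("memlock", RLIMIT_MEMLOCK); /* memlock -> locked-in-memory size   */
    show("nofile",  RLIMIT_NOFILE);  /* nofile  -> open file descriptors   */
    show("nproc",   RLIMIT_NPROC);   /* nproc   -> processes for this user */
    show("as",      RLIMIT_AS);      /* as      -> virtual address space   */
    return 0;
}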

So I'd be glad if someone could enlighten me on those rather important Linux settings!

The error I face is actually:

{ [Error: spawn mediainfo EAGAIN]
  code: 'EAGAIN',
  errno: 'EAGAIN',
  syscall: 'spawn mediainfo',
  path: 'mediainfo',
  spawnargs: 
   [ '--Output=XML',
     '/home/buzut/testMedia' ],
  cmd: 'mediainfo --Output=XML /home/buzut/testMedia' }

As per the definition on gnu.org:

An operation that would block was attempted on an object that has non-blocking mode selected. Trying the same operation again will block until some external condition makes it possible to read, write, or connect (whatever the operation).

I understand that an EAGAIN error refers to a resource that is temporarily unavailable. It wouldn't be wise to set all parameters to unlimited, so I would rather understand what each parameter implies, identify the one causing the block, and adjust accordingly (ulimit settings, my code, or both).
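
One way to see how a limit turns into EAGAIN is to lower one and watch fork(2) fail, since spawn ultimately goes through fork()/clone(), and fork(2) reports EAGAIN when the caller's RLIMIT_NPROC (the nproc / -u limit) is exceeded. This is only a sketch, not my Node code; the choice of RLIMIT_NPROC and the value 64 are arbitrary:

/* nproc_demo.c - run as a regular user (root bypasses the nproc check) */
#define _DEFAULT_SOURCE
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* lower the per-user process limit; it counts ALL of this user's
       processes, so even the first fork may fail if you already run
       more than 64 of them */
    struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
    if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("setrlimit(RLIMIT_NPROC)");
        return 1;
    }

    for (int i = 0; ; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            /* this EAGAIN is what bubbles up as "spawn ... EAGAIN" */
            printf("fork #%d failed: %s\n", i, strerror(errno));
            break;
        }
        if (pid == 0) {
            pause();        /* children sit idle, counting against nproc */
            _exit(0);
        }
    }

    /* clean up: signal our process group; the children terminate,
       the parent ignores it */
    signal(SIGTERM, SIG_IGN);
    kill(0, SIGTERM);
    while (wait(NULL) > 0)
        ;
    return 0;
}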

Here are my current limits:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127698
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 64000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127698
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Best Answer

I have done my homework and (almost) found out what each option does. I've also noticed that there are more options in /etc/security/limits.conf than appear in ulimit -a, so I've only documented the latter here. Of course, everyone is invited to enrich this answer!


  • data seg size (kbytes, -d)

    The maximum size of a process's data segment. A data segment is the portion of an object file, or the corresponding virtual address space of a program, that contains initialized static variables.

    https://en.wikipedia.org/wiki/Data_segment
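
    To see this limit in action, one can lower RLIMIT_DATA (the constant behind data / -d) and then try to grow the heap. A minimal sketch; the 2 MB and 16 MB figures are arbitrary:

    /* data_limit.c - grow the heap past a lowered RLIMIT_DATA */
    #define _DEFAULT_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit rl = { .rlim_cur = 2 * 1024 * 1024,     /* 2 MB soft */
                             .rlim_max = 2 * 1024 * 1024 };   /* 2 MB hard */
        if (setrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("setrlimit(RLIMIT_DATA)");
            return 1;
        }
        /* sbrk() grows the classic data segment; past the limit it fails */
        if (sbrk(16 * 1024 * 1024) == (void *)-1)
            printf("sbrk failed as expected: %s\n", strerror(errno)); /* ENOMEM */
        else
            printf("sbrk succeeded (limit not reached)\n");
        return 0;
    }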



  • file size (blocks, -f)

    The maximum size of files written by the shell and its children.
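
    When a process exceeds this limit (RLIMIT_FSIZE), it receives SIGXFSZ, which kills it by default; if the signal is ignored, the offending write fails with EFBIG instead. A minimal sketch; the 1 KB limit and the /tmp/fsize-demo path are arbitrary:

    /* fsize_limit.c - write past a lowered RLIMIT_FSIZE */
    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        signal(SIGXFSZ, SIG_IGN);   /* get EFBIG instead of being killed */

        struct rlimit rl = { .rlim_cur = 1024, .rlim_max = 1024 };  /* 1 KB */
        if (setrlimit(RLIMIT_FSIZE, &rl) != 0) {
            perror("setrlimit(RLIMIT_FSIZE)");
            return 1;
        }

        int fd = open("/tmp/fsize-demo", O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        memset(buf, 'x', sizeof buf);
        printf("first write: %zd bytes\n", write(fd, buf, sizeof buf)); /* truncated at 1 KB */
        if (write(fd, buf, sizeof buf) < 0)                             /* already at the limit */
            printf("second write failed: %s\n", strerror(errno));       /* EFBIG */
        close(fd);
        return 0;
    }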



  • virtual memory (kbytes, -v)

    The maximum amount of virtual memory available to the shell. Virtual memory maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.

    https://en.wikipedia.org/wiki/Virtual_memory
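
    This is RLIMIT_AS under the hood: once it is lowered, further address-space allocations (mmap(2), and therefore large malloc()s) fail with ENOMEM. A minimal sketch; the 64 MB and 256 MB figures are arbitrary:

    /* as_limit.c - map more address space than a lowered RLIMIT_AS allows */
    #define _DEFAULT_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl = { .rlim_cur = 64UL * 1024 * 1024,     /* 64 MB */
                             .rlim_max = 64UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit(RLIMIT_AS)");
            return 1;
        }
        /* 256 MB of anonymous memory is more address space than allowed */
        void *p = mmap(NULL, 256UL * 1024 * 1024, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            printf("mmap failed as expected: %s\n", strerror(errno)); /* ENOMEM */
        else
            printf("mmap succeeded (limit not reached)\n");
        return 0;
    }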