Removing resource limits on Solaris 10

Tags: solaris-10, ulimit

How should one remove all potential artificial resource limitations for a process?

I just saw a case where a server application consumed so many resources that some limit was hit. Other shell sessions on the same server were all extremely slow, apparently waiting for something to free up (e.g. prstat took 5 minutes to start). It wasn't a CPU or memory problem, so I think it has something to do with ulimits / projects.

I already managed to raise the maximum number of open files to 500 000, which helped a little. However, something else is still maxed out and I can't figure out which resource it is. I could probably get an in-house administrator to check this, but I would like to understand how to make sure there are no limitations at all!
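For context, this is roughly how I raised the open-files limit via the project database (a sketch; applying it to the "default" project is an assumption, and the PID in the verification step is a placeholder):

```shell
# Sketch: raise the per-process file descriptor limit for the "default"
# project using Solaris 10 resource controls (run as root).
projmod -s -K "process.max-file-descriptor=(basic,500000,deny)" default

# Verify the value as seen by an already-running process
# (1234 is a placeholder PID):
prctl -n process.max-file-descriptor -i process 1234
```

Note that projmod changes only apply to new processes started under the project; existing processes keep their old limits.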

If you think I am going about this the wrong way (e.g. it would be better to figure out which specific limit should be tuned), please feel free to point me in the right direction. I know the technical side in general – it's just Solaris 10 that is giving me a headache :/

Best Answer

Solaris 10 uses "projects" if you want to manually manage resources on a per-user or per-group basis.

/etc/project will list the existing setup, which usually only limits the initial shared memory to 8GB for the "default" project. Nothing else gets limited by default. All users that are not root or system users are subject to the "default" project by, uh, default. id -p will show what project a user belongs to.
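To see what applies in your case, something along these lines (the project name "default" is assumed here):

```shell
# Show which project the current user runs under
id -p

# List the project database entries
cat /etc/project

# Show all resource controls, with their current values and actions,
# for the "default" project
prctl -i project default
```

prctl with a specific control name (e.g. -n project.max-shm-memory) narrows the output to a single limit if the full listing is too noisy.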

If you suspect ulimits are at play, ulimit -a will list the current user restrictions which you can then change.
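For example (the exact values will vary by system; raising the soft limit to the hard limit only works up to whatever the hard limit is):

```shell
# List all per-process limits for the current shell
ulimit -a

# Show just the soft open-files limit
ulimit -n

# Raise the soft open-files limit up to the current hard limit
ulimit -n "$(ulimit -Hn)"
```

Remember that ulimit changes affect only the current shell and its children, so they need to go into the service's startup script (or the project database) to stick.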

The symptoms you describe do, however, make it sound more like a physical limit than a configured one. Solaris 10 is pretty sharp at managing itself these days.
