Linux – Running batch jobs and getting a peak memory usage summary for each job

linux memory

I want to execute a batch of jobs and afterwards get a summary of the peak memory requirement of each job. Running the jobs under a profiler such as Valgrind is not acceptable because it would slow them down too much.

For a running job I could read the value of VmPeak from /proc/JOBPID/status, where JOBPID is the PID of the job's process. But I need the job's all-time maximum memory requirement, so I would have to read VmPeak just before the process finishes; otherwise I would only get the peak usage up to the moment of the read, and it could still increase afterwards. So unless there is a way to read the VmPeak of a process that has already finished, this approach doesn't seem useful.

Any other ideas on how to get the maximum amount of memory allocated to a process from the moment it started to the moment it finished?
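For reference, this is how the VmPeak value mentioned above can be read for a live process (a minimal sketch, assuming Linux; `$$`, the shell's own PID, stands in for JOBPID here):

```shell
# Extract VmPeak (peak virtual memory size, in kB) from /proc/PID/status.
# $$ is this shell's own PID, used only for demonstration.
vmpeak=$(awk '/^VmPeak:/ {print $2}' /proc/$$/status)
echo "VmPeak: ${vmpeak} kB"
```

As the question notes, this only works while the process still exists; once it exits, its /proc entry disappears.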

Best Answer

Polling VmPeak looks like a good approach to me. You could either append the current VmPeak value to a file every x seconds and then take the highest value at the end, or, every x seconds, keep a running maximum: VmPeak = curVmPeak if curVmPeak > VmPeak. (Since VmPeak itself never decreases, the last sample you manage to read is already the maximum of all samples.)
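The polling idea can be sketched as a small wrapper script (assumptions: Linux /proc, a 1-second interval, and `sleep 3` as a stand-in for the real job command):

```shell
#!/bin/sh
# Launch the job in the background, then sample its VmPeak every second
# until it exits, remembering the last value read. Because VmPeak only
# ever grows, the final sample is the maximum observed.
sleep 3 &                 # placeholder job; replace with the real command
pid=$!
peak=0
while kill -0 "$pid" 2>/dev/null; do
    cur=$(awk '/^VmPeak:/ {print $2}' /proc/"$pid"/status 2>/dev/null)
    [ -n "$cur" ] && peak=$cur
    sleep 1               # polling interval ("x seconds")
done
echo "Peak virtual memory of job $pid: ${peak} kB"
```

The caveat from the question still applies: the process can exit between samples, so a late allocation made just before exit may be missed; a shorter interval narrows but does not close that window.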