Linux – bash script returns “out of memory” in cron, but not in shell

bash, centos, cron, linux, rsync

I'm running a nightly bash script to sync a remote folder (source) with a local folder (target). I've tested this rsync-based script and it works fine in a root shell. It takes time, since there are hundreds of gigs to copy, but it works.

Once I run it from crontab, though, my server runs out of memory.

My server has 8 GB of RAM and 4 GB of swap, and, as I said, the script never goes OOM when run manually from a shell. It's a default CentOS 5.5 installation. I could split the load and sync the second-level directories one by one in a find/for loop (see the sketch after the script below), but I'd like to keep it simple and only sync the top-level directories.

I cannot run many tests, since this server hosts websites and other services and I can't afford to hang it just for testing purposes. Do you know a setting that would allow cron to finish this job normally?

#!/bin/bash

BACKUP_PATH="/root/scripts/backup"

rsync -av --delete /net/hostname/source/ /export/target/ \
    > "$BACKUP_PATH/backup_results_ok" \
    2> "$BACKUP_PATH/backup_results_error"
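For reference, the split approach mentioned above could look roughly like the sketch below. It reuses the source, target, and log paths from the script above; everything else is an assumption, and note that with per-directory runs, --delete no longer removes a top-level directory that has disappeared from the source.

#!/bin/bash
# Sketch: sync each top-level directory of the source separately so each
# rsync run builds a smaller in-memory file list.

BACKUP_PATH="/root/scripts/backup"
SRC="/net/hostname/source"
DST="/export/target"

for dir in "$SRC"/*/; do
    name=$(basename "$dir")
    rsync -av --delete "$dir" "$DST/$name/" \
        >> "$BACKUP_PATH/backup_results_ok" \
        2>> "$BACKUP_PATH/backup_results_error"
done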

Edit: the cron configuration is at its defaults, as is /etc/security/limits.conf, which is entirely commented out.

Best Answer

Even though limits.conf is commented out, I suggest you test it just to make sure. One way would be to create a cron job that contains something like "ulimit -a | mail -s 'limits' me@example.com" so the info is emailed to you. Once you know what the limits are, you can reset them in the shell script that actually runs the rsync:
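For example, a one-off diagnostic entry in root's crontab (crontab -e) might look like the line below; the schedule and address are placeholders:

# Hypothetical one-off entry: mail the limits that cron jobs actually run under
0 3 * * * ulimit -a | mail -s 'cron limits' me@example.com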

#!/bin/bash
# Lift per-process memory limits before starting the transfer
ulimit -d unlimited   # max data segment size
ulimit -m unlimited   # max resident set size
ulimit -s unlimited   # max stack size
rsync [...]
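To confirm the raised limits actually take effect under cron, a line like the following could be added near the top of the script to log them before the transfer starts (the log path reuses the question's backup directory and is only a suggestion):

# Optional: record the limits the cron-run script actually sees
ulimit -a > /root/scripts/backup/backup_limits 2>&1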