Cron – How to Schedule Server Jobs More Intelligently

cron, scheduled-task

I run a job every minute to reindex my site's content.

Today, the search engine died, and when I logged in there were hundreds of orphan processes that had been started by cron.

Is there another way, using some kind of existing software, to execute a job every minute that won't launch another instance if the previous run hasn't returned (e.g. because the search engine process has failed)?

Best Answer

The problem isn't really with cron - it's with your job.

You will need to have your job interact with a lock of some description. The easiest way to do this is to have it attempt to create a directory: if that succeeds, continue; if not, exit. When your job has finished and exits, it should remove the directory, ready for the next run. Here's a script to illustrate.

#!/bin/bash

# Remove the lock directory when the script exits
function cleanup {
    echo "Cleanup"
    rmdir /tmp/myjob.lck
}

# Try to take the lock; if another instance already holds it, exit immediately
mkdir /tmp/myjob.lck || exit 1
trap cleanup EXIT
echo 'Job Running'
sleep 60    # stands in for the real work
exit 0

Run this in one terminal, then, before the 60 seconds are up, run it in another terminal; it will exit with status 1. Once the first process exits you can run it from the second terminal ...
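For example, assuming the script above is saved as myjob.sh (the filename is just for illustration), the two terminals would look something like this:

$ ./myjob.sh                  # terminal 1 - creates the lock directory
Job Running

$ ./myjob.sh                  # terminal 2, within the 60 seconds - lock already held
mkdir: cannot create directory '/tmp/myjob.lck': File exists
$ echo $?
1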

EDIT:

As I just learned about flock, I thought I'd update this answer. flock(1) may be easier to use; in this case flock -n would seem appropriate, e.g.

* * * * * /usr/bin/flock -n /tmp/myAppLock.lck /path/to/your/job

This would run your job every minute, but it would fail if flock could not obtain a lock on the file.
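If you want to convince yourself of the behaviour before putting it in the crontab, you can test it by hand in two shells; the lock path matches the cron line above, and sleep just stands in for a long-running job:

# Terminal 1: take the lock and hold it for five minutes
/usr/bin/flock -n /tmp/myAppLock.lck sleep 300

# Terminal 2: while the lock is held, this exits immediately with status 1
/usr/bin/flock -n /tmp/myAppLock.lck echo "got the lock"
echo $?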