There are numerous solutions you may want to take a look at:
Torque - A variation of the original PBS (Portable Batch System) code base. It's called a resource manager because, strictly speaking, it doesn't schedule jobs itself, although it does ship with several basic schedulers. It will, however, manage and allocate your compute nodes' CPU, memory, file, and other consumable resources. If you have anything more than very basic scheduling needs, you'll probably want to supplement it with the Maui Cluster Scheduler. I know the most about this one because it's what we use. It can be a bit rough around the edges since it's mostly community developed, and most of the developers are sysadmins rather than software engineers. A commercial product called PBS Professional spawned from the same PBS code base; it seems more mature and is available for a relatively modest fee.
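To give a feel for how Torque/PBS jobs are described, here's a minimal job script sketch. The directive syntax (`#PBS -l`, `qsub`) is standard PBS; the job name, resource amounts, and `./my_program` are hypothetical placeholders, and exact resource names can vary by site:

```shell
#!/bin/sh
# Minimal Torque/PBS job script (illustrative values).
#PBS -N example_job                 # job name (hypothetical)
#PBS -l nodes=1:ppn=4               # request 1 node, 4 processors per node
#PBS -l mem=2gb,walltime=01:00:00   # memory and wall-clock limits
#PBS -j oe                          # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"                 # start in the directory qsub was run from
./my_program                        # placeholder for your actual workload
```

You'd submit this with `qsub job.sh` and check its status with `qstat`; the resource manager holds the job in a queue until the scheduler finds nodes matching the `-l` request.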
Sun Grid Engine - Similar to the PBS-based systems, but written by Sun. The resource manager and scheduler are more tightly integrated in this system, and it offers a few different modes of operation and resource allocation. Despite being a Sun product, it reportedly runs well on Linux and other operating systems, not just Solaris.
Platform LSF - Another popular commercial offering in the same space.
Condor - Another batch scheduling system, better suited to high-throughput workloads consisting of many short jobs.
SLURM - Another open source offering. It's not quite as mature as the PBS-based products, but it has a nicer, plugin-based architecture and is easy to install if you go with the CAOS NSA Linux distribution and the Perceus cluster manager. See this Linux Magazine article for an example of how easy it is to get up and running.
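For comparison with the PBS-style script above, here is the equivalent sketch in SLURM's syntax. The `#SBATCH` directives and `sbatch`/`srun` commands are standard SLURM; the job name, resource amounts, and `./my_program` are hypothetical placeholders:

```shell
#!/bin/sh
# Minimal SLURM batch script (illustrative values).
#SBATCH --job-name=example_job      # job name (hypothetical)
#SBATCH --nodes=1                   # request 1 node
#SBATCH --ntasks=4                  # 4 tasks (CPU cores)
#SBATCH --mem=2G                    # memory limit
#SBATCH --time=01:00:00             # wall-clock limit

srun ./my_program                   # placeholder for your actual workload
```

Submission is `sbatch job.sh`, and `squeue` shows pending and running jobs; note how closely the model mirrors PBS, which makes migrating between the two fairly painless.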
Which one of these you pick is largely a matter of preference and of matching it up with your requirements. I would say that Torque and SGE lean slightly toward multi-user clusters in a scientific computing environment. Based on what I've seen of Altair's PBS Professional, it looks far more suitable for a commercial environment and has a better suite of tools for developing product-specific workflows. The same goes for LSF.
SLURM and Condor are probably the easiest to get up and running, and if your requirements are relatively modest, they may be the best fit. However, if you need more complicated scheduling policies and have many users submitting jobs to your systems, they may fall short without being supplemented by an external scheduler.
I wouldn't worry about the load on the "crontab program" (cron) itself; it's your overall system load you might want to pay attention to. Look at metrics (CPU utilization, I/O rates, web query response times) while your job(s) are running: is there a noticeable spike? Is it bad enough that it's disrupting actual use of the system?
If the programs "don't take long", that's a good sign that it's not a problem.
If you're still concerned, you can do other things to limit the load: run the jobs with nice to reduce their priority, run them sequentially instead of simultaneously, and so forth.
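Both of those ideas fit in a single crontab entry. A sketch, assuming two hypothetical nightly jobs at made-up paths:

```shell
# crontab fragment: at 2:30 AM, run two jobs one after the other at the
# lowest CPU priority (nice 19). The && ensures job2 only starts after
# job1 finishes successfully, so they never compete for resources.
30 2 * * *  nice -n 19 /usr/local/bin/job1 && nice -n 19 /usr/local/bin/job2
```

If the jobs are I/O-heavy rather than CPU-heavy, `nice` alone won't help much, since it only lowers CPU scheduling priority.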
Check out mcollective. Their introductory screencast is actually quite good.