Python Architecture – Designing Applications with Many Small Scripts

architecture, message-queue, python

I am building an application which, at the moment, consists of many small Python scripts.

Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script does a small unit of processing (for example, parsing the email and storing some database fields), then places an item on the next queue for further processing, until eventually the email has finished its journey through the various scripts and queues.
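To make the pattern concrete, here is a minimal local stand-in for that pipeline, using in-memory `queue.Queue` objects instead of SQS. The stage names (`parse_q`, `store_q`, `done_q`) and worker functions are made up for illustration; in the real system each worker would be its own script polling its own SQS queue.

```python
import queue

# Each "script" is modelled as a worker that reads from one queue
# and feeds the next, so the stages stay loosely coupled.
parse_q, store_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()

def parse_worker():
    """First stage: parse a raw email into fields."""
    while not parse_q.empty():
        raw = parse_q.get()
        subject, _, body = raw.partition("\n")
        store_q.put({"subject": subject, "body": body})

def store_worker():
    """Second stage: store the fields somewhere, then pass the item on."""
    while not store_q.empty():
        item = store_q.get()
        # ... INSERT into the database would happen here ...
        done_q.put(item)

parse_q.put("Hello\nmessage body")
parse_worker()
store_worker()
result = done_q.get()
print(result)
```

Because each stage only knows about its inbound and outbound queues, a stage can be rewritten, restarted, or scaled out without the others noticing, which is exactly the loose coupling described above.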

What I like about this approach is that it is very loosely coupled.

However, I'm not sure how I should run this in production. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application?

Specific questions:
How should I run each of these scripts – as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If so, should I have some program which orchestrates this?

Or is the idea of many small scripts not a good one? Would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue?
What is the current preferred approach to daemonising Python scripts?
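One common answer to the daemonizing question is not to daemonize in Python at all, but to write each script as a plain foreground process and let a process supervisor (supervisord, systemd, upstart, etc.) handle backgrounding, restarts, and logging. As a sketch, a supervisord entry for one of the queue scripts might look like this (program name and paths are examples only):

```ini
; Hypothetical supervisord entry for one pipeline stage.
[program:parse_email]
command=/usr/bin/python /opt/app/parse_email.py
autostart=true
autorestart=true
stderr_logfile=/var/log/app/parse_email.err.log
```

With one `[program:...]` section per script, the supervisor doubles as the "restart monitor" asked about above, and no daemonization code needs to live in the scripts themselves.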

Broadly I would welcome any comments or opinions on any aspect of this.

Thanks.

Best Answer

If this application needs to stay running for a long time, then build in resilience and have a monitor restart it – but also kill it at random times (kill the monitor too, though less frequently). That way you are constantly exercising the monitor and restart capability of the system, so when some true system failure shafts your system it will be more likely to recover smoothly.
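A minimal sketch of that idea, assuming each worker is a separate process: a supervising loop restarts the worker whenever it exits, and deliberately kills a healthy worker at random so the restart path is exercised continuously. The worker command here is a stand-in (`time.sleep`) for a real queue-polling script, and the parameters are illustrative.

```python
import random
import subprocess
import sys
import time

# Stand-in for a real worker script; in practice this would be
# something like ["/usr/bin/python", "/opt/app/parse_email.py"].
WORKER_CMD = [sys.executable, "-c", "import time; time.sleep(60)"]

def supervise(max_runtime=3.0, kill_probability=0.6, poll_interval=0.2):
    """Restart the worker whenever it dies, and occasionally kill it
    on purpose so the restart machinery is always being tested."""
    restarts = 0
    deadline = time.monotonic() + max_runtime
    proc = subprocess.Popen(WORKER_CMD)
    while time.monotonic() < deadline:
        if proc.poll() is not None:
            # Worker has exited (crash or chaos kill): restart it.
            restarts += 1
            proc = subprocess.Popen(WORKER_CMD)
        elif random.random() < kill_probability:
            # Chaos step: terminate a perfectly healthy worker.
            proc.terminate()
            proc.wait()
        time.sleep(poll_interval)
    proc.terminate()
    proc.wait()
    return restarts

restarts = supervise()
print(f"worker was restarted {restarts} times")
```

In a real deployment the random kills would run less often (and the monitor itself would occasionally be killed by something above it), but the principle is the same: a recovery path that runs every day is far more trustworthy than one that only runs during a genuine outage.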