Python Multiprocessing with Queue vs ZeroMQ IPC

Tags: multiprocessing, python

I am busy writing a Python application using ZeroMQ and implementing a variation of the Majordomo pattern as described in the ZGuide.

I have a broker as an intermediary between a set of workers and clients. I want to do some extensive logging for every request that comes in, but I do not want the broker to waste time doing that. The broker should pass that logging request to something else.

I have thought of two ways:

  1. Create workers that are only for logging and use the ZeroMQ IPC transport
  2. Use Multiprocessing with a Queue

I am not sure which one is better, or faster for that matter. The first option lets me reuse the worker base classes I already have for normal workers, but the second option seems quicker to implement. Rough sketches of both options follow below.
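For concreteness, here is a minimal sketch of option 1. The endpoint, message format, and file names are illustrative assumptions, not anything from the Majordomo code: a dedicated logging worker PULLs pre-formatted log lines over the IPC transport and writes them out.

    # logging_sink.py -- hypothetical stand-alone logging worker (option 1)
    import zmq

    def main():
        ctx = zmq.Context.instance()
        sink = ctx.socket(zmq.PULL)
        # ipc:// endpoints are Unix-domain sockets, so this is Unix-only
        sink.bind("ipc:///tmp/broker-logging.ipc")
        with open("requests.log", "a") as logfile:
            while True:
                line = sink.recv_string()  # one log line per message
                logfile.write(line + "\n")
                logfile.flush()

    if __name__ == "__main__":
        main()

On the broker side, a PUSH socket makes the send effectively fire-and-forget, so logging a request should not stall the broker's main loop (at least until the high-water mark fills up):

    # broker side (sketch): hand the log line off and move on
    import zmq

    ctx = zmq.Context.instance()
    log_socket = ctx.socket(zmq.PUSH)
    log_socket.connect("ipc:///tmp/broker-logging.ipc")
    log_socket.send_string("request received")  # message format is up to you

And a sketch of option 2 using the stdlib's QueueHandler/QueueListener (Python 3.2+), which is essentially "multiprocessing with a Queue" with the queue plumbing already written: the broker process only enqueues log records, and a listener drains the queue and does the slow I/O elsewhere.

    # Minimal sketch of option 2; process and file names are placeholders.
    import logging
    import logging.handlers
    import multiprocessing

    def broker(queue):
        # Inside the broker process a log call only puts the record on
        # the queue -- cheap compared to formatting and writing to disk.
        logger = logging.getLogger("broker")
        logger.addHandler(logging.handlers.QueueHandler(queue))
        logger.setLevel(logging.INFO)
        logger.info("request received")

    if __name__ == "__main__":
        queue = multiprocessing.Queue()
        # The listener does the actual writing, out of the broker's way.
        listener = logging.handlers.QueueListener(
            queue, logging.FileHandler("requests.log"))
        listener.start()
        proc = multiprocessing.Process(target=broker, args=(queue,))
        proc.start()
        proc.join()
        listener.stop()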

I would like some advice or comments on the above or possibly a different solution.

Best Answer

I like the approach of using standard tools, like what Jonathan proposed. You didn't mention which OS you are working on, but another alternative in the same spirit is to use Python's standard logging module together with logging.handlers.SysLogHandler and send the log messages to the rsyslog service (available on any Linux/Unix system; I believe there are Windows options too, but I've never used them).
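A minimal sketch of that route, assuming a local rsyslog listening on the default /dev/log Unix socket (the socket path and the facility choice are assumptions that vary per system):

    import logging
    import logging.handlers

    logger = logging.getLogger("broker")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(
        address="/dev/log",                                  # Linux default socket
        facility=logging.handlers.SysLogHandler.LOG_LOCAL1,  # any free "local" facility
    )
    handler.setFormatter(logging.Formatter("mdbroker: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("request received")  # rsyslog takes it from here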

Essentially, that whole system implements the same thing you are thinking of: your local process queues up log messages to be handled/processed/written by someone else. In this case, that someone else (rsyslog) is a well-known, proven service with a lot of built-in functionality and flexibility.
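To see the receiving side: assuming the LOG_LOCAL1 facility from the sketch above, a one-line rule dropped into /etc/rsyslog.d/ would route those messages to a file of your choosing (the paths here are just examples):

    # /etc/rsyslog.d/10-mdbroker.conf
    local1.*    /var/log/mdbroker.log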

Another advantage of this approach is that your product will integrate that much better with the other sysadmin tools built on top of syslog, and you barely have to write any code to get that option.
