C++ – Which is better for local IPC, POSIX message queues (mqueues) or Unix domain (local) sockets

c ipc posix sockets

Is it better to use POSIX message queues or Unix domain sockets for local IPC communication?

I have worked with sockets between machines (TCP, not Unix domain) and I remember that making and breaking the connection would cause sockets to linger for a while before they finally went away. Moreover, if you wanted a "reliable" exchange you either had to use TCP or design the application to return an ACK. I'm not certain whether this also applies to Unix domain sockets, though.

In my current project we need local IPC. My first reaction was to use POSIX MQueues, since I've used them before for local messaging. However, a co-worker is suggesting Unix domain sockets instead.

Is one better than the other, or is it a matter of programming familiarity? Or perhaps it depends on the application being created?

At a high level, the application we are working on follows a client/server model. The clients send messages to the server to "do something". However, the client doesn't wait for an "it's done" response — although they do want to know whether their request has been received or not.

The basic logic for the send side is:

connect to server
send request
note if the send worked or not
disconnect from server

There can be hundreds of clients to the one server.

We're running on an SMP system (4-8 cores) under Linux.

Thanks in advance.

Best Answer

UNIX domain sockets do not have to "linger" in a TIME_WAIT-like status, since that wait time is used in case there are stray packets from the connection still wandering around the Internet. The concern doesn't apply locally.

UNIX domain sockets can be either SOCK_STREAM (like TCP) or SOCK_DGRAM (like UDP), with the added guarantee that UNIX domain datagram sockets are reliable and don't re-order datagrams.

You will still need some kind of ACK (you do even with TCP) if you want to be certain that the other application has read the message you sent; after all, even if send() succeeded, the receiver may have crashed before it had a chance to process the message. (This applies to message queues too — to be totally sure that a message won't be lost, the receiving application must write the request to a journal, flush that to disk, and only then send back an acknowledgement.)

I agree that the choice is essentially a matter of programming familiarity.