Linux IPC – Handling Huge Transactions in Linux

Tags: c, linux, multithreading

I'm building an application that requires a huge number of transactions/sec of data, and I need to use IPC for the multithreaded, multiprocess communication. I know there are a lot of methods that could be used, but I'm not sure which one to choose for this application.

This is what the application is going to have:
4 processes, each process has 4 threads, and the data chunk that needs to be transferred between two or more threads is around 400 KB. I found that a FIFO would be a good choice, except that its buffer is 64 KB, which is not that big, so I'd need to modify and recompile the kernel, but I'm not sure if that's the right thing to do?
Anyway, I'm open to any suggestions, and I'd like to squeeze your experience on this 🙂 and I appreciate it in advance.

Best Answer

I mainly program for Windows, but I think the solutions are similar for Linux. I would use one of the following:

  • Named pipes: Fast, have a built-in client/server paradigm, and work nicely with multiple threads, e.g. multiple readers / single writer, etc.
  • Shared memory: Probably the fastest method of IPC, but you need to do your own synchronization, resource locking, etc.
  • Sockets: Same idea as pipes, a little bit slower, but they can communicate between machines or over the internet.

I would go for named pipes. I use them a lot, and they are fast and reliable. Also, if you need to move that much data per second (I don't know exactly how much), then maybe you need to rethink your approach. Extremes often point to bad design choices.

EDIT:

For named pipes, here is a basic example.