Finding a solution for interleaving multiple data messages into one network stream (in the Qt framework)

data networking stream-processing

I need to find a solution for high-throughput network data interleaving in my Qt-based Windows application.

My application uses a client/server architecture. The client connects to a server and both can exchange messages of arbitrary size. A message is just a piece of data with a header describing it.

The following limitations must be considered:

  • Message size is known in advance on the sender side (on the receiver side only after the first message header has been received)
  • There is only one network stream (one open port)
  • Connection-oriented protocol (TCP/IP) if possible
  • Wireless network connection (WLAN), typically 5-10 MB/s net throughput
  • Messages should be sent as fast as possible
  • Large messages must not block smaller messages

Because there can only be one data stream (per connection), I decided to use data interleaving so that one large message cannot block several smaller messages. For this purpose I allow message fragmentation and introduced three message priority classes. The message header contains the total size, payload offset, and payload size, so that an interleaved message can be reassembled on the receiver side.
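For illustration, the header could look roughly like this; the field names, and the messageId field that lets the receiver tell interleaved messages apart, are my assumptions, not a prescribed wire format:

    #include <QtGlobal>

    // Hypothetical fragment header; names and field widths are
    // illustrative, not an actual wire format.
    #pragma pack(push, 1)
    struct FragmentHeader {
        quint32 messageId;     // assumed: lets the receiver match fragments
        quint8  priority;      // one of the three priority classes
        quint64 totalSize;     // total payload size of the whole message
        quint64 payloadOffset; // where this fragment's payload belongs
        quint32 payloadSize;   // payload bytes following this header
    };
    #pragma pack(pop)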

Example of large messages:

  • Video stream
  • Firmware update

Examples of small messages:

  • Status messages (workload, battery status, …)
  • Command messages

A naive approach would be to send all messages one after another. However, this would block smaller high-priority messages for a certain amount of time, because video stream messages, for example, utilize the network connection fully. That would not be acceptable. So instead of sending each message in one piece, I decided to split all messages into 8 KB chunks and interleave those. Although I have no good idea yet how to optimally interleave the individual message fragments, I think there are several scheduling strategies that could be used here. However, that is not my problem.

+----------------------------------+
| Message                          |
+--+-------------------------------+
|  | Header 1                      |
+--+-------------------------------+
|  | Payload (max. 8 KB)           |
+--+-------------------------------+
| ...                              |
+--+-------------------------------+
|  | Header N                      |
+--+-------------------------------+
|  | Payload (max. 8 KB)           |
+--+-------------------------------+
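Sticking with the hypothetical FragmentHeader above, splitting a message into such fragments could look roughly like this. The frame() helper copies the raw header for brevity; real code should serialize the fields in a fixed byte order (e.g. with QDataStream):

    #include <QByteArray>
    #include <QList>
    #include <QtGlobal>

    static const int kChunkSize = 8 * 1024; // 8 KB of payload per fragment

    // Naive framing: copy the raw header bytes in front of the payload.
    QByteArray frame(const FragmentHeader &h, const QByteArray &chunk)
    {
        QByteArray out(reinterpret_cast<const char *>(&h), sizeof(h));
        out.append(chunk);
        return out;
    }

    // Split one message into framed 8 KB fragments.
    QList<QByteArray> fragment(quint32 messageId, quint8 priority,
                               const QByteArray &payload)
    {
        QList<QByteArray> fragments;
        for (int offset = 0; offset < payload.size(); offset += kChunkSize) {
            FragmentHeader h;
            h.messageId     = messageId;
            h.priority      = priority;
            h.totalSize     = quint64(payload.size());
            h.payloadOffset = quint64(offset);
            h.payloadSize   = quint32(qMin<qint64>(kChunkSize,
                                                   payload.size() - offset));
            fragments.append(frame(h, payload.mid(offset, int(h.payloadSize))));
        }
        return fragments;
    }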

Suppose that at a certain point in time there is a single very large message to be sent and no other message in the output queue, so there is nothing to interleave. A naive approach would be to hand the whole message to the socket. But suppose that shortly after the large message starts being sent, several high-priority small messages get queued. Since the large message has already been passed to the socket, these new small messages have to wait until it has been sent completely. This case must be prevented!

I had many different ideas, but none of them was satisfactory.

One idea was to feed the socket just enough data that it never runs dry, but not so much that too much data piles up in the socket buffer. This would ensure that at any time smaller messages can be inserted into the data stream without major delay. The problem is: how can I find out how much data is optimal so that throughput stays at maximum? And how can I get notified that data has been sent and how much data is left in the socket buffer?
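Within Qt at least, QIODevice (and thus QTcpSocket) provides bytesToWrite(), which reports how much data is still sitting in Qt's internal write buffer, and the bytesWritten() signal, which fires as that buffer drains; the OS send buffer underneath remains opaque, though. A throttling sketch along those lines, with an arbitrary low-water mark to tune:

    #include <QByteArray>
    #include <QObject>
    #include <QTcpSocket>

    // Keep only a couple of chunks queued inside Qt's write buffer, so
    // a high-priority chunk never waits behind megabytes of already
    // committed low-priority data. kLowWater is an arbitrary starting
    // point to tune, not a derived optimum.
    static const qint64 kLowWater = 2 * 8 * 1024;

    class Sender : public QObject
    {
        Q_OBJECT
    public:
        explicit Sender(QTcpSocket *socket, QObject *parent = nullptr)
            : QObject(parent), m_socket(socket)
        {
            // bytesWritten() fires as Qt flushes data towards the OS;
            // use it as the cue to top the write buffer up again.
            connect(m_socket, &QTcpSocket::bytesWritten,
                    this, &Sender::fillSocket);
        }

        void fillSocket()
        {
            // hasChunks()/nextChunk() are hypothetical accessors for
            // the output chunk queue.
            while (m_socket->bytesToWrite() < kLowWater && hasChunks())
                m_socket->write(nextChunk());
        }

    private:
        bool hasChunks() const;
        QByteArray nextChunk();
        QTcpSocket *m_socket;
    };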

Did I miss something? Is there even a better solution? I'm open for new/better ideas.

Edit: Some more clarification

I also considered the idea of filling priority queues and inserting higher-priority chunks in front of the low-priority chunks. The problem that occurs is as follows:

I have to hold back data chunks in a queue and must not send them to the socket too quickly. But once data is in the socket buffer, I have no more control over it. This can lead to a situation where there is enough data to send, but the socket runs out of data and sits idle, which decreases throughput. How can I determine the optimal fill level of the socket buffer so that the socket never runs dry? The problem may be even worse: as far as I know, there is no possibility to be informed about the state of the buffer, neither whether all data has been sent out nor how much data is left in the socket buffer.

Best Answer

You are splitting large messages up into chunks when you have multiple messages to send, so that one large video message doesn't interfere with other high-priority messages.
This means that the receiver must always be able to receive large messages in multiple chunks.
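A minimal sketch of such a receiver, reassembling from the total size and payload offset carried in the hypothetical FragmentHeader above; it relies on TCP delivering each fragment exactly once:

    #include <QByteArray>
    #include <QHash>
    #include <QtGlobal>
    #include <cstring>

    // Accumulates fragments per message id and hands out the message
    // once every byte has arrived.
    class Reassembler
    {
    public:
        // Returns true and fills 'complete' when the message is done.
        bool addFragment(const FragmentHeader &h, const QByteArray &chunk,
                         QByteArray &complete)
        {
            Partial &p = m_partials[h.messageId];
            if (p.buffer.isEmpty())
                p.buffer.resize(int(h.totalSize));

            // Fragments may interleave with other messages, but TCP
            // delivers each one exactly once, so counting bytes suffices.
            std::memcpy(p.buffer.data() + h.payloadOffset,
                        chunk.constData(), size_t(chunk.size()));
            p.received += chunk.size();

            if (p.received == qint64(h.totalSize)) {
                complete = p.buffer;
                m_partials.remove(h.messageId);
                return true;
            }
            return false;
        }

    private:
        struct Partial { QByteArray buffer; qint64 received = 0; };
        QHash<quint32, Partial> m_partials;
    };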

As the receiver must expect large messages to be split into chunks anyway, why not split all messages into chunks, even when there is nothing else to send? Then the sender can have one (or more) producers filling a priority queue of chunks to be sent, and one process that checks whether there are chunks in the queue and sends the first one.

If you start a large transfer, the queue will be filled with a large number of low-priority chunks. If halfway through there is a high-priority message (or rather chunk) to send, it is placed in the queue in front of the low-priority chunks and gets sent out at the next opportunity.
The maximum delay the high-priority message incurs is the time it takes to transfer one full-size chunk: at the stated 5-10 MB/s, an 8 KB chunk takes roughly 0.8-1.6 ms.
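A minimal sketch of such a chunk queue, assuming the three priority classes from the question; the highest non-empty class always drains first:

    #include <QByteArray>
    #include <QQueue>

    // Three FIFO queues, one per priority class; dequeue always drains
    // the highest non-empty class first, so a newly enqueued
    // high-priority chunk overtakes all pending low-priority chunks.
    class ChunkQueue
    {
    public:
        enum Priority { High = 0, Normal = 1, Low = 2 };

        void enqueue(Priority p, const QByteArray &chunk)
        {
            m_queues[p].enqueue(chunk);
        }

        bool isEmpty() const
        {
            return m_queues[High].isEmpty()
                && m_queues[Normal].isEmpty()
                && m_queues[Low].isEmpty();
        }

        QByteArray dequeue()
        {
            for (QQueue<QByteArray> &q : m_queues)
                if (!q.isEmpty())
                    return q.dequeue();
            return QByteArray(); // caller should check isEmpty() first
        }

    private:
        QQueue<QByteArray> m_queues[3];
    };

Combined with the bytesToWrite() throttling sketched in the question, the sender would pop from this queue only when the socket's write buffer has drained below the low-water mark, so at most a chunk or two are ever committed ahead of a newly arriving high-priority chunk.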
