Design Pattern – Handling Large Amounts of Overflowing Data

Tags: architecture, data, design-patterns, file-storage, message-queue

Our current queues publish messages that are consumed by 3rd-party services with rate limits. Currently, failed messages are retried with exponential back-off. However, there could be cases where data is coming in so fast that the retries will never catch up.

Most of the 3rd-party services offer an alternative batch import, and the solution I've come up with so far is to write the overflowing data to files to be processed out of band.
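A minimal sketch of that idea, assuming an in-memory bounded queue and a JSON Lines spill file (the `SpillQueue` name and API are illustrative, not from any particular library):

```python
import json
import queue

class SpillQueue:
    """Bounded queue that spills overflow to a file for out-of-band batch import."""

    def __init__(self, maxsize, spill_path):
        self._q = queue.Queue(maxsize=maxsize)
        self._spill_path = spill_path

    def publish(self, message):
        try:
            # Fast path: hand the message to the normal consumer pipeline.
            self._q.put_nowait(message)
            return "queued"
        except queue.Full:
            # Overflow: append one JSON object per line for a later batch import.
            with open(self._spill_path, "a") as f:
                f.write(json.dumps(message) + "\n")
            return "spilled"

    def drain_spill_file(self):
        # The batch-import job reads the spilled messages back in order.
        with open(self._spill_path) as f:
            return [json.loads(line) for line in f]
```

The key property is that the producer never blocks and no message is lost; the spill file becomes the input to the 3rd party's batch-import endpoint.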

Are there any design patterns for storing overflowing data?

Best Answer

The main problem as described is that the producers are faster than the consumers. This reminds me a lot of http://ferd.ca/queues-don-t-fix-overload.html . Reactive Streams is an initiative I noticed recently that aims to provide a solution to this kind of setting.

You can have a look at any Queue oriented software products like:

  • Akka (letitcrash.com is their blog with some interesting general posts) or
  • ZeroMQ (Their guide offers some setups applicable to any Queue system)

to see how you can deal with producers that outpace their consumers.

However, the main question remains: how do you want to deal with overflow from a business point of view? Since your consumers (the 3rd-party services) are limited in the number of messages they can handle, your approach of batch-importing buffered messages seems reasonable. Aggregating or even dropping messages might also be viable, depending on your scenario.

No matter how you want to react to overflow, your queue should be made aware of it (i.e. by introducing back-pressure), and then you can apply your own strategy, which will depend on your requirements.
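One way to make the strategies concrete: a bounded queue turns overflow into an explicit signal the producer can act on. The sketch below (policy names are my own, not a standard API) contrasts blocking back-pressure with load-shedding:

```python
import queue

def publish_with_policy(q, message, policy="block", timeout=1.0):
    """Publish to a bounded queue, handling overflow per the chosen policy."""
    if policy == "block":
        # Back-pressure: the producer waits until a consumer frees a slot.
        # Raises queue.Full if the consumer never catches up within timeout.
        q.put(message, timeout=timeout)
        return "accepted"
    elif policy == "drop":
        # Load shedding: on overflow the message is discarded immediately.
        try:
            q.put_nowait(message)
            return "accepted"
        except queue.Full:
            return "dropped"
    raise ValueError(f"unknown policy: {policy}")
```

With a `maxsize` of 1, the second `drop`-policy publish returns `"dropped"` instead of silently piling up work, which is exactly the visibility the queue needs in order to apply any strategy at all.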

In my last project I ended up having consumers pull messages from the queue system, rather than having the queue push messages to them:

  • The producers were not hogging resources that could be used to work through the existing message pile.
  • The workers processing the messages could fetch new messages when they were ready without the need for scheduled pushing of new messages onto them.
  • I had a guaranteed downtime at specific intervals of the producers that allowed me to catch up since I could not drop any messages.
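The pull model above can be sketched as a worker loop that fetches a message only when it is ready, with a simple sleep standing in for the 3rd party's rate limit (the function and parameter names are illustrative):

```python
import queue
import time

def pull_worker(q, handle, max_per_second):
    """Drain a queue at the worker's own pace, respecting a rate limit."""
    interval = 1.0 / max_per_second
    results = []
    while True:
        try:
            # The worker pulls; nothing is pushed onto it on a schedule.
            msg = q.get_nowait()
        except queue.Empty:
            # Queue drained, e.g. during the producers' guaranteed downtime.
            break
        results.append(handle(msg))
        q.task_done()
        time.sleep(interval)  # stay under the downstream rate limit
    return results
```

Because the worker controls its own fetch rate, slow consumers simply leave messages on the queue instead of failing and retrying, and catch-up happens naturally whenever the producers pause.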

I hope this provides some handles that allow you to find your own solution.
