XML Data – Methodology Behind Fetching Large XML Data Sets in Pieces

delphi, development-methodologies, http, web-services, xml

I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any standard formatting, such as SOAP. I have the system working seamlessly, except for one flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-data-based API around a database, implementing the common business rules, so the requests are specific to the data behind the system.

When, for example, I fetch a large set of product data, I would like to break it down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't keep feeding the client multiple XML packets unless the client explicitly requests them.
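To make "the client explicitly requests it" concrete: each chunk can simply be its own request/response pair. Below is a minimal sketch, assuming an Indy TIdHTTPServer; the TDataServer class, the "page"/"pageSize" parameter names, and the BuildProductsXml helper are all hypothetical, not part of any library.

    uses
      SysUtils, IdContext, IdCustomHTTPServer;

    // Hypothetical OnCommandGet handler on a data module (TDataServer).
    procedure TDataServer.HTTPServerCommandGet(AContext: TIdContext;
      ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
    var
      Page, PageSize: Integer;
    begin
      // Each chunk is an ordinary request/response pair: the client
      // asks for one page at a time instead of the whole dataset.
      Page := StrToIntDef(ARequestInfo.Params.Values['page'], 1);
      PageSize := StrToIntDef(ARequestInfo.Params.Values['pageSize'], 100);

      AResponseInfo.ContentType := 'text/xml';
      AResponseInfo.ContentText := BuildProductsXml(Page, PageSize);
    end;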

I don't have any session management, but rather an API key. I know that if I had sessions, I could temporarily keep a dataset alive for a client, and they could request it in bits and pieces. Without session management, however, I would have to execute the SQL query multiple times (once for each chunk of data), and in the meantime, if that data changes, the "pages" might get out of sync, causing items to appear on the wrong pages after navigating to a different page.
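For concreteness, the stateless re-query approach looks roughly like the sketch below, assuming FireDAC (TFDQuery) and SQL Server 2012+ OFFSET/FETCH syntax; the table and column names are illustrative. Because every page request re-runs the full query, rows inserted or deleted between requests shift the page boundaries.

    uses
      FireDAC.Comp.Client;

    // Naive stateless paging: every call re-executes the whole query.
    procedure FetchPage(Query: TFDQuery; Page, PageSize: Integer);
    begin
      Query.SQL.Text :=
        'SELECT * FROM Products ORDER BY ProductName ' +
        'OFFSET :Offset ROWS FETCH NEXT :PageSize ROWS ONLY';
      Query.ParamByName('Offset').AsInteger := (Page - 1) * PageSize;
      Query.ParamByName('PageSize').AsInteger := PageSize;
      Query.Open; // what lands on page N depends on the data at request time
    end;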

So how is this commonly handled? What's the methodology behind breaking a large XML dataset into chunks to reduce the load?

Best Answer

Decide on the maximum number of pages your user is expected to browse in one session. Do an initial fetch that gets the set of primary keys satisfying your criteria, up to that maximum, and return this set to your client. This process is performed only once. Each time the user requests the next or previous page, use the cached set of keys to get the desired rows based on the page size. This method always retrieves at most n rows, where n is your page size (after the initial key retrieval). When the user is done, flush the key cache. This method is especially useful when you have a complex query for which a simple SQL statement such as "SELECT * FROM ... WHERE Key > lastKey" won't work; see the sketch after the drawbacks below. The drawbacks of this approach are:

1 - This method ignores records added or removed after the initial browse request; however, this is usually acceptable in many types of LOB applications.

2 - This method requires fetching the keys in advance; however, if your maximum number of pages is reasonable, this should not be a problem, especially when the query is well-qualified.
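Here is a minimal sketch of this flow, assuming FireDAC (TFDQuery) and an integer primary key; the table, column, and routine names (FetchKeys, FetchRowsByKeys) are illustrative. The first routine backs the initial request and returns only the keys, which the client caches; the second backs each page request, where the client sends back the slice of keys for the page it wants.

    uses
      SysUtils, System.Generics.Collections, FireDAC.Comp.Client;

    // Initial request: run the expensive query once, selecting only the
    // primary keys, capped at MaxPages * PageSize rows. The resulting
    // list is serialized into the response and cached by the client.
    function FetchKeys(Query: TFDQuery; MaxRows: Integer): TList<Integer>;
    begin
      Result := TList<Integer>.Create;
      Query.SQL.Text :=
        'SELECT ProductID FROM Products ' +
        'WHERE Discontinued = 0 ORDER BY ProductName'; // the complex query
      Query.Open;
      while (not Query.Eof) and (Result.Count < MaxRows) do
      begin
        Result.Add(Query.Fields[0].AsInteger);
        Query.Next;
      end;
      Query.Close;
    end;

    // Page request: the client sends the keys for the page it wants,
    // so at most PageSize rows are fetched, no matter how complex the
    // original query was.
    procedure FetchRowsByKeys(Query: TFDQuery;
      const PageKeys: TList<Integer>);
    var
      I: Integer;
      IdList: string;
    begin
      if PageKeys.Count = 0 then
        Exit; // page is beyond the cached key set
      IdList := IntToStr(PageKeys[0]);
      for I := 1 to PageKeys.Count - 1 do
        IdList := IdList + ',' + IntToStr(PageKeys[I]);
      // Integer keys straight from the database, so the concatenation is
      // injection-safe; re-sort the rows by the key list afterwards if
      // the original sort order matters.
      Query.SQL.Text :=
        'SELECT * FROM Products WHERE ProductID IN (' + IdList + ')';
      Query.Open;
    end;

Because the client holds the key set, the server stays completely stateless, which fits the API-key model from the question: no server-side session has to be kept alive between page requests.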