On-demand streaming of data through a web service
I have an assignment to expose a service that will deliver potentially very large amounts of data (gigabytes). It will therefore have to stream data on demand, so that data is not buffered in memory. The data will undergo the following steps before being sent to the client.
- Extract data from database
- Serialize data to XML
- Compress the XML data with gzip
- Send data to the client as a stream
Step 3 might be left out, as compression can be handled by WCF. Is there a recommended way to do this without buffering large amounts of data at any step? With data sizes of perhaps 100 GB, buffering would obviously crash the application.
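For reference, here is a rough sketch of the pipeline I have in mind, with rows flowing straight from a data reader through an `XmlWriter` and `GZipStream` to the output (the connection string, table, and column names are just placeholders):

```
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression;
using System.Xml;

public static class DataStreamer
{
    // Rows flow straight from the data reader, through the XML writer and
    // the gzip stream, onto the output; no full copy is held in memory.
    public static void WriteCompressedXml(Stream output, string connectionString)
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
        using (var xml = XmlWriter.Create(gzip))
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Payload FROM BigTable", conn))
        {
            conn.Open();
            xml.WriteStartElement("rows");
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())   // one row at a time
                {
                    xml.WriteStartElement("row");
                    xml.WriteElementString("id", reader.GetInt32(0).ToString());
                    xml.WriteElementString("payload", reader.GetString(1));
                    xml.WriteEndElement();
                }
            }
            xml.WriteEndElement();
        }
    }
}
```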
Since this is an assignment I am not sure what constraints you have or what the basic purpose of the exercise is, but optimizing a data transfer service like this, and making it stable, is not trivial. The chance of a communication problem occurring is substantial, so you will need to handle that possibility. But you don't want to simply start over when a problem occurs, since that would waste all the work done up to the point of failure.
At a basic level, the service should break the data into manageable pieces (say, 100 KB, depending on network speed, stability, and environment). The chunk size is a balance between the likelihood of errors and the overhead of requesting each chunk: if the likelihood of errors is high, chunks should be smaller.
This also addresses the problem of buffering huge amounts of data in memory, but a robust error-handling mechanism is equally important. The service should therefore expose one method to initiate a request, which responds to the client with the total size of the data stream and the number of chunks, and another method to request a specific chunk of data.
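As a minimal sketch (WCF assumed, since you mention it; all names here are illustrative rather than any standard API), such a contract might look like this:

```
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IChunkedTransfer
{
    // Starts a transfer and tells the client how much data to expect
    // and how many chunks to request.
    [OperationContract]
    TransferInfo Initiate(string datasetId, int chunkSize);

    // Returns a single chunk by index, so any chunk can be re-requested
    // after a failure without restarting the whole transfer.
    [OperationContract]
    byte[] GetChunk(string transferId, int chunkIndex);
}

[DataContract]
public class TransferInfo
{
    [DataMember] public string TransferId;
    [DataMember] public long TotalBytes;
    [DataMember] public int ChunkCount;
}
```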
The client could optionally specify the chunk size, or the protocol could be designed to automatically adjust the chunk size in response to error conditions. That is, the chunk size should generally be reduced if errors are occurring frequently.
Either way, after initiating the request, the client calls the second method to request specific chunks sequentially, appending each successfully received chunk to the file on its end. If a failure occurs, the client can re-request just that chunk.
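A client loop for that protocol might look roughly like the following (again just a sketch; the retry limit is arbitrary, and a real client might also shrink the chunk size when retries become frequent, as described above):

```
using System;
using System.IO;

public static class ChunkedClient
{
    // Fetch chunks in order, appending each to the output file.
    // On failure, only the current chunk is retried (up to 3 times here;
    // the limit is arbitrary for this sketch).
    public static void Download(IChunkedTransfer service, string datasetId,
                                string outputPath, int chunkSize)
    {
        TransferInfo info = service.Initiate(datasetId, chunkSize);
        using (var file = new FileStream(outputPath, FileMode.Create))
        {
            for (int i = 0; i < info.ChunkCount; i++)
            {
                byte[] chunk = null;
                for (int attempt = 0; chunk == null; attempt++)
                {
                    try { chunk = service.GetChunk(info.TransferId, i); }
                    catch (Exception) when (attempt < 3) { /* retry this chunk only */ }
                }
                file.Write(chunk, 0, chunk.Length);
            }
        }
    }
}
```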
Finally, sending huge amounts of data in XML format is probably very inefficient, unless there is a very large amount of data compared to markup. That is, if the data structure has many elements (fields, records) compared to the volume of information contained by each element (e.g., lots of simple numeric data), it would make a lot more sense to establish a contract for the data format when it's initially requested. If, on the other hand, there are few fields that each contain large amounts of data (e.g., text) then it doesn't matter much.
If the data format is always the same (which is typical) then the client can just be designed to expect that. If not, the server could begin the exchange by providing a structure for the data it's going to transmit, and then transmit data in the established structure without the overhead of markup tags.
For a very efficient, structured data encoder, check out protocol buffers. The basic points, whether you use something like protocol buffers or lay out the data in your own standardized format, are that markup tags add a lot of overhead and are entirely unnecessary once the client and server share a contract for the format of the data being sent, and that you should break the data into manageable pieces which the client requests specifically.
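To illustrate the idea without pulling in a library, here is a hand-rolled version of the same principle: the field names are sent once as a header, then records follow as raw values with no per-record markup (the names and layout are made up for the example):

```
using System.IO;

public static class ContractWriter
{
    // The "contract": field names are written once as a header, then each
    // record is written as raw values with no per-record markup. A
    // simplified, hand-rolled version of what protocol buffers do for you.
    public static void WriteRecords(Stream output, (int Id, double Value)[] records)
    {
        using (var writer = new BinaryWriter(output))
        {
            writer.Write(2);               // number of fields
            writer.Write("Id");            // field names, sent exactly once
            writer.Write("Value");
            writer.Write(records.Length);  // record count

            foreach (var r in records)
            {
                writer.Write(r.Id);        // 4 bytes vs. "<Id>123</Id>"
                writer.Write(r.Value);     // 8 bytes vs. "<Value>3.14</Value>"
            }
        }
    }
}
```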