Web Services: more frequent "small" calls, or less frequent "big" calls
In general, is it better to have a web application make lots of calls to a web service getting smaller chunks of data back, or to have the web app make fewer calls and get larger chunks of data?
In particular, I'm building a Silverlight app that needs to get large amounts of data back from the server in response to a query created by a user. Each query could return anywhere from a few hundred records to a few thousand. Each record has around thirty fields of mostly decimal-type data. I've run into the situation before where the payload size of the response exceeded the maximum allowed by the service.
I'm wondering whether it's better (more efficient for the server/client/web service) to cut this payload vertically--getting all values for a single field with each call--or horizontally--getting batches of complete records with each call. Or does it matter?
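For concreteness, here is a rough sketch of what the two slicing strategies might look like on the wire. The field names are made up, since the real record layout isn't shown:

```typescript
// Illustrative only: the field names below are assumptions, not the real schema.

// "Horizontal" slicing: each call returns a batch of complete records.
interface QueryRecord {
  id: number;
  price: number;     // ~30 mostly decimal-type fields in the real data
  quantity: number;
  // ...remaining fields
}
interface HorizontalResponse {
  records: QueryRecord[];   // e.g. records 1-500 of the result set
}

// "Vertical" slicing: each call returns every value for a single field.
interface VerticalResponse {
  field: string;            // e.g. "price"
  values: number[];         // one entry per record in the full result set
}
```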
I have found, surprisingly, that hauling down two or even four times as much data as you need is almost always faster than doubling the number of calls in order to provide better filters.
I found that I can transfer 200K in the same time it takes to issue one new request. Most finely tuned requests wanted about 3K, while coarse ones wanted maybe 50-100K.
If you exceed the maximum payload size, bump the limit or build a streaming CGI on the server; that works better.
It definitely matters. You don't want the web service to be chatty - for small requests, you may well find the latency ends up being longer than the time taken to process the request. On the other hand, obviously you don't want to return data you don't need. There's a balance to be found, which usually involves the client being able to specify what they want from the server in a reasonably detailed fashion - or to be able to effectively send a "batch" of requests in one single web request.
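One way to cut chattiness without over-fetching is to bundle several logical queries into a single HTTP request. A minimal sketch, assuming a hypothetical /api/batch endpoint and query shape (not an existing API of any particular framework):

```typescript
// Sketch of sending several logical queries in one HTTP round trip.
// The /api/batch endpoint and the BatchQuery shape are assumptions.

interface BatchQuery {
  entity: string;    // what to fetch
  fields: string[];  // only the columns the client actually needs
  filter?: string;   // optional server-side filter
}

async function sendBatch(queries: BatchQuery[]): Promise<unknown[]> {
  const response = await fetch("/api/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ queries }),
  });
  if (!response.ok) {
    throw new Error(`Batch request failed: ${response.status}`);
  }
  // One response body carrying one result per query, in order.
  return (await response.json()) as unknown[];
}
```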
Getting a smaller dataset means the user waits less time for each response, which is good.
But getting small datasets means they will have to wait more often.
One possibility would be to use a combination of both approaches, for example:
- For the first request, get only a small dataset, so some information can be displayed quickly;
- for the following requests, get bigger datasets, to make fewer HTTP requests;
- that way, while those (somewhat longer) requests are in flight, the user already has something to read (see the sketch after this list).
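A minimal sketch of that combined approach, assuming a hypothetical paged /api/results endpoint with offset/limit parameters and made-up page sizes:

```typescript
// Assumption: the server exposes /api/results?offset=&limit= returning a JSON array.

async function fetchPage(offset: number, limit: number): Promise<unknown[]> {
  const response = await fetch(`/api/results?offset=${offset}&limit=${limit}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return (await response.json()) as unknown[];
}

// Small first page for a fast first paint, then larger pages to cut round trips.
async function loadResults(
  totalRecords: number,
  onPage: (rows: unknown[]) => void,
): Promise<void> {
  const firstPageSize = 50;   // small: the user sees something quickly
  const nextPageSize = 500;   // larger: fewer HTTP requests for the rest

  let offset = 0;
  onPage(await fetchPage(offset, firstPageSize));
  offset += firstPageSize;

  while (offset < totalRecords) {
    onPage(await fetchPage(offset, nextPageSize));
    offset += nextPageSize;
  }
}
```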
I would say do something like paging (show X records at a time). This would result in "small" calls, as the user would only request data as they need it. It really depends on your situation: do they need to view all the data at once, or would seeing small chunks of it at a time be okay?
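A sketch of that on-demand style, again assuming a hypothetical offset/limit endpoint; a page is downloaded only when the user navigates to it:

```typescript
const PAGE_SIZE = 100; // arbitrary; tune to what one screen of the grid shows

// Download only the page the user asked for, nothing more.
async function fetchResultsPage(pageNumber: number): Promise<unknown[]> {
  const offset = pageNumber * PAGE_SIZE;
  const response = await fetch(`/api/results?offset=${offset}&limit=${PAGE_SIZE}`);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return (await response.json()) as unknown[];
}

// Usage: call from the grid's page-change handler, e.g.
//   fetchResultsPage(nextPage).then(rows => displayRows(rows));
```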
Just remember: small chunks = less waiting for some data.