
Transferring large data in a data migration

We have a set of applications that transfer data from one system to another within the same network. They use WCF and C#, and transfer roughly 1000 objects at a time on a continuous basis (after receiving a batch, the receiver application notifies the sender to send more data).

I want to optimize the process so that more data moves in a given time, and I am weighing which of the following two options is better:

  1. Increase the batch size significantly -- the bigger the better, say 50,000 objects. Doing the transfer in bulk reduces the per-object overhead of moving data from user process space to the network card.

  2. Keep each transfer just under 1460 bytes, which is our network path MTU. For example, if an object is 100 bytes, transfer 12 objects, leaving some allowance for the HTTP and SOAP headers. This avoids reassembly of TCP segments on the receiving side and sidesteps excessive RAM usage.
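To compare the two options, it helps to model the cost of a stop-and-wait batch protocol like the one described (send a batch, wait for the receiver's acknowledgement, repeat). The sketch below is a hypothetical back-of-envelope model, not a measurement of the actual WCF applications; the round-trip and per-object costs are assumed numbers chosen only to illustrate how batch size interacts with the per-batch acknowledgement delay.

```python
import math

def transfer_time(total_objects, batch_size, rtt_s=0.005, per_object_s=0.00002):
    """Estimate wall-clock time for a stop-and-wait batch transfer.

    Each batch costs one network round trip (the receiver's "send more"
    ack) plus a fixed per-object serialization/copy cost. Both cost
    figures are illustrative assumptions, not measured values.
    """
    batches = math.ceil(total_objects / batch_size)
    return batches * rtt_s + total_objects * per_object_s

# Moving 1,000,000 objects:
mtu_sized = transfer_time(1_000_000, 12)      # option 2: MTU-sized batches
bulk      = transfer_time(1_000_000, 50_000)  # option 1: large bulk batches
```

Under this model the MTU-sized batches spend almost all their time waiting on per-batch acknowledgements (tens of thousands of round trips), while the bulk batches amortize that cost over far more objects -- which suggests the round-trip count, not segment reassembly, is likely the dominant factor here.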

Can you please tell me which option is better, or suggest another idea to speed up the transfer?


I don't think your problem is the speed of transferring to the network card.

Please tell us more about this data migration:

  - Is it coming from a flat file?
  - Are you changing the data at all?
  - What type of data do you need at the other end?

I'm guessing, but I expect you must be changing the data. If you just need to move it, there are many fast ways to move a big file (e.g., FTP or some other standard tool).

If you are changing the data, that processing is probably what is slowing you down -- run a profiler on your program and optimize it.
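As a concrete illustration of the profiling step, here is a minimal sketch using Python's built-in `cProfile` (the original applications are WCF/C#, where a tool like Visual Studio's profiler would play the same role). The `migrate_batch` function is a hypothetical placeholder for whatever transform/serialize step the migration performs:

```python
import cProfile
import io
import pstats

def migrate_batch(objects):
    # Placeholder for the real per-object transform/serialize work.
    return [str(o).encode() for o in objects]

profiler = cProfile.Profile()
profiler.enable()
migrate_batch(range(10_000))
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The point is simply to find out where the wall-clock time actually goes before tuning batch sizes: if the transform dominates, no amount of network tuning will help.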

