
Network I/O serialized

Why is network I/O serialized and not parallelized?


Well, the actual packets kind of are (they could all take different routes, for example), but at some point you're going to want a stream where you read the data out in the same order you put it in - that being a key point of TCP. How else would you do this?

You could always use separate sockets to give additional parallelism? Or have I misunderstood your meaning?
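
For instance, here is a minimal Python sketch of that idea - each thread opens its own TCP connection, so the transfers run in parallel, while the bytes within any one connection still arrive in order. The host name and paths are just placeholders:

import socket
import threading

HOST = "example.com"            # hypothetical server
PATHS = ["/a", "/b", "/c"]      # hypothetical resources

def fetch(path):
    # Each call gets its own socket, so the transfers overlap in time.
    with socket.create_connection((HOST, 80), timeout=10) as s:
        request = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        s.sendall(request.encode())
        data = b""
        while chunk := s.recv(4096):    # within one socket, bytes arrive in send order
            data += chunk
        print(path, len(data), "bytes received")

threads = [threading.Thread(target=fetch, args=(p,)) for p in PATHS]
for t in threads:
    t.start()
for t in threads:
    t.join()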

Some network protocols do offer "broadcast", but this is not always available (for example, many network devices will deliberately be configured to block UDP broadcasts).


Apples and oranges.

Serializing is when you take some structured data and flatten it into a single sequence of data that can easily be transmitted and then deserialized on the other end to recreate the original structure.

Parallelizing is when you divide a task into several sub-tasks that can be run simultaneously, and then combine their results to get the same thing as if the task had been run by a single process.

So, parallelizing cannot replace serializing, as they are used for different purposes.
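
A small Python sketch of the distinction, using json for serializing and multiprocessing for parallelizing (both chosen only for illustration):

import json
from multiprocessing import Pool

def square(x):
    return x * x

# Serializing: flatten structured data into one sequence, then rebuild it.
record = {"id": 7, "tags": ["net", "io"]}
wire_form = json.dumps(record)        # structure -> flat string
restored = json.loads(wire_form)      # flat string -> structure again
assert restored == record

# Parallelizing: split one task into sub-tasks that run simultaneously.
if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(square, range(8)))   # sub-tasks run in worker processes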


Because soldering cable connectors is more expensive than adding processor power (or adding more complicated chips for greater line speed). Compare the types of cables typically used for communication over the years:

Centronics parallel cable - 36 pins.

RS232 cable - 25 pins, then 9 pins.

Ethernet twisted pair - two pairs (4 pins).

USB cable - one pair + power.

Moreover, it is not easy to transfer several channels in parallel over wireless links or long distances.


Think of it as a stream of data. The data can be chunked and sent/received out of order; to reconstruct the original stream, the chunks have to be reordered.
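
A toy Python sketch of that idea, assuming each chunk carries a sequence number (which is essentially what TCP's byte sequence numbers provide):

import random

message = b"network I/O looks serial at the stream level"
# Split into fixed-size chunks, each tagged with a sequence number.
chunks = [(seq, message[i:i + 8]) for seq, i in enumerate(range(0, len(message), 8))]

random.shuffle(chunks)    # simulate chunks arriving out of order
# Reorder by sequence number and reconstruct the original stream.
reassembled = b"".join(data for _, data in sorted(chunks))
assert reassembled == message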


First, my present AMD machine is running Ubuntu Linux with 6 CPU cores, and "ps -ef" gave:

ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Apr18 ?        00:00:01 /sbin/init
root         2     0  0 Apr18 ?        00:00:00 [kthreadd]
root         3     2  0 Apr18 ?        00:00:00 [migration/0]
root         4     2  0 Apr18 ?        00:00:00 [ksoftirqd/0]
root         5     2  0 Apr18 ?        00:00:00 [watchdog/0]
root         6     2  0 Apr18 ?        00:00:00 [migration/1]
root         7     2  0 Apr18 ?        00:00:00 [ksoftirqd/1]
root         8     2  0 Apr18 ?        00:00:00 [watchdog/1]
root         9     2  0 Apr18 ?        00:00:00 [migration/2]
root        10     2  0 Apr18 ?        00:00:00 [ksoftirqd/2]
root        11     2  0 Apr18 ?        00:00:00 [watchdog/2]
root        12     2  0 Apr18 ?        00:00:00 [migration/3]
root        13     2  0 Apr18 ?        00:00:00 [ksoftirqd/3]
root        14     2  0 Apr18 ?        00:00:00 [watchdog/3]
root        15     2  0 Apr18 ?        00:00:00 [migration/4]
root        16     2  0 Apr18 ?        00:00:00 [ksoftirqd/4]
root        17     2  0 Apr18 ?        00:00:00 [watchdog/4]
root        18     2  0 Apr18 ?        00:00:00 [migration/5]
root        19     2  0 Apr18 ?        00:00:00 [ksoftirqd/5]
root        20     2  0 Apr18 ?        00:00:00 [watchdog/5]
root        21     2  0 Apr18 ?        00:00:00 [events/0]
root        22     2  0 Apr18 ?        00:00:00 [events/1]
root        23     2  0 Apr18 ?        00:00:00 [events/2]
root        24     2  0 Apr18 ?        00:00:00 [events/3]
root        25     2  0 Apr18 ?        00:00:00 [events/4]
root        26     2  0 Apr18 ?        00:00:00 [events/5]
root        27     2  0 Apr18 ?        00:00:00 [cpuset]
root        28     2  0 Apr18 ?        00:00:00 [khelper]
root        29     2  0 Apr18 ?        00:00:00 [async/mgr]
root        30     2  0 Apr18 ?        00:00:00 [sync_supers]
root        31     2  0 Apr18 ?        00:00:00 [bdi-default]
root        32     2  0 Apr18 ?        00:00:00 [kintegrityd/0]
root        33     2  0 Apr18 ?        00:00:00 [kintegrityd/1]
root        34     2  0 Apr18 ?        00:00:00 [kintegrityd/2]
root        35     2  0 Apr18 ?        00:00:00 [kintegrityd/3]
root        36     2  0 Apr18 ?        00:00:00 [kintegrityd/4]
root        37     2  0 Apr18 ?        00:00:00 [kintegrityd/5]
root        38     2  0 Apr18 ?        00:00:00 [kblockd/0]
root        39     2  0 Apr18 ?        00:00:00 [kblockd/1]
root        40     2  0 Apr18 ?        00:00:00 [kblockd/2]
root        41     2  0 Apr18 ?        00:00:00 [kblockd/3]
root        42     2  0 Apr18 ?        00:00:00 [kblockd/4]
root        43     2  0 Apr18 ?        00:00:00 [kblockd/5]
root        44     2  0 Apr18 ?        00:00:00 [kacpid]
root        45     2  0 Apr18 ?        00:00:00 [kacpi_notify]
root        46     2  0 Apr18 ?        00:00:00 [kacpi_hotplug]
root        47     2  0 Apr18 ?        00:00:00 [ata/0]
root        48     2  0 Apr18 ?        00:00:00 [ata/1]
root        49     2  0 Apr18 ?        00:00:00 [ata/2]
root        50     2  0 Apr18 ?        00:00:00 [ata/3]
root        51     2  0 Apr18 ?        00:00:00 [ata/4]
root        52     2  0 Apr18 ?        00:00:00 [ata/5]

From the above, you can see that a lot of the kernel processes are per CPU core - including ksoftirqd. If you look into the Linux kernel documentation, networking drivers use ksoftirqd to implement the sending out of data. So this is parallelization at the CPU core level.

At the network card, there are multiple "channels" - especially on high-speed networking cards. All of these can process reception and transmission of data at the same time - again, parallelization at the network card level. E.g.:

http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=230&idcategory=0

(look for "multi-channel").
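
On Linux you can get a rough view of this from sysfs, where each hardware queue appears as an rx-N/tx-N directory; the interface name "eth0" below is just an assumption:

import os

iface = "eth0"    # assumption: substitute your own interface name
queues = sorted(os.listdir(f"/sys/class/net/{iface}/queues"))
print(queues)     # e.g. ['rx-0', 'rx-1', ..., 'tx-0', 'tx-1', ...]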

But when it reaches the Ethernet wire, since all of them are sharing the same wire... serialization at the wire level is necessary. But then the bandwidth of the wire is usually MUCH higher than the processing rate of the CPU or the Ethernet card.
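
As a toy model of that last point: several sender threads can prepare frames in parallel, but a single shared queue standing in for the wire means transmission itself happens one frame at a time:

import queue
import threading

wire = queue.Queue()    # stands in for the single shared wire

def sender(name):
    # Frames are prepared in parallel by several threads...
    for i in range(3):
        wire.put(f"{name}-frame{i}")

senders = [threading.Thread(target=sender, args=(f"cpu{n}",)) for n in range(4)]
for t in senders:
    t.start()
for t in senders:
    t.join()

# ...but the wire transmits them one at a time, in some serial order.
while not wire.empty():
    print("tx:", wire.get())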

