python & zeroMQ -- Capacity to Handle Messages of Large Size?
I'd like to use python to build an app that is similar to zeroMQ's ventilator / sink scheme.
Suppose that we have 10 Workers, all running on the same multi-core server.
Let's say that every 2[sec] or so, each Worker pushes to the Sink a message of size 5[MB]. So, the Sink has to handle a total of 50[MB] ( = 10 x 5[MB] ) every 2[sec] or so.
Had the 10 Workers been on different machines, I know that the network could have been a potential bottleneck.
Had the 10 Workers had to write their data to disk (I/O), I know that the disk could have been a potential bottleneck.
Given the fact that all the 10 Workers are on the same machine, what bottlenecks should one expect?
For example, can the same 10 Workers each push a message of size 10[MB] every 2[sec] or so? Can they push a message of size 20[MB] every 2[sec] or so?
What are zmq's limitations? What types of bottlenecks should one expect when using python and zeroMQ in a Linux environment?
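For reference, here is a minimal sketch of the ventilator / sink scheme I have in mind, assuming plain PUSH/PULL sockets over loopback TCP (the port number is arbitrary; the 5[MB] payload and 2[sec] interval are just the numbers above):
# worker -- run 10 of these on the same box
import time
import zmq
context = zmq.Context()
pusher = context.socket(zmq.PUSH)
pusher.connect('tcp://127.0.0.1:5556')   # port is an arbitrary choice
payload = b'x' * (5 * 1024 * 1024)       # 5[MB] message
while True:
    pusher.send(payload)
    time.sleep(2)                        # push every 2[sec] or so
# sink -- collects everything the workers push
import zmq
context = zmq.Context()
puller = context.socket(zmq.PULL)
puller.bind('tcp://127.0.0.1:5556')
while True:
    puller.recv()                        # ~50[MB] arriving every 2[sec] in total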
Using PUSH/PULL on the same server I've been able to max out writing to a raid array at ~400MB/sec (bottle-necked by the disk's write speed). There are 10GbE benchmark results here. I'd suggest constructing some simple benchmarks; performance is going to depend on a lot of factors like message format, size, etc.
For example, a completely trivial benchmark on my machine shows zeromq pushing 10MB messages at roughly 12.3 ms per send:
# server
import zmq
context = zmq.Context()
receiver = context.socket(zmq.PULL)      # PULL side of the PUSH/PULL pair
receiver.bind('tcp://127.0.0.1:5555')
while True:
    receiver.recv()                      # drain incoming messages as fast as possible
# client
import zmq
context = zmq.Context()
pusher = context.socket(zmq.PUSH)        # PUSH side, connects to the PULL server
pusher.connect('tcp://127.0.0.1:5555')
message = ' ' * 10485760                 # 10MB payload
>>> %timeit pusher.send(message)
100 loops, best of 3: 12.3 ms per loop
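Note that %timeit above only measures how long pusher.send() takes to hand the message off to zeromq's I/O thread, not how fast the bytes arrive at the other end. A rough receiver-side check of end-to-end throughput could look like this (the message count is arbitrary, and the client just loops pusher.send(message) 100 times):
# receiving side of a rough end-to-end benchmark (counts and sizes are arbitrary)
import time
import zmq
context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.bind('tcp://127.0.0.1:5555')
receiver.recv()                          # first message just starts the clock
start = time.time()
total = 0
for _ in range(99):
    total += len(receiver.recv())
elapsed = time.time() - start
print('%.1f MB in %.2f s (%.1f MB/s)' % (total / 1e6, elapsed, total / elapsed / 1e6))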