
Best way to generate a million TCP connections

I need to find the best way to generate a million TCP connections (more is better, fewer is worse), as quickly as the machine can manage :D

Why do I need this? I am testing a NAT, and I want to load it with as many entries as possible.

My current method is to configure a subnet on a dummy Ethernet interface and connect serially from that dummy interface, through the real Ethernet interface, across the LAN, through the NAT, to the host.

subnetnicfake----routeToRealEth----RealEth---cable---lan----nat---host.   
|<-------------on my machine-------------------->|
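The point of owning a whole subnet on the dummy interface is that each extra source address multiplies the usable source-port space, so the NAT sees a distinct entry per (source IP, source port) pair. A minimal sketch of that bookkeeping (the subnet, port range, and `connect_from` helper are illustrative assumptions, not part of the original setup):

```python
import ipaddress
import socket

def source_tuples(subnet: str, ports_per_ip: int):
    """Enumerate the (source IP, source port) pairs available for outbound
    connections once the dummy interface owns the whole subnet; each pair
    can occupy one NAT entry."""
    for ip in ipaddress.ip_network(subnet).hosts():
        for port in range(49152, 49152 + ports_per_ip):
            yield str(ip), port

def connect_from(src_ip: str, src_port: int, dst) -> socket.socket:
    """Bind to an explicit source address before connecting, so consecutive
    connections come from different tuples. Requires src_ip to actually be
    configured on a local interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((src_ip, src_port))
    s.connect(dst)
    return s

# A /22 (1022 usable hosts) with ~1000 ports each already exceeds a million.
count = sum(1 for _ in source_tuples("10.99.0.0/22", 1000))
print(count)   # → 1022000
```

The generator only enumerates candidate tuples; a real driver would walk it and call `connect_from` until the NAT table fills.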


One million simultaneous TCP sessions might be difficult: if you rely on the standard connect(2) sockets API to create the connections, you're going to use a lot of physical memory: each session requires a struct inet_sock, which includes a struct sock, which includes a struct sock_common.

I quickly guessed at sizes: struct sock_common requires roughly 58 bytes, struct sock roughly 278 bytes, and struct inet_sock roughly 70 bytes.

That's 387 megabytes of data before you have receive and send buffers. (See tcp_mem, tcp_rmem, tcp_wmem in tcp(7) for some information.)

If you choose to go this route, I'd suggest setting the per-socket memory controls as low as they go. I wouldn't be surprised if 4096 is the lowest you can set it. (SK_MEM_QUANTUM is PAGE_SIZE, stored into sysctl_tcp_rmem[0] and sysctl_tcp_wmem[0].)
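From userspace, the per-socket equivalent is setting SO_RCVBUF/SO_SNDBUF before connecting. A sketch, assuming 4096 really is the floor (the kernel is free to round the request up, and Linux doubles it to account for bookkeeping overhead):

```python
import socket

def make_small_socket() -> socket.socket:
    """A TCP socket with its buffers forced down toward the assumed
    4096-byte floor, so a million of them pin as little kernel memory
    as possible."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Must be set before connect(); the effective value the kernel
    # actually granted can be read back with getsockopt().
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
    return s

s = make_small_socket()
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```

Reading the value back matters: the number you requested and the number you got are rarely the same.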

That's another eight gigabytes of memory -- four for receive buffers, four for send buffers.
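The arithmetic behind both figures, using the struct-size guesses above (these are estimates, not authoritative kernel numbers):

```python
# Back-of-envelope memory totals for one million TCP sessions.
SESSIONS = 1_000_000

struct_bytes = 58 + 278 + 70     # sock_common + sock + inet_sock (guessed)
buffer_bytes = 4096 + 4096       # minimal receive + send buffers

struct_total = SESSIONS * struct_bytes
buffer_total = SESSIONS * buffer_bytes

print(struct_total // 2**20)     # ≈ 387 MiB of socket structures
print(buffer_total)              # 8_192_000_000 bytes ≈ 8 GB of buffers
```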

And that's in addition to what the system requires for your programs to open one million file descriptors. (See /proc/sys/fs/file-max in proc(5).)
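The per-process descriptor limit needs raising too, not just the system-wide file-max. A sketch using the resource module (the 1,050,000 headroom figure is an assumption):

```python
import resource

# A million sockets needs a million-plus file descriptors. The soft limit
# can be raised up to the hard limit; going beyond that needs root and a
# large enough /proc/sys/fs/file-max.
WANTED = 1_050_000   # a million sockets plus assumed headroom

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < WANTED:
    target = WANTED if hard == resource.RLIM_INFINITY else min(WANTED, hard)
    try:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    except ValueError:
        pass  # hard limit too low; needs root and sysctl changes first

print(resource.getrlimit(resource.RLIMIT_NOFILE))
```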

All of this memory is not swappable -- the kernel pins it -- so you're really only approaching this problem on a 64-bit machine with at least eight gigabytes of memory. Probably 10-12 GB would do better.

One approach taken by the Paketto Keiretsu tools is to open a raw socket, perform all the TCP three-way handshakes over that single raw socket, and compute whatever is needed rather than store it, in order to handle much larger numbers of connections than usual. Store as little as possible per connection, and don't use naive lists or trees of structures.

The Paketto Keiretsu tools were last updated around 2003, so they still might not scale into the million range well, but they would definitely be my starting point if this were my problem to solve.


After searching for many days, I found the answer. Apparently this problem is well studied, and it should be, since it is so fundamental. My problem was that I didn't know what it was called. Among those in the know, it is apparently called the C10K problem. What I want is the C1M problem, but there has been some effort toward C500K, i.e. 500k concurrent connections.

http://www.kegel.com/c10k.html and http://urbanairship.com/blog/2010/09/29/linux-kernel-tuning-for-c500k/

@deadalnix: read the links above and enlighten yourself.


Have you tried using tcpreplay? You could prepare - or capture - one or more PCAP network capture files with the traffic that you need, and have one or more instances of tcpreplay replay them to stress-test your firewall/NAT.


As long as TCP gives you only 65,536 ports per source address, this is impossible to achieve from a single address unless you have an army of servers to connect to.

So, then, what is the best way? Just open as many connections as you can on your servers and see what happens.
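"Open as many as you can and see what happens" can be sketched as a loop that connects until the OS refuses. This demo targets a throwaway local listener and caps itself at 50 connections; against a real NAT'd host you would drop the cap and point `dst` across the NAT (the cap and listener are assumptions for the sketch):

```python
import socket

def open_until_limit(dst, cap):
    """Keep opening connections to dst until the OS says no (fd limit,
    ephemeral-port exhaustion, ...) or the cap is hit; return the open
    sockets so they stay alive and keep their NAT entries."""
    conns = []
    try:
        while len(conns) < cap:
            conns.append(socket.create_connection(dst, timeout=5))
    except OSError:
        pass
    return conns

# Demo against a local listener instead of a real NAT'd host.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(128)

conns = open_until_limit(srv.getsockname(), cap=50)
print(len(conns))   # → 50 (well under the backlog and fd limit here)

for c in conns:
    c.close()
srv.close()
```

Holding the returned sockets open is the important part: a closed connection's NAT entry times out, which defeats the purpose of the test.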
