
libpcap to capture 10 Gbps NIC

I want to capture packets from a 10Gbps network card with 0 packet loss. I am using libpcap for a 100Mbps NIC and it is working fine. Will libpcap be able to handle 10Gbps NIC traffic? If not, what are the other alternative ways to achieve this?


Whether or not libpcap will handle 10Gbps with 0 packet loss is a matter of the machine that you are using and the libpcap version. If the machine's CPU and HDD I/O are fast enough, you may get 0 packet loss. Otherwise you may need to perform the following actions:

  • Update your libpcap to the most recent version. libpcap 1.0.0 or later supports the zero-copy (memory-mapped) mechanism. It means that there is a buffer that's in both the kernel's address space and the application's address space, so that data doesn't need to be copied from a kernel-mode buffer to a user-mode buffer. Packets are still copied from the skbuff (Linux) into the shared buffer, so it's really more like "one-copy", but that's still one fewer copy, so it can reduce the CPU time required to receive captured packets. Moreover, more packets can be fetched from the buffer per application wake-up call.

  • If you observe high CPU utilization, it is probably your CPU that cannot handle the packet arrival rate. You can use xosview (a system load visualization tool) to check your system resources during the capture.

  • If the CPU is dropping packets, you can use PF_RING. PF_RING is an extension of libpcap with a circular buffer: http://www.ntop.org/products/pf_ring/. It is way faster and can capture at 10Gbps with commodity NICs: http://www.ntop.org/products/pf_ring/hardware-packet-filtering/. (A sketch of its native API follows this list.)

  • Another approach is to get a NIC that has on-board memory and a hardware design specific to packet capturing; see http://en.wikipedia.org/wiki/DAG_Technology.

  • If the CPU is no longer your problem, you need to test the disk's data transfer speed. hdparm is the simplest tool on Linux. Some distros come with a GUI; otherwise: $ sudo hdparm -tT /dev/hda
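
For the PF_RING route, a minimal native-API sketch might look like the following. The interface name and snapshot length here are placeholder assumptions; note also that PF_RING ships a libpcap-compatible layer, so pcap-based code (like the sketch further down) works unchanged when linked against it.

    /* Minimal PF_RING native-API capture sketch. Assumes PF_RING is
     * installed and its kernel module loaded; "eth0" is a placeholder. */
    #include <pfring.h>
    #include <stdio.h>

    int main(void)
    {
        pfring *ring = pfring_open("eth0", 1536 /* snaplen */, PF_RING_PROMISC);
        if (ring == NULL) { perror("pfring_open"); return 1; }

        pfring_enable_ring(ring);

        u_char *pkt;
        struct pfring_pkthdr hdr;
        /* buffer_len == 0 requests a zero-copy pointer into the ring */
        while (pfring_recv(ring, &pkt, 0, &hdr, 1 /* block */) > 0)
            printf("got %u bytes\n", hdr.len);

        pfring_close(ring);
        return 0;
    }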

If you are developing your own application based on libpcap:

  • Use pcap_stats to identify (a) the number of packets dropped because there was no room in the operating system's buffer when they arrived, i.e. because packets weren't being read fast enough, and (b) the number of packets dropped by the network interface or its driver.

  • libpcap 1.0.0 and later have an API that lets an application set the capture buffer size, on platforms where the buffer size can be set. If you find it hard to choose a size, note that in libpcap 1.1.0 and later the default capture buffer size was increased from 32K to 512K. And if you are just using tcpdump, use 4.0.0 or later and pass the -B flag to set the buffer size. (See the sketch after this list.)
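
Putting those two items together, here is a minimal sketch assuming libpcap 1.0.0 or later; the interface name "eth0" and the 64 MiB buffer size are illustrative choices, not recommendations:

    /* Set a large capture buffer, then report drop counters via pcap_stats. */
    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_create("eth0", errbuf);       /* placeholder interface */
        if (p == NULL) { fprintf(stderr, "pcap_create: %s\n", errbuf); return 1; }

        pcap_set_snaplen(p, 65535);
        pcap_set_promisc(p, 1);
        pcap_set_timeout(p, 1000);                     /* 1 s read timeout */
        pcap_set_buffer_size(p, 64 * 1024 * 1024);     /* 64 MiB, before activate */

        if (pcap_activate(p) < 0) {
            fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
            return 1;
        }

        /* ... run your capture loop (pcap_loop/pcap_dispatch) here ... */

        struct pcap_stat st;
        if (pcap_stats(p, &st) == 0)                   /* (a) ps_drop, (b) ps_ifdrop */
            printf("received=%u dropped_by_kernel=%u dropped_by_iface=%u\n",
                   st.ps_recv, st.ps_drop, st.ps_ifdrop);

        pcap_close(p);
        return 0;
    }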


You don't say which operating system or CPU you have. Whether or not you choose libpcap, the underlying capture performance is still bounded by the operating system's memory management and its network driver. libpcap has kept pace and can handle 10Gbps, but there's more.

If you want the best CPU so that you can do number-crunching and run virtual machines while capturing packets, go with an AMD Opteron CPU, which still outperforms the Intel Xeon quad-core 5540 2.53GHz (despite Intel's XIO/DDIO introduction, and mostly because of Intel's dual-core sharing of the same L2 cache). For the best ready-made OS, go with the latest FreeBSD as-is (which still outperforms Linux 3.10 networking on basic hardware). Otherwise, Intel and Linux will work just fine for basic drop-free 10Gbps capture, provided you are eager to roll up your sleeves.

If you're pushing for breakneck speed all the time while also doing financial-like, stochastic, or large-matrix predictive computational crunching (or something like that), then read on...

As RedHat discovered, 67.2 nanoseconds is all the time you get to process one minimal-sized packet at the 10Gbps rate. I assert it's closer to 81.6 nanoseconds for a 64-byte Ethernet payload, but they are treating the 46-byte minimum payload as the theoretical case.
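
The arithmetic behind those figures: a minimal 64-byte frame (46 bytes of payload plus 18 bytes of headers and CRC), together with the 8-byte preamble and the 12-byte inter-frame gap, occupies 84 bytes = 672 bits on the wire, and 672 bits / 10 Gbit/s = 67.2 ns. A frame carrying a 64-byte payload is 64 + 18 = 82 bytes, so 82 + 20 = 102 bytes = 816 bits, i.e. 81.6 ns per packet.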

To cut it short, you WON'T be able to DO or USE any of the following if you want 0% packet drop at full rate, because you must stay under 81.6 ns per packet:

  • An SKB call for each packet (to minimize that overhead, amortize it over several hundred packets)
  • TLB misses (Translation Lookaside Buffer; to avoid them, use HUGE page allocations)
  • Short latency (you did say 'capture', so latency is irrelevant here); turn on interrupt coalescing instead (ethtool -C <iface> rx-frames 1024 or more)
  • Processes floating across CPUs (lock them down, one per network interface interrupt; see the affinity sketch after this list)
  • libc malloc() (replace it with a faster one, preferably a HUGE-page-based one)
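
For the pinning item, a minimal sketch using sched_setaffinity; core number 2 is an arbitrary example, and in practice you would pair this with steering the NIC's RX interrupt to the same core via /proc/irq/<N>/smp_affinity:

    /* Pin the capture process to one core so it never floats across CPUs. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                       /* core 2: illustrative choice */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* ... run the capture loop here, now bound to core 2 ... */
        return 0;
    }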

So, Linux has an edge over FreeBSD when you want to capture at the 10Gbps rate with a 0% drop rate AND run several virtual machines (and other overheads). It just requires a new memory management (MM) of some sort for a specific network device, not necessarily for the whole operating system. Most new super-high-performance network drivers now have devices use HUGE memory pages allocated in userland, then use driver calls to pass a bundle of packets at a time; a hugepage-allocation sketch follows.
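
As an illustration of the userland hugepage idea, a sketch only; it assumes hugepages have already been reserved (e.g. by writing to /proc/sys/vm/nr_hugepages):

    /* Allocate one 2 MiB hugepage as a userland packet buffer. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;           /* one 2 MiB hugepage */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

        /* ... hand this region to the driver or use it as a packet ring ... */

        munmap(buf, len);
        return 0;
    }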

Many new network drivers with repurposed MM are out (in no particular order):

  • netmap
  • PF-RING
  • PF-RING+netmap
  • OpenOnload
  • DPDK
  • PacketShader

The maturity level of each codebase is highly dependent on which Linux (or distro) version you choose. I've tried a few of them, and once I understood the basic design, it became apparent what I needed. YMMV.

Updated: white paper on high-speed packet architecture: https://arxiv.org/pdf/1901.10664.pdf

Good luck.


PF_RING is a good solution; an alternative can be netsniff-ng (http://netsniff-ng.org/). For both projects the performance gain is achieved by zero-copy mechanisms. Obviously, the bottleneck can then be the HD and its data transfer rate.


If you have the time, then move to Intel DPDK. It allows zero-copy access to the NIC's hardware registers. I was able to achieve 0% drops at 10Gbps, 1.5Mpps, on a single core. You'll be better off in the long run; a rough sketch of a DPDK receive loop follows.
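
This is only a sketch of the shape of a DPDK capture loop, not a drop-in program: it assumes hugepages are configured and the NIC is bound to a DPDK-compatible driver, and port 0, the pool sizes, and the burst size are illustrative choices. Exact APIs vary between DPDK versions.

    /* Poll-mode receive loop: DPDK bypasses the kernel and interrupts. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_debug.h>

    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        struct rte_mempool *pool = rte_pktmbuf_pool_create("POOL", 8191, 256,
                0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        uint16_t port = 0;                       /* first bound port: example */
        struct rte_eth_conf conf = {0};          /* default port configuration */
        rte_eth_dev_configure(port, 1, 1, &conf);
        rte_eth_rx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
                               NULL, pool);
        rte_eth_tx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
                               NULL);            /* some drivers want a TX queue */
        rte_eth_dev_start(port);
        rte_eth_promiscuous_enable(port);

        struct rte_mbuf *bufs[BURST_SIZE];
        for (;;) {
            uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(bufs[i]);       /* inspect/count, then recycle */
        }
        return 0;
    }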
