boost::asio threadpool vs. io_service_per_cpu design
I'm currently unsure: I'm trying to build a high-performance server on a 6-core CPU, so if I used the "io_service per CPU" design, I would have 6 io_services.
I have already heard that the thread-pool design isn't the best one, but I'm not sure about that. What experience do you have? Has anyone run a stress test comparing the two, or anything similar?
In my experience it is vastly easier to approach asynchronous application design in the following order:
- a single thread and a single io_service
- multiple threads, each invoking io_service::run() from a single io_service. Use strands for handlers that require access to shared data structures.
- an io_service per CPU
The decision to move between these designs should be made only after profiling your application. Note that the HTTP Server 2 example only shows how to use an io_service per CPU; it does not show you when or why to use such a design.
Another good way to approach this: start multiple copies of your process and bind each one to a different core using your OS's facilities. On FreeBSD, use cpuset. The OS is going to do a better job of scheduling than any userland code will. You then need an external load balancer to distribute load across your server instances. Extra points for binding the NIC interrupts to a particular CPU.