How can I improve my multithreading usage to decrease run time in Python?
Machine Info
cpu_num 8 CPUs
cpu_speed 2826 MHz
mem_total 8173980 KB
swap_total 16777208 KB
Benchmarking
When I increase the number of threads, the performance gains I get look like this (the numbers are averaged over 10 runs):
Number of Threads    Time (s)
1 1.322187
2 0.789151
3 0.72232
5 0.613691
10 0.558912
40 0.531966
Snapshot from top while running the code:
top - 01:40:42 up 7 days, 13:24, 9 users, load average: 0.34, 0.22, 0.27
Tasks: 364 total, 2 running, 362 sleeping, 0 stopped, 0 zombie
Cpu(s): 28.2%us, 0.1%sy, 0.0%ni, 71.5%id, 0.0%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 8173980k total, 7437664k used, 736316k free, 224748k buffers
Swap: 16777208k total, 149244k used, 16627964k free, 6374428k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20365 ben.long 15 0 723m 208m 4224 S 226.2 2.6 0:37.28 python26
19948 ben.long 15 0 10996 1256 764 R 0.7 0.0 0:03.84 top
4420 ben.long 15 0 106m 3776 1360 R 0.0 0.0 0:03.06 sshd
4421 ben.long 15 0 64320 1628 1180 S 0.0 0.0 0:00.07 bash
4423 ben.long 15 0 64320 1596 1180 S 0.0 0.0 0:00.03 bash
19949 ben.long 15 0 64308 1552 1136 S 0.0 0.0 0:00.01 bash
Code
The stripped-down code looks like this:
from threading import Thread

class testit(Thread):
    def __init__(self, i):
        Thread.__init__(self)

    def run(self):
        some_task()  # do processor-heavy task

num_threads_to_use = 10
thread_list = []
for i in range(num_threads_to_use):
    current = testit(i)
    thread_list.append(current)
    current.start()

for thread in thread_list:
    thread.join()
Questions
- Should I be using the multiprocessing module instead of the threading module?
- Is there a way to improve the solution above?
The reason for the non-linear performance gain as the number of threads approaches the number of cores probably lies in this line:
some_task()  # do processor-heavy task
The GIL is released around I/O-heavy operations; if some_task() is CPU-bound, your threads just hold the GIL one at a time, sacrificing the benefit of threading (and perhaps losing time to extra context switches).
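If that is the cause, adding threads cannot make some_task() faster, only more contended. A quick way to confirm it on your machine is a timing sketch like this (cpu_task is a made-up stand-in for your some_task; on CPython the two-thread run should take about as long as, or longer than, the one-thread run):

import time
from threading import Thread

def cpu_task():
    # pure-Python CPU-bound loop; holds the GIL the whole time
    total = 0
    for i in xrange(10 ** 7):  # use range() on Python 3
        total += i

def timed_run(num_threads):
    threads = [Thread(target=cpu_task) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print("1 thread:  %.2fs" % timed_run(1))
print("2 threads: %.2fs" % timed_run(2))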
From http://docs.python.org/c-api/init.html:
The Python interpreter is not fully thread-safe. In order to support multi-threaded Python programs, there’s a global lock, called the global interpreter lock or GIL, that must be held by the current thread before it can safely access Python objects. Without the lock, even the simplest operations could cause problems in a multi-threaded program: for example, when two threads simultaneously increment the reference count of the same object, the reference count could end up being incremented only once instead of twice.
Therefore, the rule exists that only the thread that has acquired the global interpreter lock may operate on Python objects or call Python/C API functions. In order to support multi-threaded Python programs, the interpreter regularly releases and reacquires the lock — by default, every 100 bytecode instructions (this can be changed with sys.setcheckinterval()). The lock is also released and reacquired around potentially blocking I/O operations like reading or writing a file, so that other threads can run while the thread that requests the I/O is waiting for the I/O operation to complete.
I might be wrong, but my guess is that threads share the same GIL while separate processes each get their own. Try the multiprocessing module instead.
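A minimal sketch of that swap, keeping your class structure (some_task is still your placeholder; on Windows the process-spawning code would additionally need an if __name__ == '__main__' guard):

from multiprocessing import Process

class testit(Process):
    def __init__(self, i):
        Process.__init__(self)

    def run(self):
        some_task()  # do processor-heavy task, now in its own process with its own GIL

num_processes_to_use = 8  # one per core; more buys little for CPU-bound work
process_list = []
for i in range(num_processes_to_use):
    current = testit(i)
    process_list.append(current)
    current.start()

for process in process_list:
    process.join()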
If you are doing a CPU-intensive task, the only way to speed it up in CPython is to use multiple processes; see the sketch after the list below.
Other alternatives are:
- Use a different implementation of Python, e.g. IronPython or Jython
- Write the CPU-intensive code as a C extension module
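For the multiple-processes route, multiprocessing.Pool (available since Python 2.6, the interpreter your top output shows) is often simpler than managing processes by hand. A minimal sketch, assuming some_task can be rewritten as a module-level function that takes an index:

from multiprocessing import Pool

def some_task(i):
    return i  # placeholder for the processor-heavy task

if __name__ == '__main__':
    pool = Pool(processes=8)                   # one worker per core
    results = pool.map(some_task, range(10))   # blocks until all tasks finish
    pool.close()
    pool.join()

Unlike the threaded version, this should scale close to linearly up to the number of physical cores, after which adding workers only adds scheduling overhead.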