How do I program so that all the processors on my machine are used?

I am running a single-threaded Python program that performs massive data processing on my Windows box. My machine has 8 processors. When I monitor the CPU usage on the Performance tab of Windows Task Manager, it shows that I am using only a very small fraction of the processing power available to me. Only one processor is being used to the fullest and all the rest are almost idle. What should I do to ensure that all my processors are used? Is multithreading a solution?


In CPython, multithreading cannot make use of extra processors or cores for CPU-bound code, because of the Global Interpreter Lock.

You should spawn new processes instead of new threads.

This tool is by far the simplest of any I have come across: Parallel Python.

Overview:

PP is a Python module which provides a mechanism for parallel execution of Python code on SMP (systems with multiple processors or cores) and clusters (computers connected via network).

It is lightweight, easy to install, and integrates with other Python software.

PP is an open-source, cross-platform module written in pure Python.
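
For reference, here is a minimal sketch of the usual PP pattern: create a Server, submit jobs, and call each job object to collect its result. The count_multiples function and the chunk sizes here are made up purely for illustration; it assumes the pp module is installed.

import pp

def count_multiples(start, end, k):
    # Stand-in CPU-bound work: count the multiples of k in [start, end).
    return sum(1 for n in range(start, end) if n % k == 0)

job_server = pp.Server()  # autodetects the number of CPUs by default
# Split the work into one chunk per core and submit each chunk as a job.
jobs = [job_server.submit(count_multiples, (i * 1000000, (i + 1) * 1000000, 7))
        for i in range(8)]
print(sum(job() for job in jobs))  # calling a job blocks until its result is ready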


Within a single process, multithreading is required to use multiple cores, but it is not necessarily sufficient: processor affinity can restrict the process to a subset of the available cores, even if you have more than enough threads to use them all. One way to check is sketched below.
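
For example, the third-party psutil package can read and change a process's affinity (a hedged sketch; psutil is not part of the standard library, and cpu_affinity is supported on Windows and Linux but not everywhere):

import psutil  # third-party: pip install psutil

p = psutil.Process()  # the current process
print("CPUs on this machine:", psutil.cpu_count())
print("CPUs this process may run on:", p.cpu_affinity())

# Lift any restriction by allowing the process to run on every CPU.
p.cpu_affinity(list(range(psutil.cpu_count())))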


As an addition to what Jon said, if you're using the standard Python interpreter you should understand its limitations with respect to multi-threading. If your threads are pure Python and aren't making system calls, they can't run concurrently on multiple processors due to the Global Interpreter Lock, so the benefits of multi-threading are minimal. In this case, the recommendation would be to go with multiple processes instead, or to switch to another Python implementation such as Jython or IronPython, which do not have a Global Interpreter Lock.
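
A quick way to see the GIL in action on CPython: time a pure-Python, CPU-bound loop with one thread and then with four. On a multi-core machine the four-thread run takes roughly four times as long as the single-thread run, because only one thread can execute Python bytecode at a time. (The burn function below is just throwaway busywork for the demonstration.)

import threading
import time

def burn():
    # Pure-Python, CPU-bound loop: it rarely releases the GIL.
    n = 0
    for _ in range(10 ** 7):
        n += 1

def timed(num_threads):
    threads = [threading.Thread(target=burn) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print("1 thread : %.2fs" % timed(1))
print("4 threads: %.2fs" % timed(4))  # no speedup on CPython, despite the extra cores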


You can get that if your program is of the type that would benefit from Python's multiprocessing module.

multiprocessing uses multiple Python processes, which avoids problems with the GIL, so it is possible to use all of those cores with Python code. It has an easy parallel map (Pool.map) and provides the basis for more complex schemes.

It is similar to Parallel Python, but it is limited to the local machine, is included with Python 2.6 and higher, and its API deliberately mirrors Python's threading module.
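
As a rough sketch of that map, with a toy square function standing in for your real per-item work (the if __name__ == '__main__' guard matters on Windows, where the worker processes re-import the script):

from multiprocessing import Pool

def square(n):
    # Stand-in for your real per-item processing.
    return n * n

if __name__ == '__main__':   # required on Windows
    pool = Pool()            # one worker process per core by default
    print(pool.map(square, range(16)))  # runs across all cores, GIL-free
    pool.close()
    pool.join()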


Assuming your task is parallelizable, then yes, threading is certainly a solution. In particular, if you have a lot of data items to process and they can all be handled independently, it should be relatively straightforward to parallelize.

Using multiple processes instead of multiple threads might be another solution - you haven't told us enough about the problem to say, really.


Do this.

Break your task into steps or stages. Each step reads something, does part of the overall calculation, and writes something.

"""Some Step."""
import json
for some_line in sys.stdin:
    object= json.loads( some_line )
    # process the object
    json.dump( result, sys.stdout )

Something like that ought to do fine.

If you have multiple objects that must be communicated, make a simple dictionary of the objects.

results = { 'a': a, 'b': b }
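
Here is a sketch of how that dictionary would cross the pipe, using the same one-JSON-object-per-line convention as the step above (a and b stand for whatever partial results the step produced):

import json
import sys

# Writing side, at the end of one step:
sys.stdout.write(json.dumps(results) + '\n')

# Reading side, at the start of the next step:
for line in sys.stdin:
    results = json.loads(line)
    a, b = results['a'], results['b']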

Connect them in a pipeline, like this.

python step1.py | python step2.py | python step3.py >output_file.dat

If you can break things into 8 or more steps, you will keep all 8 cores busy, because the shell starts every stage of the pipeline as a separate, concurrently running process. And, BTW, this will be blazingly fast for very little real work.
