
Minimum number of GPU threads to be effective

I'm going to parallelize a local search algorithm for an optimization problem using CUDA. The problem is very hard, so the practically solvable instances are quite small. My concern is that the number of threads planned for one kernel launch is insufficient to obtain any speedup on the GPU (even assuming all memory accesses are coalesced, free of bank conflicts, non-branching, etc.). Say a kernel is launched with 100 threads. Is it reasonable to expect any benefit from using the GPU? What if the number of threads is 1000? What additional information is needed to analyze the case?


100 threads is not really enough. Ideally you want a grid that can be divided into at least as many thread blocks as there are multiprocessors (SMs) on the GPU; otherwise you will be leaving processors idle. Each thread block should have no fewer than 32 threads, for the same reason. Ideally, you should have a small multiple of 32 threads per block (say 96-512 threads), and if possible, multiple such blocks per SM.

At a minimum, you should try to have enough threads to cover the arithmetic latency of the SMs, which means that on a Compute Capability 2.0 GPU, you need about 10-16 warps (groups of 32 threads) per SM. They don't all need to come from the same thread block, though. So that means, for example, on a Tesla M2050 GPU with 14 SMs, you would need at least 4480 threads, divided into at least 14 blocks.

That said, fewer threads than this could also provide a speedup -- it depends on many factors. If the computation is bandwidth bound, for example, and you can keep the data in device memory, then you could get a speedup because GPU device memory bandwidth is higher than CPU memory bandwidth. Or, if it is compute bound, and there is a lot of instruction-level parallelism (independent instructions from the same thread), then you won't need as many threads to hide latency. This latter point is described very well in Vladimir Volkov's "Better performance at lower occupancy" talk from GTC 2010.

The main thing is to make sure you use all of the SMs: otherwise you aren't using all of the computational performance or bandwidth the GPU can provide.
