
OpenCL vs OpenMP performance [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.

Closed 6 years ago.

Have there been any studies comparing OpenCL to OpenMP performance? Specifically, I am interested in the overhead cost of launching threads with OpenCL, e.g., if one were to decompose the domain into a very large number of individual work items (each run by a thread doing a small job), versus heavier-weight threads in OpenMP where the domain is decomposed into sub-domains whose number equals the number of cores.

It seems that the OpenCL programming model is more targeted towards massively parallel chips (GPUs, for instance), rather than CPUs that have fewer but more powerful cores.

Can OpenCL be an effective replacement for OpenMP?
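To make the contrast concrete, here is a minimal sketch of the two decompositions I mean, using a simple vector addition (the kernel and the chunking logic are illustrative, not taken from any particular benchmark):

    /* Fine-grained OpenCL style: one work item per element,
       so N elements means N tiny jobs for the runtime to schedule. */
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c)
    {
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];
    }

    /* Coarse-grained OpenMP style (host-side C): one contiguous
       sub-domain per core. */
    #include <omp.h>
    #include <stddef.h>

    void vadd_omp(const float *a, const float *b, float *c, size_t n)
    {
        #pragma omp parallel
        {
            size_t nt    = (size_t)omp_get_num_threads();
            size_t t     = (size_t)omp_get_thread_num();
            size_t chunk = (n + nt - 1) / nt;
            size_t begin = t * chunk;
            size_t end   = begin + chunk < n ? begin + chunk : n;
            for (size_t i = begin; i < end; ++i)
                c[i] = a[i] + b[i];
        }
    }

(With OpenMP one could also just put #pragma omp parallel for on the element loop and let the runtime do the chunking; the explicit version above only spells out the sub-domain decomposition.)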


The benchmarks I've seen indicate that OpenCL and OpenMP running on the same hardware are usually comparable in performance, or that OpenMP performs slightly better. However, I haven't seen any benchmarks I would consider conclusive, because most of them lack a detailed explanation of their methodology. A few useful things to consider:

  • OpenCL will always have some extra overhead from compiling the kernel at runtime. Any benchmark either needs to list this time separately, use pre-compiled native kernels, or run long enough that the kernel compilation is insignificant (see the timing sketch after this list).

  • OpenCL implementations will vary. GPU vendors like NVidia have no incentive to make sure their CPU-based OpenCL implementation is as fast as possible. None of the OpenCL implementations are likely to be as mature as a good OpenMP implementation.

  • The OpenCL spec says basically nothing about how CPU-based implementations use threading under the hood, so any discussion of whether the threading is relatively lightweight or heavyweight will necessarily be implementation-specific.

  • When you're running OpenCL code on a CPU, your work items don't have to be tiny and numerous. You can break down the problem in the same way you would for OpenMP.
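Regarding the first point above, a minimal sketch of separating the runtime compilation cost from the rest of the measurement, assuming a context ctx, a device dev, and the kernel source string src already exist (the names are placeholders):

    #include <CL/cl.h>
    #include <stdio.h>
    #include <time.h>

    cl_program build_timed(cl_context ctx, cl_device_id dev, const char *src)
    {
        cl_int err;
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        if (err != CL_SUCCESS)
            return NULL;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        err = clBuildProgram(prog, 1, &dev, "", NULL, NULL);  /* runtime JIT compile */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        fprintf(stderr, "kernel build time: %.1f ms\n", ms);  /* report or exclude this separately */
        return err == CL_SUCCESS ? prog : NULL;
    }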

Even if OpenCL has a bit more overhead, there may be other reasons to prefer it.

  • Obviously, if your code can make good use of a GPU, you will want to have an OpenCL implementation. OpenCL performance on a CPU may be good enough that it isn't worth it to also maintain an OpenMP fallback code path for users who don't have powerful GPUs.

  • A good CPU-based OpenCL implementation means that you will automatically get the benefit of whatever instruction set extensions the CPU and OpenCL implementation support. With OpenMP, you have to do extra work to make sure that your executable includes both SSEx and AVX code paths (see the dispatch sketch after this list).

  • OpenCL vector primitives can help you express some explicit parallelism without the portability and readability sacrifices you get from using SSE intrinsics (compare the two versions after this list).
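To show what the last two points look like in practice, here are two sketches; the function and kernel names are made up for illustration. First, the kind of runtime dispatch you end up writing yourself on the OpenMP side (this uses the GCC/Clang __builtin_cpu_supports builtin):

    #include <stddef.h>

    /* Two versions of the same routine, compiled in separate translation
       units with -msse and -mavx respectively. */
    void vadd_sse(const float *a, const float *b, float *c, size_t n);
    void vadd_avx(const float *a, const float *b, float *c, size_t n);

    void vadd_dispatch(const float *a, const float *b, float *c, size_t n)
    {
        if (__builtin_cpu_supports("avx"))
            vadd_avx(a, b, c, n);
        else
            vadd_sse(a, b, c, n);
    }

And second, the same 4-wide add written with OpenCL's float4 type versus SSE intrinsics, to illustrate the readability point:

    /* OpenCL C: explicit 4-wide arithmetic, portable to any OpenCL device. */
    __kernel void vadd4(__global const float4 *a,
                        __global const float4 *b,
                        __global float4 *c)
    {
        size_t i = get_global_id(0);
        c[i] = a[i] + b[i];   /* one 4-wide add, no intrinsics */
    }

    /* SSE intrinsics: x86-only, and noisier for the same operation. */
    #include <xmmintrin.h>
    #include <stddef.h>

    void vadd4_sse(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
        }
        /* a full version would also handle the n % 4 tail elements */
    }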


I have a program which has the option to use either OpenCL or OpenMP on some key bottlenecks, basically adding vectors and performing reductions.

In my case, OpenMP takes 13 seconds where OpenCL takes 10 seconds, both on the CPU (an Intel i5).

The fastest configuration for me so far is to add the vectors with OpenCL on the GPU and do the reductions with OpenMP, which gets me down to 7 seconds. When I do the reduction in the OpenCL kernel on the GPU, it takes 8 seconds in total.

So from my experience, I would say it depends on the use case and on how much you can optimize your OpenCL kernel.
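For reference, a minimal sketch of the kind of OpenMP reduction I mean; the sum-of-elements form is just an assumption for illustration:

    #include <omp.h>
    #include <stddef.h>

    double reduce_sum(const float *v, size_t n)
    {
        double sum = 0.0;
        long i;
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < (long)n; ++i)
            sum += v[i];
        return sum;
    }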

