Convolutions on the GPU: which language (HLSL, CUDA, etc.) will have the longest support lifetime?

I'm currently writing an automated inspection system that uses a scale-space representation for ridge and edge detection. It currently has a software (CPU) implementation, but I think the GPU is the way to go. My algorithm is a series of convolutions with various kernels.
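(To give a sense of the workload, here is a minimal sketch of what one such pass might look like as a CUDA kernel, assuming a single-channel float image and a small odd-width Gaussian filter held in constant memory; the names gaussianKernel and KERNEL_RADIUS are illustrative placeholders, not from any library.)

```cuda
#define KERNEL_RADIUS 4  // assumed filter half-width; chosen per scale level

// Precomputed Gaussian weights, uploaded from the host with cudaMemcpyToSymbol.
__constant__ float gaussianKernel[2 * KERNEL_RADIUS + 1];

// Horizontal pass of a separable convolution: one thread per output pixel.
__global__ void convolveRows(const float *src, float *dst,
                             int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    float sum = 0.0f;
    for (int k = -KERNEL_RADIUS; k <= KERNEL_RADIUS; ++k) {
        int xs = min(max(x + k, 0), width - 1);  // clamp at image borders
        sum += src[y * width + xs] * gaussianKernel[k + KERNEL_RADIUS];
    }
    dst[y * width + x] = sum;
}
```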

However, my company has previously done everything on the CPU (and that's a lot of automated inspection), so I'm going to have a hard time convincing my boss that this is necessary, and support/longevity is a big part of that. We'll be supporting these systems for about 10 years, in all likelihood.

So which language has the best support guarantees?

PS: We run Windows on everything.


None of them have any guarantees!

It depends on what you mean by 'support':

  • Does new hardware run the code? Since a driver compiles the CUDA code into hardware instructions, it should be easy to support at least a subset of it on new GPUs, as long as NVIDIA stays in business. Of course, a lot of graphics card companies are now gone, and the games business isn't as fanatical about backward compatibility as Microsoft, so OpenCL might be a safer bet.

  • Does the manufacturer offer tech support for it? If it's a standard, then there will be support for it: SGI is long gone, but OpenGL is still well supported.

  • Can you develop the code on new hardware? Again, since there is essentially a byte-code layer, it's not as bad as trying to find a MIPS development platform on which to develop MIPS code.

  • Can you hire programmers who know it? It's a specialized area, so you are probably going to have to bring people up to speed or hire some expensive talent. Whether they need training in CUDA, OpenCL, or whatever comes next isn't a big deal compared to the general skills of GPGPU-style programming.

On the whole, I suspect that CUDA/OpenCL will be a better long-term bet than hand-tweaking SSE2 code for the current generation of Intel CPUs or using some custom DSP/FPGA solution.

Ten years really isn't as long as you think in the software world: there are a lot of MFC apps still in use, and of course OpenGL is still pretty well supported. I wouldn't have thought CUDA is going to go away, and if it did, I would expect tools to translate it into OpenCL or whatever replaces it.

In fact, industry opinion seems to be that DirectX/OpenGL will go away and everything will be done directly in a GPGPU language.


If all you're looking for is support, then OpenCL is likely to be a better bet for the long haul.

That said, CUDA is unlikely to go away in the next 10 years or so. NVIDIA has made a huge investment in GPGPU computation and will likely maintain backward compatibility across its future chipsets.

In terms of raw performance (probably the reason you are switching to GPUs in the first place), CUDA still has a slight edge. Hiring developers for CUDA is also likely to be a bit easier, as is the speed of development, since it is the more mature technology.


Martin Beckett and peakxu have given good answers; let me just add something that's a bit too big to fit in a comment:

The thing about CUDA vs. OpenCL is that the kernels, where almost all the hard work is done, are very similar, even though the keywords are different. By far the hardest part of GPGPU programming is figuring out how to effectively break your program into fine-grained, SIMD-ish pieces that perform well. Once you've got that figured out, the resulting kernels are pretty easily shuffled back and forth between CUDA and OpenCL, and, I imagine, whatever comes next.
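To make that concrete, here is a trivial element-wise kernel in CUDA, with comments giving the OpenCL spelling of each construct. This is a sketch for comparison only, not drop-in code for either API:

```cuda
// CUDA version; the comments show the corresponding OpenCL keywords.
// The kernel body is otherwise character-for-character the same.
__global__ void scaleBias(const float *src, float *dst,
                          int n, float scale, float bias)
// OpenCL: __kernel void scaleBias(__global const float *src,
//                                 __global float *dst,
//                                 int n, float scale, float bias)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // OpenCL: int i = get_global_id(0);
    if (i < n)
        dst[i] = src[i] * scale + bias;  // identical in both dialects
}
```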

The boilerplate code for allocating memory, shuffling data back and forth between host and GPU, and so on is much less similar, but compared to the kernels, rewriting that stuff is straightforward. (Tedious as hell, but straightforward.)
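As a rough sketch of that boilerplate on the CUDA side (assuming the hypothetical convolveRows kernel sketched in the question; error handling and the cudaMemcpyToSymbol upload of the filter weights are trimmed for brevity):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Allocate device buffers, copy the image over, run one pass, copy it back.
// The OpenCL equivalent uses clCreateBuffer/clEnqueue* calls instead, but
// the overall shape of the code is the same.
void runConvolution(const float *hostSrc, float *hostDst,
                    int width, int height)
{
    size_t bytes = (size_t)width * height * sizeof(float);
    float *devSrc = NULL, *devDst = NULL;

    cudaMalloc(&devSrc, bytes);
    cudaMalloc(&devDst, bytes);
    cudaMemcpy(devSrc, hostSrc, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    convolveRows<<<grid, block>>>(devSrc, devDst, width, height);

    cudaMemcpy(hostDst, devDst, bytes, cudaMemcpyDeviceToHost);
    if (cudaGetLastError() != cudaSuccess)
        fprintf(stderr, "kernel launch or copy failed\n");

    cudaFree(devSrc);
    cudaFree(devDst);
}
```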

So I wouldn't spend a lot of time reading tea leaves trying to guess which of CUDA and OpenCL will last longer. If you do decide to go this way, just find hardware and (maybe more importantly) a development platform that suits your needs, then choose the GPGPU language best suited to that and run with it.
