
CUDA convolution - non separable kernels

I need to implement an efficient image convolution with non-separable kernels (so the CUDA SDK is only useful for its FFT-based convolution example, and it is clearly stated that that approach only performs well for large kernel sizes).

Aside from implementing it from scratch however it comes to mind, I need to operate on matrices and kernels whose sizes are not known a priori (they can be anywhere from 10x10 to 20,000x20,000; I simply can't predict it).

What are your suggestions regarding the FFT example? (If this is your best pick, please point me to a good place to start figuring out how it works.)

And for the second option (implementing the convolution manually myself), what are your suggestions for maximizing memory coalescing?


My suggestions for the GPU:

  1. First, make it right. Get comfortable with the algorithm you want to implement on the GPU by writing it on the CPU first. On the GPU you will have to deal with many more low-level details, so it is important to know exactly what the output must be (see the CPU reference sketch after this list).

  2. Then make it fast. The FFT approach is the fastest one whenever you can use it (which is most cases).
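
As a minimal sketch of point 1, here is a naive CPU reference for a direct (non-separable) 2D convolution, assuming single-channel row-major float images, zero padding at the borders, and the kernel centre at (KH/2, KW/2); the function name and layout are only illustrative:

```cpp
#include <vector>
#include <cstddef>

// Naive direct 2D convolution on the CPU, used only as a correctness reference.
// img is H x W, kernel is KH x KW (both row-major); the output is H x W with
// zero padding outside the image. Sizes are arbitrary, as in the question.
std::vector<float> convolve2d_ref(const std::vector<float>& img, int H, int W,
                                  const std::vector<float>& kernel, int KH, int KW)
{
    std::vector<float> out(static_cast<size_t>(H) * W, 0.0f);
    const int cy = KH / 2, cx = KW / 2;                 // kernel centre
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float acc = 0.0f;
            for (int ky = 0; ky < KH; ++ky) {
                for (int kx = 0; kx < KW; ++kx) {
                    // True convolution: the kernel is flipped relative to the image.
                    int iy = y + cy - ky;
                    int ix = x + cx - kx;
                    if (iy >= 0 && iy < H && ix >= 0 && ix < W)
                        acc += img[static_cast<size_t>(iy) * W + ix]
                             * kernel[static_cast<size_t>(ky) * KW + kx];
                }
            }
            out[static_cast<size_t>(y) * W + x] = acc;
        }
    }
    return out;
}
```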

To reach the first objective, I advise you to try implementing it with OpenCV. It has a very nice Python wrapper and provides a filtering framework.
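
If you go the OpenCV route, that reference check is essentially one call. The answer mentions the Python wrapper, but the C++ API is equivalent; note that cv::filter2D computes a correlation, so the kernel has to be flipped for a true convolution. A sketch, with zero padding to match the reference above:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Cross-check against OpenCV. filter2D performs correlation, so the kernel is
// flipped on both axes first; for odd-sized kernels the default centre anchor
// is then correct.
cv::Mat convolve2d_opencv(const cv::Mat& img, const cv::Mat& kernel)
{
    cv::Mat flipped, out;
    cv::flip(kernel, flipped, -1);                       // flip both axes
    cv::filter2D(img, out, CV_32F, flipped,
                 cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);  // zero padding
    return out;
}
```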

Once you are sure of your result and of how you achieved it with OpenCV, test whether you can get the same result using an FFT. Porting the whole thing to the GPU will then be much easier.
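
For the FFT route on the GPU, a rough sketch of the usual pad → forward FFT → pointwise multiply → inverse FFT pipeline with cuFFT might look like the following. This is not the SDK sample itself; the function names are illustrative, error checking is omitted, and both inputs are assumed to be already padded to the same size and resident on the device:

```cpp
#include <cufft.h>
#include <cuda_runtime.h>

// Pointwise complex multiply with normalization; one thread per spectrum element.
__global__ void pointwiseMul(cufftComplex* a, const cufftComplex* b, int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex x = a[i], y = b[i];
        a[i].x = (x.x * y.x - x.y * y.y) * scale;
        a[i].y = (x.x * y.y + x.y * y.x) * scale;
    }
}

// Circular FFT convolution of two padded, same-size real images already on the
// device (both padH x padW, row-major). For a linear convolution, pad both to
// at least (H + KH - 1) x (W + KW - 1) beforehand. The result overwrites d_img.
void fftConvolve(float* d_img, float* d_ker, int padH, int padW)
{
    int specW = padW / 2 + 1;                    // R2C stores only half the spectrum
    int nSpec = padH * specW;

    cufftComplex *d_imgSpec, *d_kerSpec;
    cudaMalloc((void**)&d_imgSpec, sizeof(cufftComplex) * nSpec);
    cudaMalloc((void**)&d_kerSpec, sizeof(cufftComplex) * nSpec);

    cufftHandle fwd, inv;
    cufftPlan2d(&fwd, padH, padW, CUFFT_R2C);
    cufftPlan2d(&inv, padH, padW, CUFFT_C2R);

    cufftExecR2C(fwd, d_img, d_imgSpec);
    cufftExecR2C(fwd, d_ker, d_kerSpec);

    int threads = 256;
    int blocks = (nSpec + threads - 1) / threads;
    // cuFFT transforms are unnormalized, so fold 1/(padH*padW) into the multiply.
    pointwiseMul<<<blocks, threads>>>(d_imgSpec, d_kerSpec, nSpec,
                                      1.0f / ((float)padH * padW));

    cufftExecC2R(inv, d_imgSpec, d_img);

    cufftDestroy(fwd); cufftDestroy(inv);
    cudaFree(d_imgSpec); cudaFree(d_kerSpec);
}
```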


You might want to look at the implementation of convolution in Theano (they use non-FFT-based kernels)... or just use Theano.
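
For the direct (non-FFT) route and the coalescing question, the usual pattern is to map threadIdx.x to the fastest-varying image dimension so each warp reads contiguous addresses, and to stage a tile of the image plus its apron in shared memory so every pixel is fetched from global memory only once per block. A sketch, assuming odd kernel dimensions and a kernel small enough for the tile to fit in shared memory (names are illustrative, error checking omitted):

```cpp
#include <cuda_runtime.h>

#define TILE 16   // threads per block per dimension; one output pixel per thread

// Direct convolution with a shared-memory tile. Consecutive threadIdx.x map to
// consecutive image columns, so global loads within a warp hit contiguous
// addresses (coalesced). Dynamic shared memory must be sized to
// (TILE + KW - 1) * (TILE + KH - 1) floats at launch time. KH and KW are
// assumed odd here.
__global__ void convolve2dTiled(const float* __restrict__ img, float* out,
                                int H, int W,
                                const float* __restrict__ ker, int KH, int KW)
{
    extern __shared__ float tile[];
    const int ry = KH / 2, rx = KW / 2;            // kernel "radius"
    const int tileH = TILE + KH - 1;
    const int tileW = TILE + KW - 1;

    const int outX = blockIdx.x * TILE + threadIdx.x;
    const int outY = blockIdx.y * TILE + threadIdx.y;
    const int originX = blockIdx.x * TILE - rx;    // top-left of the tile in image coords
    const int originY = blockIdx.y * TILE - ry;

    // Cooperatively load the tile plus its apron; each pass reads a contiguous row segment.
    for (int ty = threadIdx.y; ty < tileH; ty += TILE) {
        for (int tx = threadIdx.x; tx < tileW; tx += TILE) {
            int iy = originY + ty;
            int ix = originX + tx;
            tile[ty * tileW + tx] =
                (iy >= 0 && iy < H && ix >= 0 && ix < W) ? img[iy * W + ix] : 0.0f;
        }
    }
    __syncthreads();

    if (outX < W && outY < H) {
        float acc = 0.0f;
        for (int ky = 0; ky < KH; ++ky)
            for (int kx = 0; kx < KW; ++kx)
                // Kernel flipped for a true convolution, matching the CPU reference above.
                acc += tile[(threadIdx.y + KH - 1 - ky) * tileW + (threadIdx.x + KW - 1 - kx)]
                     * ker[ky * KW + kx];
        out[outY * W + outX] = acc;
    }
}
```

A launch would use dim3 block(TILE, TILE), a grid covering the image, and (TILE + KW - 1) * (TILE + KH - 1) * sizeof(float) bytes of dynamic shared memory. Once the kernel grows past a few dozen taps per side the tile no longer fits in shared memory, and the FFT approach above becomes the better choice, which matches the advice in the answer.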
