
GPGPU matrix addition problem

I have really huge matrices, and I want the output matrix to be the same size as the input matrix, just with each cell holding the sum of the values from its adjacent cells.

Can you guide me on how to approach this on a GPGPU platform using CUDA?


You have to pass all the adjacent cells' values to your kernel (as parameters) so you'll be able to do the sum. Something like this in the parameter list, with the code right after:

(int actualCellvalue, int adj1, int adj2, int adj3, ...)

{ actualCellvalue = actualCellvalue + adj1 + adj2 + adj3 + ...; }

This might be wrong, but that's what I figured out from your really short description.

Regards, Peter
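
A minimal CUDA sketch of that idea, with the input matrix passed as a single device pointer and each thread reading its neighbours by index instead of receiving them as separate parameters (the kernel name neighbourSumKernel and the int element type are assumptions, not from the original post):

#include <cuda_runtime.h>

// One thread per output cell. Following the formula above, the result is the
// cell's own value plus the values of its (up to 8) in-bounds neighbours.
__global__ void neighbourSumKernel(const int *in, int *out, int rows, int cols)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row >= rows || col >= cols)
        return;

    int sum = 0;
    for (int dr = -1; dr <= 1; ++dr) {
        for (int dc = -1; dc <= 1; ++dc) {
            int r = row + dr;
            int c = col + dc;
            if (r >= 0 && r < rows && c >= 0 && c < cols)
                sum += in[r * cols + c];   // includes the centre cell itself
        }
    }
    out[row * cols + col] = sum;
}

On the host side you would copy the matrix to device memory with cudaMalloc/cudaMemcpy and launch with a 2D grid, e.g. dim3 block(16, 16); dim3 grid((cols + 15) / 16, (rows + 15) / 16); neighbourSumKernel<<<grid, block>>>(d_in, d_out, rows, cols);. For very large matrices, loading a tile into shared memory per block avoids re-reading each input cell from global memory nine times.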

