I am trying to compile a CUDA project that someone sent me. The compile stage passes, but the link stage is failing. Below is an example of the error:
Is it possible to do an atomic write at the block level? As an example, consider the following kernel signature: __global__ void kernel(int *atomic)
I have a kernel that is launched several times, until a solution is found. The solution will be found by at least one block.
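A minimal sketch of one way this can work, assuming the solution fits in an int, -1 means "not found yet", and the names solution/candidate are hypothetical placeholders: the first block that finds a result publishes it with a single atomicCAS, and the host relaunches the kernel until the value changes.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical search kernel: any block may find a solution; the first
    // finder publishes it with one compare-and-swap, so later writers lose.
    __global__ void kernel(int *solution)
    {
        int candidate = blockIdx.x * blockDim.x + threadIdx.x;  // placeholder work
        bool found = (candidate == 12345);                      // placeholder test

        if (found)
            atomicCAS(solution, -1, candidate);  // succeeds only for the first writer
    }

    int main()
    {
        int *d_solution, h_solution = -1;
        cudaMalloc(&d_solution, sizeof(int));
        cudaMemcpy(d_solution, &h_solution, sizeof(int), cudaMemcpyHostToDevice);

        // Relaunch until some block has recorded a solution.
        while (h_solution == -1) {
            kernel<<<256, 128>>>(d_solution);
            cudaMemcpy(&h_solution, d_solution, sizeof(int), cudaMemcpyDeviceToHost);
        }

        printf("solution: %d\n", h_solution);
        cudaFree(d_solution);
        return 0;
    }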
I've been trying to use OpenCL to do some calculations, but the results are incorrect. I input three float3s that look like this:
The codeproject.com showcase "Part 2: OpenCL™ – Memory Spaces" states that Global memory should be considered as streaming memory [...] and that the best performance will be achieved when streaming co
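The quote is about OpenCL, but the streaming/coalescing idea is the same on CUDA hardware (CUDA is used here only for consistency with the other snippets): when consecutive work-items/threads read consecutive addresses, the accesses coalesce into few memory transactions, while a large stride does not. A rough, hypothetical illustration:

    // Hypothetical illustration of coalesced vs. strided global-memory reads.
    __global__ void coalesced_copy(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];  // neighbouring threads touch neighbouring addresses
    }

    __global__ void strided_copy(const float *in, float *out, int n, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[(i * stride) % n];  // scattered accesses, poor coalescing
    }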
I am trying to implement GEMM using AMD-APP-SDK 2.4 on an ATI HD 6990 card (Cayman architecture).
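The question targets OpenCL on Cayman; purely as a reference for what the computation itself does (sketched in CUDA here to match the other snippets, and far simpler than the blocked, local-memory-tiled kernels a tuned implementation would use), a naive one-thread-per-output-element GEMM looks like this:

    // Hypothetical naive GEMM: C = A * B for row-major MxK, KxN, MxN matrices.
    __global__ void gemm_naive(const float *A, const float *B, float *C,
                               int M, int N, int K)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < M && col < N) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[row * K + k] * B[k * N + col];
            C[row * N + col] = acc;
        }
    }

A 16x16 thread block with a grid covering the N-by-M output is a typical launch configuration for this kind of kernel.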
I learnt today that NVIDIA GPUs have, in the vertex unit, special hardware functions for calculating linear interpolation in a 3D regular grid. I wonder if there are more of this
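In CUDA that hardware interpolation is exposed through the texture units rather than the vertex stage. A minimal sketch, assuming a small float volume and a texture object configured with cudaFilterModeLinear so the trilinear blend of the 8 surrounding voxels is done in hardware (all sizes and names are hypothetical):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical kernel: sample a 3D texture; the texture unit interpolates.
    __global__ void sample(cudaTextureObject_t tex, float *out,
                           float x, float y, float z)
    {
        *out = tex3D<float>(tex, x, y, z);
    }

    int main()
    {
        const int N = 4;
        float host[N * N * N];
        for (int i = 0; i < N * N * N; ++i) host[i] = (float)i;

        // Copy the volume into a 3D CUDA array.
        cudaExtent extent = make_cudaExtent(N, N, N);
        cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
        cudaArray_t arr;
        cudaMalloc3DArray(&arr, &desc, extent);

        cudaMemcpy3DParms copy = {};
        copy.srcPtr = make_cudaPitchedPtr(host, N * sizeof(float), N, N);
        copy.dstArray = arr;
        copy.extent = extent;
        copy.kind = cudaMemcpyHostToDevice;
        cudaMemcpy3D(&copy);

        // Texture object with linear (trilinear in 3D) filtering.
        cudaResourceDesc res = {};
        res.resType = cudaResourceTypeArray;
        res.res.array.array = arr;
        cudaTextureDesc texDesc = {};
        texDesc.filterMode = cudaFilterModeLinear;
        texDesc.readMode = cudaReadModeElementType;
        texDesc.addressMode[0] = cudaAddressModeClamp;
        texDesc.addressMode[1] = cudaAddressModeClamp;
        texDesc.addressMode[2] = cudaAddressModeClamp;
        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &res, &texDesc, nullptr);

        float *d_out, h_out;
        cudaMalloc(&d_out, sizeof(float));
        // Coordinates are in texel units; 1.0 lies between voxel centres 0.5 and 1.5.
        sample<<<1, 1>>>(tex, d_out, 1.0f, 1.0f, 1.0f);
        cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("interpolated value: %f\n", h_out);

        cudaDestroyTextureObject(tex);
        cudaFreeArray(arr);
        cudaFree(d_out);
        return 0;
    }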
I am working on an image processing project which uses CUDA for a GPGPU implementation. I want to know whether there is CUDA support on NVIDIA's Tegra 2 chip.
I work on an audio processing project that needs to do a lot of basic computations (+, -, *), for example an FFT (Fast Fourier Transform) calculation.
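The FFT part is already covered by a library. A minimal cuFFT sketch, assuming single-precision complex data and a hypothetical frame size of 1024 samples (link with -lcufft):

    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>
    #include <cufft.h>

    int main()
    {
        const int N = 1024;  // hypothetical audio frame size

        // Fill one frame with a test tone on the host (8 cycles per frame).
        cufftComplex h_signal[N];
        for (int i = 0; i < N; ++i) {
            h_signal[i].x = sinf(2.0f * 3.14159265f * 8.0f * i / N);
            h_signal[i].y = 0.0f;
        }

        cufftComplex *d_signal;
        cudaMalloc(&d_signal, N * sizeof(cufftComplex));
        cudaMemcpy(d_signal, h_signal, N * sizeof(cufftComplex),
                   cudaMemcpyHostToDevice);

        // Plan and execute an in-place forward complex-to-complex FFT.
        cufftHandle plan;
        cufftPlan1d(&plan, N, CUFFT_C2C, 1);
        cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);

        cudaMemcpy(h_signal, d_signal, N * sizeof(cufftComplex),
                   cudaMemcpyDeviceToHost);
        printf("bin 8 magnitude: %f\n", hypotf(h_signal[8].x, h_signal[8].y));

        cufftDestroy(plan);
        cudaFree(d_signal);
        return 0;
    }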