
How to free __device__ memory in CUDA

__device__ int data; 

__constant__ int var1;

How to free "data" and "var1" in CUDA?

Thank you


With devices of compute capability sm_20 and above you can simply use the new and delete keywords in device code. Even better would be to use the CUDA Thrust API (an implementation of the Standard Template Library on top of the GPU), really cool stuff.

http://code.google.com/p/thrust/
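
To make that concrete, here is a minimal sketch (not from the original post, all names are placeholders): the kernel shows device-side new/delete, which needs the code to be built for sm_20 or newer, and the scoped block shows that a thrust::device_vector releases its device storage automatically when it goes out of scope.

#include <cuda_runtime.h>
#include <thrust/device_vector.h>

// Device-side new/delete requires compute capability 2.0+ (build with -arch=sm_20 or newer).
__global__ void scratch_kernel()
{
    int *buf = new int[256];   // allocated on the per-device heap, inside the kernel
    if (buf != NULL) {
        buf[0] = threadIdx.x;
        delete[] buf;          // freed explicitly, still inside the kernel
    }
}

int main()
{
    scratch_kernel<<<1, 256>>>();
    cudaDeviceSynchronize();

    {
        thrust::device_vector<int> v(1024, 0);  // allocates 1024 ints on the GPU
    }                                           // device memory freed here, automatically
    return 0;
}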


You can't free it. It gets automatically freed when the program ends.

Similarly, you don't free global variables in host code either.
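
As a rough illustration of that analogy (the variable names here are hypothetical):

int host_total;                 // host global: never freed by hand, lives until the program exits
__device__ int device_total;    // __device__ global: same lifetime rule, but the storage is in GPU memory
__constant__ int device_limit;  // __constant__: likewise released only when the program ends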


As @CygnusX1 said, you can't free it. Declared this way, the memory stays allocated for the life of your program, even if you never call the kernel.

You can however use cudaMalloc and cudaFree (or new/delete with CUDA 4.0) to allocate and free memory temporarily. Of course you must manipulate everything with pointers, but this is a huge saving if you need to store several large objects, free them, and then store several more large objects...
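
A hedged sketch of that pattern, assuming a hypothetical buffer d_data of 1024 ints; unlike a __device__ or __constant__ variable, this allocation is released the moment you decide to release it:

#include <cuda_runtime.h>
#include <stdio.h>

int main()
{
    int *d_data = NULL;
    size_t bytes = 1024 * sizeof(int);

    // Allocate device memory only for as long as it is needed...
    cudaError_t err = cudaMalloc((void **)&d_data, bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // ... launch kernels that use d_data here ...

    // ...then free it explicitly so the space can be reused for other large objects.
    cudaFree(d_data);
    return 0;
}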

