When I increase the loop unrolling factor in my kernel from 8 to 9, it fails with an out-of-resources error.
I'm having trouble passing the right parameters to the prepare function (and to prepared_call) to allocate shared memory in PyCUDA. My reading of the error message is that one of the variables...
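For questions like the one above, the usual pattern is to size the dynamic shared memory in bytes at launch time. The sketch below assumes a recent PyCUDA where prepared_call accepts a shared_size keyword (the older prepare(..., shared=nbytes) form is deprecated); the kernel and all sizes are made up for illustration.

```python
# Sketch: passing a dynamic shared-memory size through PyCUDA's
# prepare / prepared_call interface. Only smem_bytes() runs without a
# GPU; _launch_demo() is defined but not called, since it needs a CUDA
# device and PyCUDA installed.

import numpy as np

def smem_bytes(n_elems, dtype):
    """Bytes of dynamic shared memory needed for n_elems of dtype."""
    return int(n_elems) * np.dtype(dtype).itemsize

def _launch_demo():
    # Requires a CUDA-capable machine; call this manually there.
    import pycuda.autoinit  # noqa: F401  (creates a context)
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void scale(float *out, float factor, int n)
    {
        extern __shared__ float buf[];   /* sized at launch time */
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        if (i < n) { buf[threadIdx.x] = factor; out[i] *= buf[threadIdx.x]; }
    }
    """)
    scale = mod.get_function("scale")
    scale.prepare("PfI")                 # pointer, float, unsigned int
    block = (256, 1, 1)
    out = drv.mem_alloc(1024 * np.dtype(np.float32).itemsize)
    # shared_size is the dynamic shared memory per block, in bytes.
    scale.prepared_call((4, 1), block, out, np.float32(2.0),
                        np.uint32(1024),
                        shared_size=smem_bytes(block[0], np.float32))
```

The common mistake is passing an element count rather than a byte count, or putting the size into the prepare() type string; it belongs to the launch, not the argument list.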
I'm getting an out-of-resources error when trying to launch a CUDA kernel (through PyCUDA), and I'm wondering if it's possible to get the system to tell me which resource it is that I'm short of...
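CUDA doesn't say which resource failed, but you can narrow it down yourself by comparing a kernel's per-block demands against the device limits. The diagnose() helper below is plain arithmetic; the PyCUDA attribute names in _query_demo() (func.num_regs, func.shared_size_bytes, the device_attribute constants) are real, but the numbers in any example call are hypothetical.

```python
# Sketch: narrow down which resource an out-of-resources launch exhausts.

def diagnose(threads_per_block, regs_per_thread, smem_per_block,
             max_regs_per_block, max_smem_per_block, max_threads_per_block):
    """Return the resource names whose per-block demand exceeds the limit."""
    exceeded = []
    if threads_per_block > max_threads_per_block:
        exceeded.append("threads")
    if threads_per_block * regs_per_thread > max_regs_per_block:
        exceeded.append("registers")
    if smem_per_block > max_smem_per_block:
        exceeded.append("shared memory")
    return exceeded

def _query_demo(func, threads_per_block):
    # Requires a CUDA device; call this manually on a GPU machine.
    import pycuda.autoinit
    import pycuda.driver as drv
    dev = pycuda.autoinit.device
    return diagnose(
        threads_per_block,
        func.num_regs,                 # registers per thread
        func.shared_size_bytes,        # static shared memory per block
        dev.get_attribute(drv.device_attribute.MAX_REGISTERS_PER_BLOCK),
        dev.get_attribute(drv.device_attribute.MAX_SHARED_MEMORY_PER_BLOCK),
        dev.get_attribute(drv.device_attribute.MAX_THREADS_PER_BLOCK))
```

This also explains the unrolling question above: each extra unrolled iteration tends to raise registers per thread, and registers-per-block is usually the first limit hit.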
When I create a new session and tell the Visual Profiler to launch my python/pycuda scripts I get the following error message: Execution run #1 of program '' failed, exit code: 255
In a Linux system with multiple GPUs, how can you determine which GPU is running X11 and which is completely free to run CUDA kernels? In a system that has a low-powered GPU to run X11 and a higher-powered GPU for CUDA...
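One way to answer this is to ask nvidia-smi which GPU is driving a display. The query field names (index, display_active) are real nvidia-smi fields; the sample CSV in the parser's docstring is fabricated for illustration, and query_gpus() is only defined, not called, since it needs the NVIDIA driver installed.

```python
# Sketch: find which GPU drives X11 and which is free for CUDA,
# by parsing `nvidia-smi --query-gpu=index,display_active`.

import subprocess

def parse_display_active(csv_text):
    """Map GPU index -> True if that GPU is driving a display.

    Expects lines like "0, Enabled" / "1, Disabled".
    """
    result = {}
    for line in csv_text.strip().splitlines():
        idx, active = (part.strip() for part in line.split(","))
        result[int(idx)] = (active == "Enabled")
    return result

def query_gpus():
    # Requires nvidia-smi on PATH; run this on the multi-GPU box.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,display_active",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True).stdout
    return parse_display_active(out)
```

GPUs mapped to False are not driving a display; you can then pin your CUDA work to one of them via CUDA_VISIBLE_DEVICES or by constructing pycuda.driver.Device(i) explicitly.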
I'll try to make this clear: I've got two classes, GPU(object) for general access to GPU functionality, and multifunc(threading.Thread) for a particular function I'm trying to multi-device-ify.
Anyone following CUDA will probably have seen a few of my queries regarding a project I'm involved in, but for those who haven't, I'll summarize. (Apologies in advance for the long question.)
PyCUDA, for all its faults, usually has very good examples provided with it / downloadable from the wiki. But I couldn't find anything in the examples or in the documentation (or a cursory Google search)...
As part of a larger project, I've come across a strangely consistent bug that I can't get my head around, but it is an archetypal 'black box' bug: when running with cuda-gdb python -m pycuda.debug...