PyCUDA+Threading = Invalid Handles on kernel invocations
I'll try to make this clear: I've got two classes; GPU(Object), for general access to GPU functionality, and multifunc(threading.Thread), for a particular function I'm trying to multi-device-ify. GPU contains most of the 'first time' processing needed for all subsequent use cases, so multifunc gets called from GPU with its self instance passed as an __init__ argument (along with the usual queues and such).

Unfortunately, multifunc craps out with:
File "/home/bolster/workspace/project/gpu.py", line 438, in run
prepare(d_A,d_B,d_XTG,offset,grid=N_grid,block=N_block)
File "/usr/local/lib/python2.7/dist-packages/pycuda-0.94.2-py2.7-linux-x86_64.egg/pycuda/driver.py", line 158, in function_call
func.set_block_shape(*block)
LogicError: cuFuncSetBlockShape faile开发者_StackOverflow社区d: invalid handle
First port of call was of course the block dimensions, but they are well within range (same behaviour even if I force block=(1,1,1), and likewise for grid). Basically, within multifunc, all of the usual CUDA memalloc etc. functions work fine, implying it's not a context problem. So the problem must be with the SourceModule-ing of the kernel function itself.
I have a kernel template containing all my CUDA code that's file-scoped, and templating is done with jinja2 in the GPU initialisation. Regardless of whether that templated object is converted to a SourceModule object in GPU and passed to multifunc, or whether it's converted in multifunc, the same thing happens.
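For reference, the templating step looks roughly like this (identifiers here are placeholders, not my exact code):

from jinja2 import Template

# Rendered once in GPU.__init__; file name and parameters are
# placeholders for illustration
kernel_template = Template(open("kernels.cu").read())
kernel_source = kernel_template.render(precision="float", chunk=256)
# Whether SourceModule(kernel_source) happens here or later in
# multifunc, the invalid-handle error is identical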
Google has been largely useless for this particular issue, but following the stack, I'm assuming the Invalid Handle being referred to is the kernel function handle rather than anything strange going on with the block dimensions.
I'm aware this is a very corner-case situation, but I'm sure someone can see a problem that I've missed.
The reason is context affinity. Every CUDA function instance is tied to a context, and they are not portable (the same applies to memory allocations and texture references). So each context must load the function instance separately, and then use the function handle returned by that load operation.
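To see why the launch fails with an invalid handle, here is a minimal sketch of the failure mode (illustrative only, not your code): a function handle fetched while one context is current is no longer valid once a different context is current.

import pycuda.driver as driver
from pycuda.compiler import SourceModule

driver.init()
dev = driver.Device(0)

ctx_a = dev.make_context()
mod = SourceModule("__global__ void k() {}")
func = mod.get_function("k")          # handle is tied to ctx_a
ctx_a.pop()

ctx_b = dev.make_context()
func(block=(1, 1, 1), grid=(1, 1))    # LogicError: invalid handle
ctx_b.pop()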
If you are not using metaprogramming at all, you might find it simpler to compile your CUDA code to a cubin file, and then load the functions you need from the cubin into each context with driver.module_from_file. Cutting and pasting directly from some production code of mine:
# Context establishment
try:
    if (autoinit):
        import pycuda.autoinit
        self.context = None
        self.device = pycuda.autoinit.device
        self.computecc = self.device.compute_capability()
    else:
        driver.init()
        self.context = tools.make_default_context()
        self.device = self.context.get_device()
        self.computecc = self.device.compute_capability()

    # GPU code initialization
    # load pre-compiled CUDA code from cubin file
    # Select the cubin based on the supplied dtype
    # cubin names contain C++ mangling because of
    # templating. Ugly but no easy way around it
    if self.computecc == (1, 3):
        self.fimcubin = "fim_sm13.cubin"
    elif self.computecc[0] == 2:
        self.fimcubin = "fim_sm20.cubin"
    else:
        raise NotImplementedError("GPU architecture not supported")

    fimmod = driver.module_from_file(self.fimcubin)

    IterateName32 = "_Z10fimIterateIfLj8EEvPKT_PKiPS0_PiS0_S0_S0_jjji"
    IterateName64 = "_Z10fimIterateIdLj8EEvPKT_PKiPS0_PiS0_S0_S0_jjji"
    if (self.dtype == np.float32):
        IterateName = IterateName32
    elif (self.dtype == np.float64):
        IterateName = IterateName64
    else:
        raise TypeError

    self.fimIterate = fimmod.get_function(IterateName)

except ImportError:
    warn("Could not initialise CUDA context")
Typical; as soon as I write the question I work it out.
The issue was having the SourceModule operating outside of an active context. To fix it, I moved the SourceModule invocation into the run function in the thread, below the CUDA context setup.
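In case it helps anyone else, the shape of the fix is roughly this (a sketch with placeholder names rather than my actual code):

import threading
import pycuda.driver as driver
from pycuda.compiler import SourceModule

class multifunc(threading.Thread):
    def __init__(self, gpu, device_num):
        threading.Thread.__init__(self)
        self.gpu = gpu                  # holds the jinja2-rendered source
        self.device_num = device_num

    def run(self):
        driver.init()
        ctx = driver.Device(self.device_num).make_context()
        try:
            # SourceModule now runs with this thread's context current,
            # so the function handle it returns is valid here
            mod = SourceModule(self.gpu.kernel_source)
            prepare = mod.get_function("prepare")
            # ... memallocs and prepare(...) launches as before ...
        finally:
            ctx.pop()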
Leaving this up for a while because I'm sure someone else has a better explanation!