Calling a handwritten CUDA kernel with Thrust
Since I needed to sort large arrays of numbers with CUDA, I ended up using Thrust. So far, so good... but what about when I want to call a "handwritten" kernel, with the data sitting in a thrust::host_vector?
My approach was (the back-copy to the host is omitted):
int CUDA_CountAndAdd_Kernel(thrust::host_vector<float> *samples, thrust::host_vector<int> *counts, int n) {
    // allocate device storage and copy the host data over
    thrust::device_ptr<float> dSamples = thrust::device_malloc<float>(n);
    thrust::copy(samples->begin(), samples->end(), dSamples);
    thrust::device_ptr<int> dCounts = thrust::device_malloc<int>(n);
    thrust::copy(counts->begin(), counts->end(), dCounts);

    // get raw pointers to pass to the kernel
    float *dSamples_raw = thrust::raw_pointer_cast(dSamples);
    int *dCounts_raw = thrust::raw_pointer_cast(dCounts);

    CUDA_CountAndAdd_Kernel<<<1, n>>>(dSamples_raw, dCounts_raw);

    thrust::device_free(dCounts);
    thrust::device_free(dSamples);
}
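(The omitted back-copy would, I assume, just be another thrust::copy from the device pointer range back into the host vector before the device_free calls, roughly:

    thrust::copy(dCounts, dCounts + n, counts->begin());

since thrust::copy accepts a device_ptr range as the source and a host iterator as the destination.)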
The kernel looks like:
__global__ void CUDA_CountAndAdd_Kernel_Device(float *samples, int *counts)
But compilation fails with:
error: argument of type "float **" is incompatible with parameter of type "thrust::host_vector<float, std::allocator<float>> *"
Huh?! I thought I was passing float and int raw pointers? Or am I missing something?
You are launching the kernel using the name of the host function the call is inside, not the name of the kernel itself, hence the parameter mismatch: the compiler resolves the call to CUDA_CountAndAdd_Kernel, which expects thrust::host_vector pointers.
Change:
CUDA_CountAndAdd_Kernel<<<1, n>>>(dSamples_raw, dCounts_raw);
to
CUDA_CountAndAdd_Kernel_Device<<<1, n>>>(dSamples_raw, dCounts_raw);
and see what happens.
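For reference, here is a minimal sketch of how the corrected wrapper could look. It uses thrust::device_vector instead of device_malloc/device_free (so the device memory is released automatically) and renames the host function to CUDA_CountAndAdd to avoid any name collision; the kernel body itself is assumed to match the signature shown in the question.

    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/copy.h>

    __global__ void CUDA_CountAndAdd_Kernel_Device(float *samples, int *counts);

    void CUDA_CountAndAdd(thrust::host_vector<float> *samples, thrust::host_vector<int> *counts, int n) {
        // device_vector allocates device memory and copies the host data in one step
        thrust::device_vector<float> dSamples(samples->begin(), samples->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());

        // launch the kernel under its own name, passing raw device pointers
        CUDA_CountAndAdd_Kernel_Device<<<1, n>>>(thrust::raw_pointer_cast(dSamples.data()),
                                                 thrust::raw_pointer_cast(dCounts.data()));

        // copy the results back to the host before the device_vectors go out of scope
        thrust::copy(dCounts.begin(), dCounts.end(), counts->begin());
    }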