CUDA 4.0 vs 3.2
Is CUDA 4.0 faster than 3.2?
I am not interested in the additions of CUDA 4.0 but rather in knowing if memory allocation and transfer will be faster if I used CUDA 4.0. Thanks

Memory allocation and transfer depend more (if not exclusively) on the hardware capabilities (more efficient pipelines, cache sizes), not on the version of CUDA.
Even while on CUDA 3.2, you can install the CUDA 4.0 drivers (270.x); drivers are backward compatible, so you can test the new driver separately from recompiling your application. It is true that there are driver-level optimizations that affect run-time performance.
While this has generally worked fine on Linux, I have noticed some hiccups on Mac OS X. A quick way to confirm which driver and runtime you are actually running is sketched below.
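A minimal check using the standard runtime API calls cudaDriverGetVersion and cudaRuntimeGetVersion (both available in 3.2 and 4.0):

// version_check.cu -- print the driver and runtime versions actually in use
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int driverVersion = 0, runtimeVersion = 0;

    // The driver version reflects the installed kernel-mode driver
    // (e.g. 270.x reports 4000, meaning CUDA 4.0 support).
    cudaDriverGetVersion(&driverVersion);

    // The runtime version reflects the toolkit the app was built against.
    cudaRuntimeGetVersion(&runtimeVersion);

    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}

If the driver version prints as 4.0 while the runtime prints as 3.2, you are in exactly the "new driver, old toolkit" configuration described above.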
Yes, I have a fairly substantial application which ran ~10% faster once I switched from 3.2 to 4.0. This is without any code changes to take advantage of new features.
I also have a GTX480 if that matters any.
Note that the performance gains may be due to the fact that I'm using a newer version of the dev drivers (installed automatically when you upgrade). I imagine NVIDIA may well be tweaking for CUDA performance the way they do for blockbuster games like Crysis.
Performance of memory allocation mostly depends on host platform (because the driver models differ) and driver implementation. For large amounts of device memory, allocation performance is unlikely to vary from one CUDA version to the next; for smaller amounts (say less than 128K), policy changes in the driver suballocator may affect performance.
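A crude way to see whether the suballocator policy matters on your platform is to time small allocate/free pairs against large ones under both toolkits. A minimal sketch (sizes and iteration counts are arbitrary, and it needs a C++11-capable host compiler):

// alloc_timing.cu -- crude host-side timing of small vs. large cudaMalloc calls
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

// Wall-clock time for 'iters' allocate/free pairs of 'bytes' each, in ms.
static double timeAllocs(size_t bytes, int iters)
{
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        void *p = 0;
        cudaMalloc(&p, bytes);  // synchronous, so host timing is fair
        cudaFree(p);
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main(void)
{
    cudaFree(0);  // force context creation so it isn't counted below

    // Small requests may be served by the driver's suballocator;
    // large ones go to the driver/OS directly, so the costs can
    // move independently from one CUDA version to the next.
    printf("4 KB  x 1000 : %8.2f ms\n", timeAllocs(4 << 10, 1000));
    printf("64 MB x 100  : %8.2f ms\n", timeAllocs(64 << 20, 100));
    return 0;
}

If only the small-allocation numbers change between 3.2 and 4.0 on the same hardware, that points at driver policy rather than the hardware itself.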
For pinned memory, CUDA 4.0 is a special case because it introduced some major policy changes on UVA-capable systems. First, on initialization the driver makes some huge virtual address reservations. Second, all pinned memory is portable, so it must be mapped for every GPU in the system.
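To make the portability point concrete: on 3.2 you had to request portable pinned memory explicitly; on a UVA-capable 4.0 system every pinned allocation effectively behaves that way, which is why allocation cost scales with the number of GPUs. A minimal sketch using cudaHostAlloc (the 64 MB size is arbitrary):

// pinned_alloc.cu -- portable pinned allocation, usable by every GPU
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 64 << 20;
    void *h = 0;

    // On CUDA 3.2 the cudaHostAllocPortable flag is what makes the
    // buffer pinned for all contexts; on UVA-capable CUDA 4.0 systems
    // pinned memory is treated as portable regardless.
    cudaError_t err = cudaHostAlloc(&h, bytes, cudaHostAllocPortable);
    if (err != cudaSuccess) {
        printf("cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // ... use 'h' as a staging buffer for cudaMemcpyAsync on any device ...

    cudaFreeHost(h);
    return 0;
}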
Performance of PCI Express transfers is mostly an artifact of the platform, and usually there is not much a developer can do to control it. (For small CUDA memcpy's, driver overhead may vary from one CUDA version to another.) One issue is that on systems with multiple I/O hubs, nonlocal DMA accesses go across the HT/QPI link and so are much slower. If you're targeting such systems, use NUMA APIs to steer memory allocations (and threads) onto the same CPU that the GPU is plugged into.
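On Linux you can do that steering with libnuma. A sketch, assuming (for illustration only) that the GPU is attached to NUMA node 0; the real node can be read from the GPU's PCI device entry in sysfs:

// numa_pin.c -- keep the transfer thread and staging buffer on the
// CPU nearest the GPU. Build with: gcc numa_pin.c -lnuma
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    // Assumption for this sketch: the GPU hangs off NUMA node 0.
    // Look up the real node via the GPU's PCI bus ID, e.g.
    // /sys/bus/pci/devices/<busid>/numa_node.
    const int gpu_node = 0;

    numa_run_on_node(gpu_node);                      // pin this thread
    size_t bytes = 64UL << 20;
    void *buf = numa_alloc_onnode(bytes, gpu_node);  // node-local memory
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    // ... pin with cudaHostRegister (new in 4.0) and stage copies here ...

    numa_free(buf, bytes);
    return 0;
}

This keeps the DMA traffic local to one I/O hub instead of crossing the HT/QPI link.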
The answer is yes, because CUDA 4.0 reduces system memory usage and CPU memcpy() overhead.