CUDA workflow - possible scenario
The GeForce GTX 560 Ti has 8 SMs, and each SM has 48 CUDA cores (SPs). I'm going to launch a kernel like this: kernel<<<1024,1024>>>. The SM schedules threads in groups of 32 parallel threads called warps. How will the blocks and threads be distributed among the 8 SMs and the 48 SPs in each SM? We have 1024 blocks of 1024 threads each, so what is a possible scenario? What is the maximum number of threads literally executing at the same time? What is the difference between Fermi's dual warp scheduler and earlier schedulers?
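To make the launch concrete, here is a minimal sketch of what I mean (the kernel body is just a placeholder; only the launch configuration matters for the question):

__global__ void kernel(float *data)
{
    // 1024 blocks * 1024 threads per block = 1,048,576 threads in total
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data[idx] *= 2.0f;   // placeholder per-thread work
}

int main()
{
    float *d_data;
    cudaMalloc(&d_data, 1024 * 1024 * sizeof(float));
    kernel<<<1024, 1024>>>(d_data);   // <<<blocks per grid, threads per block>>>
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}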
The NVIDIA-supplied occupancy calculator spreadsheet, which ships with every SDK release and is also available for download here, can answer the first three sub-questions you have asked.
As for the difference between multiprocessor-level scheduling on Fermi and earlier architectures, the name ("dual warp scheduler") really says it all. On Fermi, each MP issues instructions from two warps simultaneously, compared with a single warp in the first two generations of CUDA-capable architectures. If you want a more detailed answer than that, I recommend reading the Fermi architecture whitepaper, available for download here.
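If it helps to see the warp decomposition in code, here is a small sketch (my own illustration, not from an NVIDIA sample): with blockDim.x = 1024 every block contains 1024 / 32 = 32 warps, and each of the two schedulers in a Fermi SM picks one ready warp per cycle to issue an instruction from.

#include <cstdio>

// Each thread records which warp within its block it belongs to.
// warpSize is a built-in CUDA variable (32 on current hardware).
__global__ void record_warp(int *warp_of_thread)
{
    int gid = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    warp_of_thread[gid] = threadIdx.x / warpSize;      // warp index within the block
}

int main()
{
    const int blocks = 1024, threads = 1024;
    int *d_warps;
    cudaMalloc(&d_warps, blocks * threads * sizeof(int));
    record_warp<<<blocks, threads>>>(d_warps);
    cudaDeviceSynchronize();

    int first_block[1024];
    cudaMemcpy(first_block, d_warps, sizeof(first_block), cudaMemcpyDeviceToHost);
    // Threads 0..31 land in warp 0, thread 32 starts warp 1, and so on.
    printf("thread 0 -> warp %d, thread 31 -> warp %d, thread 32 -> warp %d\n",
           first_block[0], first_block[31], first_block[32]);
    cudaFree(d_warps);
    return 0;
}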