What are the standard parallelism terms for CUDA concepts? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 11 years ago.

Note: Sorry this is not exactly a programming question; please migrate it if there is a more appropriate stackexchange site (I didn't see any; it's not theoretical CS).

I'm looking for less CUDA-specific terms for certain GPU-programming related concepts. OpenCL is somewhat helpful. I'm looking for "parallelism-theory" / research paper words more than practical keywords for a new programming language. Please feel free to post additions and corrections.

开发者_C百科

"warp"

I usually equate this to SIMD-width.
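To make that equation concrete, a minimal illustrative sketch (mine, not from any spec) of how the warp shows up as the SIMD width in CUDA code — `warpSize` is 32 on current NVIDIA hardware:

```cuda
// Illustrative sketch: the warp is the hardware SIMD group.
// warpSize (32 on current NVIDIA GPUs) plays the role of "SIMD width".
__global__ void laneAndWarp(int *lane, int *warp) {
    int t = threadIdx.x;
    lane[t] = t % warpSize;  // position within the SIMD group (the "lane")
    warp[t] = t / warpSize;  // which SIMD group this thread belongs to
}
```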

"block"

alternatives

  • "group" (OpenCL).
  • "thread-block" -- want something shorter
  • "tile"

"syncthreads"

It seems "barrier" is the more general word, used in multicore CPU programming I think, and OpenCL. Is there anything else?
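For illustration, a hedged sketch of why "barrier" fits: `__syncthreads()` is a block-wide barrier, the direct analogue of OpenCL's `barrier(CLK_LOCAL_MEM_FENCE)` — all reads must complete before any thread proceeds to its write:

```cuda
// Illustrative sketch (single-block shift): __syncthreads() acts as a
// block-wide barrier, like OpenCL's barrier(CLK_LOCAL_MEM_FENCE).
__global__ void shiftLeft(int *data, int n) {
    int i = threadIdx.x;
    int v = (i + 1 < n) ? data[i + 1] : 0;  // every thread reads first...
    __syncthreads();                        // ...barrier...
    if (i < n) data[i] = v;                 // ...then every thread writes
}
```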

"shared memory"

alternatives

  • "local memory" (OpenCL).
  • "tile/block cache"?

"kernel"

alternatives

  • "CTA / Cooperative Thread Array" (OpenCL). way too much of a mouthful, dunno what it means.
  • "GPU program" -- would be difficult to distinguish between kernel invocations.
  • "device computation"?


There aren't really exact enough technology-neutral terms for the detailed specifics of CUDA and OpenCL, and if you used more generic terms such as "shared memory" or "cache" you wouldn't be making clear precisely what you meant.

I think you might have to stick to the terms from one technology (perhaps putting the other in brackets), or use "his/her"-style paired wording, and add extra explanation if a term doesn't have a corresponding use in the other.
