
Death of the Cell processor

Lately I have heard lots of people claiming that the Cell processor is dead, mainly for the following reasons:

  • Lack of support in the new PlayStation 3, as the user can no longer install Linux
  • The increasing processing power of GPUs and their sinking costs
  • The existence of a unified programming approach (OpenCL) for different GPUs but not for the CBE (well, today it was announced for the Cell!)
  • Scarcity of real-world examples of Cell use (apart from academic circles)
  • A general feeling of failure

What do you think? If you started programming the Cell two or three years ago, will you continue with it, or are you considering switching to GPUs? Is a new version of the Cell coming?

Thanks


I'd say the reasons for the lack of popularity of Cell development are closer to:

  • The lack of success of the PS3 (due to many mistakes on Sony's part and strong competition from the Xbox 360)
  • Low manufacturing yield, high cost (partly due to the low yield), and a lack of affordable hardware systems other than the PS3
  • Development difficulty (the Cell is an unusual processor to design for, and the tooling is lacking)
  • Failure to achieve a significant performance advantage over existing x86-based commodity hardware. Even the Xbox 360's several-year-old triple-core Power-architecture processor has proven competitive; compared to a modern Core 2 Quad, the Cell's advantages just aren't evident.
  • Increasing competition from GPU general-purpose computing platforms such as CUDA


It's easier to write parallel programs for thousands of threads than it is for tens of threads. GPUs have thousands of threads, with hardware thread scheduling and load balancing. Although current GPUs are suited mainly to small data-parallel kernels, they have tools that make such programming trivial. Cell has only a few processors - on the order of tens - in consumer configurations. (The Cell derivatives used in supercomputers cross that line and have hundreds of processors.)
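
As a concrete illustration of the "data parallel small kernel" style described above, here is a minimal OpenCL C sketch (my own illustrative example, not from the original post - the kernel name and parameters are made up): each of thousands of work-items processes one array element, and the hardware takes care of scheduling and load balancing.

    /* Minimal data-parallel OpenCL kernel: one work-item per element.
     * The host enqueues roughly one work-item per array element;
     * hardware scheduling maps them onto the available cores. */
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y,
                        const unsigned int n)
    {
        size_t i = get_global_id(0);   /* this work-item's global index */
        if (i < n)                     /* guard against padded launch sizes */
            y[i] = a * x[i] + y[i];
    }

There is no loop over elements and no explicit thread management; that triviality is exactly what made GPU tooling attractive compared to hand-scheduling work across a handful of SPEs.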

IMHO one of the biggest problems with Cell was the lack of an instruction cache. (I argued this vociferously with the Cell architects on a plane back from the MICRO conference in Barcelona in 2005. Although they disagreed with me, I have since heard the same from big supercomputer users of Cell.) People can cope with fitting into fixed-size data memories - GPUs have the same problem, although people complain. But fitting code into a fixed-size instruction memory is a pain. Add an IF statement, and performance may fall off a cliff because you have to start using overlays. It's a lot easier to control your data structures than it is to avoid adding code to fix bugs late in the development cycle.
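
To make the "fixed-size data memories" point concrete: on a Cell SPE, all code and data share a single 256 KB local store, and data moves in and out via explicit DMA. A minimal sketch of that style, assuming the Cell SDK's spu_mfcio.h interface (the chunk size and function name are illustrative, not from the original post):

    #include <stdint.h>
    #include <spu_mfcio.h>

    #define CHUNK 4096   /* bytes per DMA transfer; illustrative size */

    /* This buffer lives in the 256 KB local store, alongside the code itself. */
    static char buf[CHUNK] __attribute__((aligned(128)));

    void process_chunk(uint64_t ea)   /* ea = effective address in main memory */
    {
        mfc_get(buf, ea, CHUNK, 0, 0, 0);   /* DMA a chunk in, on tag 0 */
        mfc_write_tag_mask(1 << 0);
        mfc_read_tag_status_all();          /* block until the DMA completes */

        /* ... compute on buf. Note that this function's instructions occupy
           the same 256 KB local store as buf: code growth (an extra IF, a
           late bug fix) eats the same budget, and once the code no longer
           fits you are into overlays. */

        mfc_put(buf, ea, CHUNK, 0, 0, 0);   /* DMA the results back out */
        mfc_write_tag_mask(1 << 0);
        mfc_read_tag_status_all();
    }

Staging data like this is tedious but manageable, because you choose your buffer sizes up front; the complaint above is that the same fixed budget applies to instructions, where growth is much harder to control.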

GPUs originally had the same problems as Cell - no caches, neither I$ nor D$.

But GPUs did many more threads, and did data parallelism so much better than Cell, that they ate up that market - leaving Cell only its locked-in console customers, and codes that were more complicated than GPU code but less complicated than CPU code. Squeezed in the middle.

And, in the meantime, GPUs are adding I$ and D$. So they are becoming easier to program.


Why did Cell die?

1) The SDK was horrid. I saw some very bright developers about to scratch their eyes out poring through IBM mailing lists trying to figure out this problem or that with the Cell SDK.

2) The bus between compute units was starting to show scaling problems and never would have made it to 32 cores.

3) OpenCL was about 3-4 years too late to be of any use.


"If you started programming the Cell two or three years ago, will you continue with it or are you considering switching to GPUs?"

I would have thought that 90% of the people who program for the Cell processor are not in a position where they can arbitrarily decide to stop programming for it. Are you aiming this question at a very specific development community?
