
Is MPI widely used today in HPC? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 10 years ago.

Is MPI widely used today in HPC?


A substantial majority of the multi-node simulation jobs that run on clusters everywhere use MPI. The most popular alternatives include GASNet, which supports PGAS languages; the infrastructure underlying Charm++; and Linda tuple spaces, which deserve an honourable mention simply because of the number of core-hours spent running Gaussian. UPC, Co-Array Fortran, HPF, PVM, and the rest end up dividing the tiny fraction that is left.
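
For a sense of what that looks like in practice, here is a minimal sketch of the message-passing model in C (a generic illustration, not code from any of the applications above):

    /* A generic sketch of the message-passing model: every process
       learns its rank, and ranks exchange explicit messages.
       Build with an MPI compiler wrapper (e.g. mpicc) and launch
       with mpirun/mpiexec -np <N>. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        if (rank == 0 && size > 1) {
            int token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 got %d from rank 0\n", token);
        }

        MPI_Finalize();
        return 0;
    }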

Any time you read in the science news about a simulation of a supernova, or about Formula One racing teams using simulation to "virtual wind-tunnel" their cars before making design changes, there's an excellent chance that it is MPI under the hood.

It's arguably a shame that technical computing leans on it so heavily, and that no more popular general-purpose higher-level tools have achieved the same uptake, but that's where we are at the moment.


I worked in HPC for two years and can say that 99% of cluster applications were written using MPI.


MPI is widely used in high-performance computing, but many machines boost performance by deploying shared-memory compute nodes, which are usually programmed with OpenMP. In those cases the application uses both MPI and OpenMP to get optimal performance. Some systems also use GPUs to improve performance; I am not sure how well MPI supports that particular execution model.
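
To make the hybrid model concrete, here is a minimal sketch (assuming one MPI rank per node with OpenMP threads inside it; the launch command and thread counts depend on your cluster):

    /* A minimal sketch of the hybrid style described above: MPI
       between nodes, OpenMP threads within each shared-memory node.
       Assumes an MPI library that supports MPI_THREAD_FUNNELED and
       an OpenMP-capable compiler (e.g. mpicc -fopenmp). */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        /* Request threaded init; only the main thread calls MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* OpenMP fans out within the node owned by this MPI rank. */
        #pragma omp parallel
        printf("rank %d: thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }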

But the short answer would be yes. MPI is widely used in HPC.


It's widely used on clusters. Often it's the only way a given machine supports multi-node jobs. There are other abstractions, such as UPC or Star-P, but those are usually implemented on top of MPI.


Yes. For example, the Top500 supercomputers are benchmarked using LINPACK (HPL), which is MPI-based.


Speaking of HPC, MPI is still the main tool even nowadays. Although GPUs are making strong inroads into HPC, MPI remains the number one choice.
