Is there a difference in speed between programming languages? [closed]
I mean, is there really such a huge difference? And if so, what are the causes for it, and which language is the fastest?
Yes, there is a speed difference. Programming languages are like cars: there are Ferraris, Mustangs, and then there are Minis. Each serves a different purpose and is used differently. The difference comes from the way they are designed and built. Yes, you can replace one with another, but using a Ferrari sports car for day-to-day errands hardly makes any sense. You are the best judge, depending on what you want, how much you have, and how you want to use it.
I'm sorry, I couldn't resist posting this. There are other answers that talk about compiled vs. interpreted, execution vs. build time, tiny programs vs. extremely complex programs, domain, etc., which might make more sense to you...
It depends on what you mean by speed. I'll take your question to mean execution speed: how fast the code runs once you've written it. Really, the question isn't which programming language executes faster, but which implementation of a programming language executes fastest. However, that distinction may be splitting hairs.
Historically, compiled languages run faster than interpreted languages. That is, languages whose source code gets translated directly into something the machine can run (compiled) are faster than languages whose source code runs through another program or virtual machine. It's been a while since I've seen any data on this, but we observed in university circa 2008 that some interpreted languages - Java, for instance - were closing this gap through continued optimization of their virtual machines.
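To make that gap concrete, here is a rough micro-benchmark sketch, not a rigorous measurement: it assumes CPython 3 and the standard `timeit` module, and the absolute numbers depend entirely on your machine. It pits an interpreted Python loop against the same work done by the built-in `sum()`, which runs as compiled C inside the interpreter:

```python
# Rough sketch: interpreted loop vs. the C-implemented built-in sum().
# Assumes CPython 3; timings are machine-dependent.
import timeit

def py_sum(n):
    # Every iteration here is dispatched by the bytecode interpreter.
    total = 0
    for i in range(n):
        total += i
    return total

n = 10_000_000
loop_time = timeit.timeit(lambda: py_sum(n), number=1)
builtin_time = timeit.timeit(lambda: sum(range(n)), number=1)
print(f"interpreted loop: {loop_time:.2f}s  built-in sum(): {builtin_time:.2f}s")
```

The built-in usually wins by a comfortable factor, for exactly the reason described above: one path is interpreted step by step, the other runs as native code.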
Programming languages don't have speed; asking "how fast is this language?" is about as answerable as "how fast is this chair?" or "how many hits do I need to break this board?" - far too many variables are missing. Programming language implementations have speed, for a given piece of code. And that speed can vary greatly depending on the code: PyPy can beat GCC-compiled C on arithmetic loops with many iterations once its JIT kicks in, but it won't stand a chance if `for (int i = 0; i < 10; i++) printf("%d", i);` is run a single time.
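If you want to see that effect yourself, here is a minimal sketch (the filename `hotloop.py` is made up, and you need both interpreters installed): the same hot arithmetic loop, run once under CPython and once under PyPy.

```python
# hotloop.py - a hot arithmetic loop, the kind a tracing JIT optimizes well.
# Try:  python3 hotloop.py   vs.   pypy3 hotloop.py
import time

def spin(n):
    acc = 0
    for i in range(n):
        acc += i * i  # pure arithmetic, ideal JIT fodder
    return acc

start = time.perf_counter()
spin(50_000_000)
print(f"{time.perf_counter() - start:.2f}s")
```

The point isn't the specific numbers, but that the same source code has wildly different speeds under different implementations of the same language.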
> I mean is there such a huge difference
Define "huge". But yeah, there can be extreme differences. A hundred times slower is perfectly possible with the right language implementations and the right benchmark. For most real-world applications, the difference is much slower, partly because language implementations (mostly) can't influence the performance of things like I/O, network communication and external programs. You'll still want to write scientific calculation in Fortran or C (or use C libraries for this from other languages, e.g. NumPy with Python).
> what are the causes for it
Too many possibilities to even enumerate the most important ones. Static typing makes native compilation (be it ahead-of-time or just-in-time) much easier, so those languages typically get the speed boost of native compilation. JIT compilers for dynamic languages exist and are becoming increasingly common and good, though, so the gap is shrinking, although there will always be some overhead (someone has to manage the dynamicness, after all). But the quality of these compilers also matters: a school project that directly outputs assembly code won't fare well against the thousands of optimizations in GCC. And some languages require nearly no extra work (checks, abstractions, etc.) at runtime, while others depend on a whole virtual machine, etc.
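To see what "managing the dynamicness" looks like in practice, you can ask CPython to show the bytecode its virtual machine executes for a single addition. This is just an illustration; the exact opcode names vary by version (older CPython prints BINARY_ADD where 3.11+ prints BINARY_OP):

```python
# Disassemble a one-line function: every '+' goes through a generic opcode
# that must inspect operand types at runtime - work a static compiler does once.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Roughly, on CPython 3.11+:
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    0 (+)
#   RETURN_VALUE
```

A C compiler resolves that addition to a single machine instruction at build time; here, the type check and dispatch happen on every single execution.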
And of course, runtime execution speed isn't everything. Had I written in C the things I wrote in Python, I'd be old before I was done. Or I'd have given up programming.