Which one is faster: 5+5+5+5+5 or 5*5?
I'm not sure quite how to ask this, so please help me tag it. Anyway, my friend asked me which one is faster in Java:
int a = 5 + 5 + 5 + 5 + 5
or
int b = 5 * 5 ?
Is it language dependent? I mean, might a be faster than b in Java but not in C?
My answer is that a is faster than b, because of how addition and multiplication compare in computer organization.
It is platform (and compiler) dependent. If you need to know, then measure it. It's unlikely that you'll be in a situation where you need to know.
However, in both your examples, these will be evaluated at compile time (so no run-time computation will be required); see e.g. http://en.wikipedia.org/wiki/Constant_folding.
In your case it does not change anything. Let's compile:
public class Toto {
    public static void main(String[] args) {
        int a = 5 + 5 + 5 + 5 + 5;
        int b = 5 * 5;
    }
}
and check the decompilation result:
public class Toto
{
    public static void main(String args[])
    {
        byte byte0 = 25;
        byte byte1 = 25;
    }
}
The compiler folded everything into constants at compile time.
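If you prefer not to trust a decompiler, the JDK's javap tool disassembles the bytecode directly; on a typical JDK the listing looks roughly like this (the exact output varies by compiler version):

$ javac Toto.java
$ javap -c Toto
Compiled from "Toto.java"
public class Toto {
  public Toto();
    Code:
       0: aload_0
       1: invokespecial #1   // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: bipush        25
       2: istore_1
       3: bipush        25
       5: istore_2
       6: return
}

Only the folded constant 25 appears (bipush 25); no iadd or imul opcode is emitted at all.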
Both are constant expressions, so they will be simplified to
int a = 25;
int b = 25;
at compile time (100% sure, even toy compilers do this, as it is one of the simplest optimisations possible).
In the remote case that those operations are not simplified, and assuming a JIT that maps the multiply and add opcodes 1:1 onto their CPU instruction counterparts, integer arithmetic operations take roughly the same number of cycles on most modern architectures, so multiplying once will be faster than adding four times. (I just checked: addition is still slightly faster than multiplication per operation, about 1 clock vs 3 clocks, but since the four dependent additions cost 4 clocks, the single multiplication still wins here.)
Even on a super-scalar architecture, where more than one instruction can be issued per cycle, the chain of add operations has a data dependency, so they have to execute sequentially; and since an add takes just 1 cycle, there is no pipeline overlap to exploit, so the chain will still take 4 cycles.
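To make that data dependency concrete, here is a sketch (purely illustrative; with literal constants the whole chain is folded away anyway):

int x = 5;      // pretend x is not a compile-time constant
int s = x + x;  // add #1
s = s + x;      // add #2: needs the result of add #1
s = s + x;      // add #3: needs the result of add #2
s = s + x;      // add #4: needs the result of add #3

Each addition consumes the result of the previous one, so even a CPU that could issue several independent adds per cycle has to run these four back to back.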
In most CPU architectures, the optimal instruction sequence will probably be a shift of two positions to the left followed by an addition of the original value: (5 << 2) + 5.
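As a sketch of that strength reduction (note that the parentheses are required: in Java, + binds tighter than <<, so 5 << 2 + 5 would parse as 5 << (2 + 5), which is 640):

int n = 5;
int viaShiftAdd = (n << 2) + n;  // n*4 + n == n*5 == 25
int viaMultiply = n * 5;         // what you would normally write

This is the kind of rewrite a compiler may perform for you; there is rarely a reason to write it by hand.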
There are at least two questions here: the performance of the underlying operations, and what the compiler does. (In fact, what both the Java-to-bytecode compiler and the JIT compiler do.)
Firstly, the question of the "raw" operations. As a general rule, addition, subtraction and multiplication take roughly the same time on a large number of processors. You might imagine that multiplication is a lot slower, but it turns out not to be. Have a look, for example, at this paper giving some experimental timings of X86 instructions on various processors. Multiplication is ever so slightly "slower" overall in that it has a higher latency. What that effectively means is that if the processor was doing nothing but a series of multiplications on different pieces of data, it would be slightly slower than doing a series of additions on different pieces of data. But provided that there are other instructions round about that can be executing "while the multiplication is finishing off", then there ends up not being much difference overall between addition and multiplication.
I also made a list a while ago of the timings of floating point instructions used by Hotspot on a 32-bit Pentium (the figures came originally from Intel's processor manual, and as I recall I did test experimentally that in practice these are the timings you get). Notice that there's a very similar pattern: addition, subtraction and multiplication essentially take the same time as one another; division is notably slower.
Then, if you look at the table on the page I just mentioned, you'll see that divisions by a power of 2 are faster because the JIT compiler can translate these into a multiplication. Powers of two can be represented exactly in floating point representation, so there is no loss of precision if you replace a division by x with a multiplication by 1/x, where x is a power of 2.
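A minimal illustration of that rewrite (the results are bit-for-bit identical here precisely because 8 is a power of two, so both 8.0 and its reciprocal 0.125 are exactly representable as doubles):

double x = 12.345;
double byDivision   = x / 8.0;    // what the source says
double byReciprocal = x * 0.125;  // what the JIT may emit instead
// byDivision == byReciprocal for normal values, since 1/8 is exact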
Or in other words, both the Java compiler and JIT compiler can apply various optimisations which mean that the underlying instructions for a given piece of code aren't necessarily what you think they are. As others have mentioned, one very basic piece of optimisation is to pre-compute values, so that if you write "5+5+5+5+5", in reality the Java compiler should replace this with "25".
It all depends on the environment you are using:
Which compiler? If it is a good one, it compiles both to constants.
What's the rest of the program code? If the result is not used, both compile to a NOP (no operation).
Which hardware is it running on? On a processor that is heavily optimised for multiplication but not for addition, the multiplication might (in theory) be faster than the add operations.
etc.
In most cases you shouldn't care about what is faster, most compilers are smarter than you and will optimize it for you.
If you really care about it, you probably wouldn't have asked the question here, but in that case: benchmark it. Create two programs A and B, use them both in a typical real-life scenario, measure the time/energy/whatever parameter you're interested in, based on the results, decide which program is better for your needs.
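As a rough sketch of such a benchmark (naive on purpose: a real measurement should use a harness like JMH, which handles JIT warm-up and dead-code elimination properly; this version only gives a ballpark figure):

public class Bench {
    public static void main(String[] args) {
        // read x at run time so the compiler cannot constant-fold the expressions
        int x = Integer.parseInt(args[0]);
        long sum = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            sum += x + x + x + x + x;  // repeated addition
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            sum += x * 5;              // single multiplication
        }
        long t2 = System.nanoTime();
        // print sum so the loops cannot be optimised away entirely
        System.out.println("add: " + (t1 - t0) + " ns, mul: " + (t2 - t1) + " ns (" + sum + ")");
    }
}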
We should compare time complexity. The multiplication corresponds to f1(n) = n * c, and the repeated addition to the equivalent f2(n) = c + c + ... + c (n terms). The complexity of the multiplication is O(1) (constant time: one operation for any n), while the complexity of the addition is O(n) (linear time: the number of additions equals n).
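In code, the two growth rates look like this (hypothetical helper methods, for illustration only; for the fixed n = 5 of the question both are of course trivially fast):

// O(n): performs n additions
static int byAddition(int c, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += c;
    }
    return sum;
}

// O(1): a single multiplication, regardless of n
static int byMultiplication(int c, int n) {
    return c * n;
}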
It depends on the compiler, because each compiler has a different mechanism for this: some use a left-shift operation, others use a different approach.
But in many cases addition is faster than multiplication.
I guess addition is faster than multiplication, because (as far as I know) all multiplications are treated as additions.
Added:
Read here for some explanation.