How slow (how many cycles) is calculating a square root?

How slow (how many cycles) is calculating a square root? This came up in a molecular dynamics course where efficiency is important and taking unnecessary square roots had a noticeable impact on the running time of the algorithms.
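
For context, the standard way to avoid unnecessary square roots in a molecular dynamics inner loop is to compare squared distances against a squared cutoff. A minimal sketch of the idea (the Vec3 type and within_cutoff function are illustrative names, not from the question):

#include <cstdio>

struct Vec3 { double x, y, z; };

// Since both sides are non-negative, r < cutoff  <=>  r*r < cutoff*cutoff,
// so the pair test needs no square root at all.
bool within_cutoff(const Vec3& a, const Vec3& b, double cutoff) {
    const double dx = a.x - b.x;
    const double dy = a.y - b.y;
    const double dz = a.z - b.z;
    const double r2 = dx * dx + dy * dy + dz * dz;
    return r2 < cutoff * cutoff;    // instead of std::sqrt(r2) < cutoff
}

int main() {
    const Vec3 a = {0.0, 0.0, 0.0}, b = {1.0, 2.0, 2.0};   // distance is exactly 3
    std::printf("within 3.5? %d, within 2.5? %d\n",
                within_cutoff(a, b, 3.5), within_cutoff(a, b, 2.5));
    return 0;
}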


Based on Agner Fog's instruction tables for the 65nm Core 2, SSE performance is about equal to that of FSQRT, FDIV, FMUL and FADD; SSE only looks faster because it cannot do 80-bit math. SSE does, however, have a very fast approximate reciprocal and approximate reciprocal square root (see the sketch after the tables below).

On the 45nm Core 2, FSQRT and FDIV got faster, while FADD and FMUL latencies are unchanged. Once again, SSE performance is about the same as x87.

Intel Core 2 (Merom, 65nm)

Instruction   Operands   Latency     Reciprocal throughput
FSQRT                    6 - 69
FADD(P)       r          3           1
FMUL(P)       r          5           2
FDIV(R)(P)    r          6 - 38 d    5 - 37 d
ADDSS/D       xmm, xmm   3           1
ADDPS/D       xmm, xmm   3           1
MULSS         xmm, xmm   4           1
MULSD         xmm, xmm   5           1
MULPS         xmm, xmm   4           1
MULPD         xmm, xmm   5           1
DIVSS         xmm, xmm   6 - 18 d    5 - 17 d
DIVSD         xmm, xmm   6 - 32 d    5 - 31 d
DIVPS         xmm, xmm   6 - 18 d    5 - 17 d
DIVPD         xmm, xmm   6 - 32 d    5 - 31 d
SQRTSS/PS     xmm, xmm   6 - 29      6 - 29
SQRTSD/PD     xmm, xmm   6 - 58      6 - 58
RSQRTSS/PS    xmm, xmm   3           2

Intel Core 2 (Wolfdale, 45nm)

Instruction   Operands   Latency     Reciprocal throughput
FSQRT                    6 - 20
FADD(P)       r          3           1
FMUL(P)       r          5           2
FDIV(R)(P)    r          6 - 21 d    5 - 20 d
ADDSS/D       xmm, xmm   3           1
ADDPS/D       xmm, xmm   3           1
MULSS         xmm, xmm   4           1
MULSD         xmm, xmm   5           1
MULPS         xmm, xmm   4           1
MULPD         xmm, xmm   5           1
DIVSS         xmm, xmm   6 - 13 d    5 - 12 d
DIVSD         xmm, xmm   6 - 21 d    5 - 20 d
DIVPS         xmm, xmm   6 - 13 d    5 - 12 d
DIVPD         xmm, xmm   6 - 21 d    5 - 20 d
SQRTSS/PS     xmm, xmm   6 - 13      5 - 12
SQRTSD/PD     xmm, xmm   6 - 20      5 - 19
RSQRTSS/PS    xmm, xmm   3           2

The following notes are from Agner Fog's tables: the figures represent the results of his measurements rather than the official values published by microprocessor vendors, so some values are higher or lower than those published elsewhere.

Latency: The delay that the instruction generates in a dependency chain. The numbers are minimum values; cache misses, misalignment, and exceptions may increase the clock counts considerably. Floating-point operands are presumed to be normal numbers. Denormal numbers, NaNs and infinities increase the delays very much, except in XMM move, shuffle and Boolean instructions. Floating-point overflow, underflow, denormal or NaN results give a similar delay. The time unit is core clock cycles, not the reference clock cycles given by the time stamp counter.

Reciprocal throughput: The average number of core clock cycles per instruction for a series of independent instructions of the same kind in the same thread. For example, RSQRTSS above has latency 3 and reciprocal throughput 2: each result takes 3 cycles to appear, but a new independent RSQRTSS can start every 2 cycles.

d: Round divisors or low precision give low values.
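
To illustrate the fast approximate reciprocal square root mentioned above: RSQRTSS gives roughly 12 bits of precision, and one Newton-Raphson step brings that close to full single precision at the cost of a few multiplies. A minimal sketch using the SSE intrinsic (fast_rsqrt is an illustrative name; the constants come from the standard Newton iteration for 1/sqrt(x)):

#include <xmmintrin.h>   // SSE intrinsics: _mm_set_ss, _mm_rsqrt_ss, _mm_cvtss_f32
#include <cstdio>

// Approximate 1/sqrt(x): RSQRTSS gives ~12 bits; one Newton-Raphson
// step, r1 = r0 * (1.5 - 0.5 * x * r0 * r0), refines it to ~23 bits.
float fast_rsqrt(float x) {
    const float r0 = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    return r0 * (1.5f - 0.5f * x * r0 * r0);
}

int main() {
    std::printf("fast_rsqrt(2) = %.8f (exact 1/sqrt(2) = 0.70710678)\n",
                fast_rsqrt(2.0f));
    return 0;
}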


Square root is about 4 times slower than addition with -O2, or about 13 times slower without -O2 (measured below). Elsewhere on the net I found estimates of 50-100 cycles, which may be true, but an absolute cycle count isn't a very useful measure of relative cost, so I threw together the code below to make a relative measurement. Let me know if you see any problems with the test code.

The code below was run on an Intel Core i3 under the Windows 7 operating system and was compiled with Dev-C++ (which uses GCC). Your mileage may vary.

#include <cstdlib>   // rand, system, EXIT_SUCCESS
#include <iostream>
#include <cmath>     // std::sqrt
#include <ctime>     // std::clock, CLOCKS_PER_SEC

/*
Output using -O2:

1 billion square roots running time: 14738ms

1 billion additions running time   : 3719ms

Press any key to continue . . .

Output without -O2:

10 million square roots running time: 870ms

10 million additions running time   : 66ms

Press any key to continue . . .

Results:

Square root is about 4 times slower than addition using -O2,
            or about 13 times slower without using -O2
*/

int main(int argc, char *argv[]) {

    const int cycles = 100000;
    const int subcycles = 10000;   // 100,000 * 10,000 = 1 billion operations per test

    double squares[cycles];

    for ( int i = 0; i < cycles; ++i ) {
        squares[i] = rand();
    }

    std::clock_t start = std::clock();

    for ( int i = 0; i < cycles; ++i ) {
        for ( int j = 0; j < subcycles; ++j ) {
            // Repeated square roots drive each value toward 1.0, so the
            // operands stay normal (no denormal/infinity slowdown).
            squares[i] = std::sqrt(squares[i]);
        }
    }

    double time_ms = ( ( std::clock() - start ) / (double) CLOCKS_PER_SEC ) * 1000;

    std::cout << "1 billion square roots running time: " << time_ms << "ms" << std::endl;

    start = std::clock();

    for ( int i = 0; i < cycles; ++i ) {
        for ( int j = 0; j < subcycles; ++j ) {
            // Caveat: each element is ~1.0 after the sqrt loop, so repeated
            // doubling overflows to infinity after roughly 1,000 iterations,
            // which may itself affect timing on some CPUs.
            squares[i] = squares[i] + squares[i];
        }
    }

    time_ms = ( ( std::clock() - start ) / (double) CLOCKS_PER_SEC ) * 1000;

    std::cout << "1 billion additions running time   : " << time_ms << "ms" << std::endl;

    system("PAUSE");   // Windows-specific: keeps the console window open
    return EXIT_SUCCESS;
}
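
For reference, assuming the file is saved as sqrt_bench.cpp (name assumed), the two configurations above correspond to builds like g++ -O2 sqrt_bench.cpp -o sqrt_bench and g++ sqrt_bench.cpp -o sqrt_bench.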


Square root takes several cycles, but it takes orders of magnitude more to access memory if it is not in cache. Therefore, trying to avoid computations by fetching pre-computed results from memory may actually be detrimental to performance.

It's difficult to say in the abstract whether you would gain anything, so if you want to know for sure, benchmark both approaches.
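
As a starting point for such a benchmark, here is a minimal sketch (my own construction, not taken from the talk linked below) that contrasts fetching precomputed square roots from a table too large for cache with simply recomputing them; the table size, index-scrambling constant, and iteration count are arbitrary assumptions:

#include <chrono>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t N = 1u << 24;   // 16M doubles = 128 MiB, far larger than cache
    std::vector<double> table(N);
    for (std::size_t i = 0; i < N; ++i)
        table[i] = std::sqrt(static_cast<double>(i));

    const int iters = 10000000;
    std::size_t idx = 1;
    double sum = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int k = 0; k < iters; ++k) {
        idx = (idx * 2654435761u) & (N - 1);   // pseudo-random walk to defeat prefetching
        sum += table[idx];                     // fetch the precomputed value (likely a cache miss)
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int k = 0; k < iters; ++k) {
        idx = (idx * 2654435761u) & (N - 1);
        sum += std::sqrt(static_cast<double>(idx));   // recompute the same value instead
    }
    auto t2 = std::chrono::steady_clock::now();

    const long long lookup_ms =
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    const long long compute_ms =
        std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
    std::printf("lookup: %lld ms, recompute: %lld ms (checksum %g)\n",
                lookup_ms, compute_ms, sum);
    return 0;
}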

Here's a great talk on the matter by Eric Brumer, a compiler developer on MSVC: http://channel9.msdn.com/Events/Build/2013/4-329
