
Can my loop be optimized any more?

Below is my innermost loop that's run several thousand times, with input sizes of 20 - 1000 or more. This piece of code takes up 99 - 99.5% of execution time. Is there anything I can do to help squeeze any more performance out of this?

I'm not looking to move this code to something like tree codes (Barnes-Hut); rather, I want to optimize the actual calculations happening inside, since the same calculations occur in the Barnes-Hut algorithm.

Any help is appreciated!

Edit: I'm running in Windows 7 64-bit with Visual Studio 2008 edition on a Core 2 Duo T5850 (2.16 GHz)

typedef double real;

struct Particle
{
    Vector pos, vel, acc, jerk;
    Vector oldPos, oldVel, oldAcc, oldJerk;
    real mass;
};

class Vector
{
private:
    real vec[3];

public:
    // Operators defined here
};

real Gravity::interact(Particle *p, size_t numParticles)
{
    PROFILE_FUNC();
    real tau_q = 1e300;

    for (size_t i = 0; i < numParticles; i++)
    {
        p[i].jerk = 0;
        p[i].acc = 0;
    }

    for (size_t i = 0; i < numParticles; i++)
    {
        for (size_t j = i+1; j < numParticles; j++)
        {
            Vector r = p[j].pos - p[i].pos;
            Vector v = p[j].vel - p[i].vel;
            real r2 = lengthsq(r);
            real v2 = lengthsq(v);

            // Calculate inverse of |r|^3
            real r3i = Constants::G * pow(r2, -1.5);

            // da = r / |r|^3
            // dj = v / |r|^3 - 3 * (r . v) * r / |r|^5
            Vector da = r * r3i;
            Vector dj = (v - r * (3 * dot(r, v) / r2)) * r3i;

            // Calculate new acceleration and jerk
            p[i].acc += da * p[j].mass;
            p[i].jerk += dj * p[j].mass;
            p[j].acc -= da * p[i].mass;
            p[j].jerk -= dj * p[i].mass;

            // Collision estimation
            // Metric 1) tau = |r|^2 / |a(j) - a(i)|
            // Metric 2) tau = |r|^4 / |v|^4
            real mij = p[i].mass + p[j].mass;
            real tau_est_q1 = r2 / (lengthsq(da) * mij * mij);
            real tau_est_q2 = (r2*r2) / (v2*v2);

            if (tau_est_q1 < tau_q)
                tau_q = tau_est_q1;
            if (tau_est_q2 < tau_q)
                tau_q = tau_est_q2;
        }
    }

    return sqrt(sqrt(tau_q));
}


  1. Inline the calls to lengthsq().

  2. Change pow(r2,-1.5) to 1/(r2*sqrt(r2)) to lower the cost of computing r^-1.5 (see the sketch after this list).

  3. Use scalars (p_i_acc, etc.) inside the innermost loop rather than p[i].acc to collect your result. The compiler may not know that p[i] isn't aliased with p[j], and that might force addressing of p[i] on each loop iteration unnecessarily.

4a. Try replacing the if (...) tau_q = with

    tau_q=minimum(...,...)

Many compilers recognize the minimum function as one they can do with predicated operations rather than real branches, avoiding pipeline flushes.

4b. [EDIT to split 4a and 4b apart] You might consider storing tau_est_q2 un-squared as tau_q, comparing against r2/v2 rather than (r2*r2)/(v2*v2). Then you avoid doing two multiplies for each iteration of the inner loop, in trade for a single squaring operation at the end. To do this, collect the minimums of the two metrics (the second not squared) separately, and take the minimum of those results in a single scalar operation on completion of the loop.

  1. [EDIT: I suggested the following, but in fact it isn't valid for the OP's code, because of the way p[j] is updated inside the loop.] Fold the two loops together. With two separate loops and a large enough set of particles, you thrash the cache and force a refetch from non-cache of those initial values in the second loop. The fold is trivial to do.

Beyond this you need to consider a) loop unrolling, b) vectorizing (using SIMD instructions; either hand-coding assembler or using the Intel compiler, which is supposed to be pretty good at this [but I have no experience with it]), and c) going multicore (using OpenMP).
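Putting suggestions 1 through 4a together, a minimal sketch might look like the following (assuming lengthsq() and dot() inline to plain arithmetic, and that <cmath> and <algorithm> are included; the local accumulator names are illustrative, and 4b's un-squared tau trick is omitted for brevity):

for (size_t i = 0; i < numParticles; i++)
{
    // Suggestion 3: local accumulators so the compiler needn't worry
    // about p[i] aliasing p[j]
    Vector acc_i = p[i].acc;
    Vector jerk_i = p[i].jerk;
    const Vector pos_i = p[i].pos;
    const Vector vel_i = p[i].vel;
    const real m_i = p[i].mass;

    for (size_t j = i + 1; j < numParticles; j++)
    {
        Vector r = p[j].pos - pos_i;
        Vector v = p[j].vel - vel_i;
        real r2 = lengthsq(r);
        real v2 = lengthsq(v);

        // Suggestion 2: 1/(r2*sqrt(r2)) instead of pow(r2, -1.5)
        real r3i = Constants::G / (r2 * sqrt(r2));

        Vector da = r * r3i;
        Vector dj = (v - r * (3 * dot(r, v) / r2)) * r3i;

        acc_i  += da * p[j].mass;
        jerk_i += dj * p[j].mass;
        p[j].acc  -= da * m_i;
        p[j].jerk -= dj * m_i;

        real mij = m_i + p[j].mass;
        real tau_est_q1 = r2 / (lengthsq(da) * mij * mij);
        real tau_est_q2 = (r2 * r2) / (v2 * v2);

        // Suggestion 4a: branch-free minimum
        tau_q = std::min(tau_q, std::min(tau_est_q1, tau_est_q2));
    }

    p[i].acc = acc_i;
    p[i].jerk = jerk_i;
}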


This line real r3i = Constants::G * pow(r2, -1.5); is going to hurt. Any kind of sqrt lookup table or platform-specific square-root support would help.

If you have SIMD capabilities, breaking up your vector subtracts and squares into their own loop and computing them all at once will help a bit. Same for your mass/jerk calcs.

Something that comes to mind is: are you keeping enough precision in your calculations? Taking things to the 4th power and 4th root really thrashes your available bits through the under/overflow blender. I'd be sure that your answer is indeed your answer when complete.

Beyond that, it's a math heavy function that will require some CPU time. Assembler optimization of this isn't going to yield too much more than the compiler can already do for you.

Another thought: as this appears to be gravity related, is there any way to cull your heavy math based on a distance check? Basically, a radius/radius-squared check to fight the O(n^2) behavior of your loop. If you eliminated half your particles, it would run around 4x faster.
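For instance, a minimal sketch of that cull inside the j loop, assuming a hypothetical squared cutoff constant cutoff2 tuned for acceptable error (not from the OP's code):

// Skip pairs farther apart than the cutoff radius before the expensive math.
Vector r = p[j].pos - p[i].pos;
real r2 = lengthsq(r);
if (r2 > cutoff2)
    continue; // far-field pair: contributes little, skip it entirely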

One last thing: you could thread your inner loop across multiple processors. You'd have to make a separate version of your internals per thread to prevent data contention and locking overhead, but once each thread was complete, you could tally the acc/jerk values from each structure. I didn't see any dependencies that would prevent this, but I am no expert in this area by far :)
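A rough sketch of that idea with OpenMP (which VS2008 supports), assuming Vector default-constructs to zero; the per-thread scratch buffers and all names here are illustrative:

#include <omp.h>
#include <vector>

void interactParallel(Particle *p, size_t numParticles)
{
    int nthreads = omp_get_max_threads();
    // One acc/jerk scratch buffer per thread to avoid data contention
    std::vector< std::vector<Vector> > acc(nthreads, std::vector<Vector>(numParticles));
    std::vector< std::vector<Vector> > jerk(nthreads, std::vector<Vector>(numParticles));

    #pragma omp parallel
    {
        int t = omp_get_thread_num();
        // OpenMP 2.0 (VS2008) requires a signed loop index
        #pragma omp for schedule(dynamic)
        for (long i = 0; i < (long)numParticles; i++)
        {
            for (size_t j = i + 1; j < numParticles; j++)
            {
                // ... compute da, dj as in the original loop ...
                // acc[t][i] += da * p[j].mass;  acc[t][j] -= da * p[i].mass;
                // jerk[t][i] += dj * p[j].mass; jerk[t][j] -= dj * p[i].mass;
                // (tau_q would similarly be tracked per thread and
                //  min-reduced at the end)
            }
        }
    }

    // Tally: fold each thread's partial sums back into the particles
    for (size_t i = 0; i < numParticles; i++)
        for (int t = 0; t < nthreads; t++)
        {
            p[i].acc += acc[t][i];
            p[i].jerk += jerk[t][i];
        }
}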


  • Firstly you need to profile the code. The method for this will depend on what CPU and OS you are running.

  • You might consider whether you can use floats rather than doubles.

  • If you're using gcc then make sure you're using -O2 or possibly -O3.

  • You might also want to try a good compiler, like Intel's ICC (assuming this is running on x86 ?).

  • Again assuming this is (Intel) x86, if you have a 64-bit CPU then build a 64-bit executable if you're not already - the extra registers can make a noticeable difference (around 30%).


If this is for visual effects, and your particle position/speed only need to be approximate, then you can try replacing sqrt with the first few terms of its Taylor series. The magnitude of the next unused term represents the error margin of your approximation.
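For example, a first-order expansion around a nearby reference value a (a hypothetical helper, not from the question):

// sqrt(x) ~= sqrt(a) + (x - a) / (2 * sqrt(a)) for x near a.
// The next term, -(x - a)^2 / (8 * a * sqrt(a)), bounds the error.
inline float approxSqrt(float x, float a, float sqrtA)
{
    return sqrtA + (x - a) / (2.0f * sqrtA);
}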


Easy thing first: move all the "old" variables to a different array. You never access them in your main loop, so you're touching twice as much memory as you actually need (and thus getting twice as many cache misses). Here's a recent blog post on the subject: http://msinilo.pl/blog/?p=614. And of course, you could prefetch a few particles ahead, e.g. p[j+k], where k is some constant that will take some experimentation.
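For the prefetch, a sketch using the SSE prefetch intrinsic (PREFETCH_AHEAD is a hypothetical tuning constant; prefetch is only a hint, so reading past the end of the array is harmless):

#include <xmmintrin.h>

// Pull the particle a few iterations ahead into L1 before we need it
_mm_prefetch((const char*)&p[j + PREFETCH_AHEAD], _MM_HINT_T0);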


If you move the mass out too, you could store things like this:

struct ParticleData
{
    Vector pos, vel, acc, jerk;
};

ParticleData* currentParticles = ...
ParticleData* oldParticles = ...
real* masses = ...

then updating the old particle data from the new data becomes a single big memcpy from the current particles to the old particles.
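Assuming ParticleData stays plain-old-data, that copy might look like this sketch:

#include <cstring>

// Save the current state in one contiguous copy instead of touching
// each particle's fields individually (valid because ParticleData is POD)
std::memcpy(oldParticles, currentParticles, numParticles * sizeof(ParticleData));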


If you're willing to make the code a bit uglier, you might be able to get better SIMD optimization by storing things in "transposed" format, e.g.

struct ParticleData
{
    // data_x[0] == pos.x, data_x[1] = vel.x, data_x[2] = acc.x, data_x[3] = jerk.x
    Vector4 data_x;

    // data_y[0] == pos.y, data_y[1] = vel.y, etc.
    Vector4 data_y;

    // data_z[0] == pos.z, data_z[1] == vel.z, etc.
    Vector4 data_z;
};

where Vector4 is either one single-precision or two double-precision SIMD vectors. This format is common in ray tracing for testing multiple rays at once; it lets you do operations like dot products more efficiently (without shuffles), and it also means your memory loads can be 16-byte aligned. It definitely takes a few minutes to wrap your head around though :)
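To illustrate the "no shuffles" point in the ray-tracing style (four pairs processed at once, rather than the exact per-particle layout above), here is a sketch with single-precision SSE intrinsics:

#include <xmmintrin.h>

// Four dot products at once: ax/ay/az hold the x/y/z components of four
// vectors, bx/by/bz likewise; each lane of the result is one dot product.
// No lane shuffling is needed, unlike an xyz-per-register layout.
static inline __m128 dot4(__m128 ax, __m128 ay, __m128 az,
                          __m128 bx, __m128 by, __m128 bz)
{
    return _mm_add_ps(_mm_add_ps(_mm_mul_ps(ax, bx),
                                 _mm_mul_ps(ay, by)),
                      _mm_mul_ps(az, bz));
}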

Hope that helps, let me know if you need a reference on using the transposed representation (although I'm not sure how much help it would actually be here either).


My first advice would be to look at the molecular dynamics literature; people in that field have considered a lot of optimizations for particle systems. Have a look at GROMACS, for example.

With many particles, what's killing you is of course the double for loop. I don't know how accurately you need to compute the time evolution of your system of particles but if you don't need a very accurate calculation you could simply ignore the interactions between particles that are too far apart (you have to set a cut-off distance). A very efficient way to do this is the use of neighbour lists with buffer regions to update those lists only when needed.
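A minimal sketch of such a Verlet-style neighbour list, assuming a cutoff rc plus a skin buffer rs (names illustrative); the force loop then iterates only over neigh[i], and the list is rebuilt only when particles may have drifted more than rs/2:

#include <vector>

void buildNeighbourList(const Particle *p, size_t n, real rc, real rs,
                        std::vector< std::vector<size_t> > &neigh)
{
    const real range2 = (rc + rs) * (rc + rs); // cutoff plus buffer, squared
    neigh.assign(n, std::vector<size_t>());
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (lengthsq(p[j].pos - p[i].pos) < range2)
                neigh[i].push_back(j);
}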


All good stuff above. I've been doing similar things to a 2nd order (Leapfrog) integrator. The next two things I did after considering many of the improvements suggested above was start using SSE intrinsics to take advantage of vectorization and parallelize the code using a novel algorithm which avoids race conditions and takes advantage of cache locality.

SSE example:

http://bitbucket.org/ademiller/nbody/src/tip/NBody.DomainModel.Native/LeapfrogNativeIntegratorImpl.cpp

Novel cache algorithm, explanation and example code:

http://software.intel.com/en-us/articles/a-cute-technique-for-avoiding-certain-race-conditions/

http://bitbucket.org/ademiller/nbody/src/tip/NBody.DomainModel.Native.Ppl/LeapfrogNativeParallelRecursiveIntegratorImpl.cpp

You might also find the following deck I gave at Seattle Code Camp interesting:

http://www.ademiller.com/blogs/tech/2010/04/seattle-code-camp/

Your fourth-order integrator is more complex and would be harder to parallelize, with limited gains on a two-core system, but I would definitely suggest checking out SSE; I got some reasonable performance improvements here.


Apart from straightforward add/subtract/divide/multiply, pow() is the only heavyweight function I see in the loop body. It's probably pretty slow. Can you precompute it or get rid of it, or replace it with something simpler?

What's real? Can it be a float?

Apart from that you'll have to turn to MMX/SSE/assembly optimisations.


Would you benefit from the famous "fast inverse square root" algorithm?

float InvSqrt(float x)
{
    union {
        float f;
        int i;
    } tmp;
    tmp.f = x;
    tmp.i = 0x5f3759df - (tmp.i >> 1);
    float y = tmp.f;
    return y * (1.5f - 0.5f * x * y * y);
}

It returns a reasonably accurate approximation of 1/sqrt(x) (the first iteration of Newton's method with a clever initial guess). It is widely used in computer graphics and game development.
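In this loop it would slot in where pow() is now; cubing the result gives an approximate 1/|r|^3, at the cost of single precision (a sketch):

// ir ~= 1/sqrt(r2), so ir*ir*ir ~= 1/|r|^3. A second Newton step
// (repeating the last line of InvSqrt) tightens the accuracy if needed.
float ir = InvSqrt((float)r2);
real r3i = Constants::G * (real)(ir * ir * ir);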


Consider also pulling your multiplication by Constants::G out of the loop. If you can change the semantic meaning of the stored vectors so that they effectively store the actual value divided by G, you can do the gravitational-constant multiplication as needed (see the sketch below).

Anything that you can do to trim the size of the Particle structure will also help you to improve cache locality. You don't seem to be using the old* members here. If they can be removed that will potentially make a significant difference.

Consider splitting your particle struct into a pair of structs. Your first loop through the data to reset all of the acc and jerk values could then be an efficient memset. You would essentially have two arrays (or vectors), where particle 'n' is stored at index 'n' of each array.
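A sketch of hoisting Constants::G as suggested above; note that tau_est_q1 depends on lengthsq(da), so the collision metric would need G^2 folded back in (or tau_q rescaled at the end) to stay consistent:

// Inside the pair loop: accumulate unscaled sums (no G here)
real r3i = 1 / (r2 * sqrt(r2));
// ... accumulate acc and jerk exactly as before ...

// After both loops: one multiply by G per particle instead of per pair
for (size_t i = 0; i < numParticles; i++)
{
    p[i].acc *= Constants::G;
    p[i].jerk *= Constants::G;
}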


Yes. Try looking at the assembly output. It may yield clues as to where the compiler is doing it wrong.

Now then: always apply algorithmic optimizations first, and only when no faster algorithm is available should you resort to piecemeal optimization in assembly. And then, do inner loops first.

You may want to profile to see if this is really the bottleneck first.


The first thing I look for is branching; branches tend to be performance killers.

You can use loop unrolling.

Also, remember that multiple loops over smaller parts of the problem:

  for (size_t i = 0; i < numParticles; i++)
    {
        for (size_t j = i+1; j < numParticles; j++)
        {

are about the same as having one loop doing everything, and you can get speedups through loop unrolling and better cache hit rates.

You could thread this to make better use of multiple cores.

You have some expensive calculations that you might be able to reduce, especially if they end up computing the same thing repeatedly; caching such results can help.

But you really need to profile to know where it's costing you the most.


You should re-use the reals and vectors that you always use. The cost of constructing a Vector or real might be trivial, but not if numParticles is very large, especially with your seemingly O(n^2/2) loop.

Vector r;
Vector v;
real r2;
real v2;
Vector da;
Vector dj;
real r3i;
real mij;
real tau_est_q1;
real tau_est_q2;
for (size_t i = 0; i < numParticles; i++)
    {
        for (size_t j = i+1; j < numParticles; j++)
        {
            r = p[j].pos - p[i].pos;
            v = p[j].vel - p[i].vel;
            r2 = lengthsq(r);
            v2 = lengthsq(v);

            // Calculate inverse of |r|^3
            r3i = Constants::G * pow(r2, -1.5);

            // da = r / |r|^3
            // dj = v / |r|^3 - 3 * (r . v) * r / |r|^5
            da = r * r3i;
            dj = (v - r * (3 * dot(r, v) / r2)) * r3i;

            // Calculate new acceleration and jerk
            p[i].acc += da * p[j].mass;
            p[i].jerk += dj * p[j].mass;
            p[j].acc -= da * p[i].mass;
            p[j].jerk -= dj * p[i].mass;

            // Collision estimation
            // Metric 1) tau = |r|^2 / |a(j) - a(i)|
            // Metric 2) tau = |r|^4 / |v|^4
            mij = p[i].mass + p[j].mass;
            tau_est_q1 = r2 / (lengthsq(da) * mij * mij);
            tau_est_q2 = (r2*r2) / (v2*v2);

            if (tau_est_q1 < tau_q)
                tau_q = tau_est_q1;
            if (tau_est_q2 < tau_q)
                tau_q = tau_est_q2;
        }
    }


You can replace any occurrence of:

a = b/c
d = e/f

with

icf = 1/(c*f)
a = b*f*icf
d = e*c*icf

if you know that icf isn't going to cause anything to go out of range and if your hardware can perform 3 multiplications faster than a division. It's probably not worth batching more divisions together unless you have really old hardware with really slow division.

You'll get away with fewer time steps if you use other integration schemes (e.g. Runge-Kutta), but I suspect you already know that.
