Multiply Large Complex Number Vector by Scalar efficiently C++

I'm currently trying to find the most efficient way to do an in-place multiplication of an array of complex numbers (memory-aligned the same way std::complex would be, but currently using our own ADT) by an array of scalar values that is the same size as the complex number array.

The algorithm is already parallelized, i.e. the calling object splits the work up into threads. This calculation is done on arrays with hundreds of millions of elements - so it can take some time to complete. CUDA is not a solution for this product, although I wish it was. I do have access to Boost and thus have some potential to use BLAS/uBLAS.

I'm thinking, however, that SIMD might yield much better results, but I'm not familiar enough with how to do this with complex numbers. The code I have now is as follows (remember this is chunked up into threads which correspond to the number of cores on the target machine). The target machine is also unknown. So, a generic approach is probably best.

void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
    for (register int idx = start; idx < end; ++idx)
    {
        values[idx].real *= scalar[idx];
        values[idx].imag *= scalar[idx];
    }
}

fcomplex is defined as follows:

struct fcomplex
{
    float real;
    float imag;
};

I've tried manually unrolling the loop, as my final loop count will always be a power of 2, but the compiler is already doing that for me (I've unrolled as far as 32). I've tried a const float reference to the scalar - thinking I'd save one access - and that proved equal to what the compiler was already doing. I've tried STL and transform, which gave close results, but still worse. I've also tried casting to std::complex and letting it use the overloaded operator for scalar * complex for the multiplication, but this ultimately produced the same results.

So, anyone with any ideas? Much appreciation for your time in considering this! The target platform is Windows, and I'm using Visual Studio 2008. The product cannot contain GPL code, either. Thanks so much.


You can do this fairly easily with SSE, e.g.

#include <xmmintrin.h>  // SSE intrinsics

void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
    for (int idx = start; idx < end; idx += 2)  // two complex values per iteration
    {
        __m128 vc = _mm_load_ps((float *)&values[idx]);
        // duplicate each scalar so it multiplies both the real and imaginary parts
        __m128 vk = _mm_set_ps(scalar[idx + 1], scalar[idx + 1], scalar[idx], scalar[idx]);
        vc = _mm_mul_ps(vc, vk);
        _mm_store_ps((float *)&values[idx], vc);
    }
}

Note that values and scalar need to be 16-byte aligned.
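If you can't guarantee that 16-byte alignment, the unaligned load/store intrinsics can be substituted at some performance cost. A sketch (same logic as above, with a hypothetical `_u` suffix on the name to distinguish it):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

struct fcomplex
{
    float real;
    float imag;
};

// Variant using _mm_loadu_ps/_mm_storeu_ps, for buffers whose
// 16-byte alignment is not guaranteed.
void cmult_scalar_inplace_u(fcomplex *values, const int start, const int end, const float *scalar)
{
    for (int idx = start; idx < end; idx += 2)  // two complex values per iteration
    {
        __m128 vc = _mm_loadu_ps((float *)&values[idx]);
        // duplicate each scalar so it multiplies both the real and imaginary parts
        __m128 vk = _mm_set_ps(scalar[idx + 1], scalar[idx + 1], scalar[idx], scalar[idx]);
        vc = _mm_mul_ps(vc, vk);
        _mm_storeu_ps((float *)&values[idx], vc);
    }
}
```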

Or you could just use the Intel ICC compiler and let it do the hard work for you.


UPDATE

Here is an improved version which unrolls the loop by a factor of 2 and uses a single load instruction to fetch four scalar values, which are then unpacked into two vectors:

void cmult_scalar_inplace(fcomplex *values, const int start, const int end, const float *scalar)
{
    for (int idx = start; idx < end; idx += 4)  // four complex values per iteration
    {
        __m128 vc0 = _mm_load_ps((float *)&values[idx]);
        __m128 vc1 = _mm_load_ps((float *)&values[idx + 2]);
        __m128 vk = _mm_load_ps(&scalar[idx]);      // { s0, s1, s2, s3 }
        __m128 vk0 = _mm_shuffle_ps(vk, vk, 0x50);  // { s0, s0, s1, s1 }
        __m128 vk1 = _mm_shuffle_ps(vk, vk, 0xfa);  // { s2, s2, s3, s3 }
        vc0 = _mm_mul_ps(vc0, vk0);
        vc1 = _mm_mul_ps(vc1, vk1);
        _mm_store_ps((float *)&values[idx], vc0);
        _mm_store_ps((float *)&values[idx + 2], vc1);
    }
}


Your best bet will be to use an optimised BLAS which will take advantage of whatever is available on your target platform.


One problem I see is that inside the function it's hard for the compiler to know that the scalar pointer is not in fact pointing into the middle of the complex array (in theory, scalar could point at the real or imaginary part of one of the complex values). This actually forces the order of evaluation.

Another problem I see is that the computation here is so simple that other factors will dominate the raw speed. Therefore, if you really care about performance, the only solution in my opinion is to implement several variations and test them at runtime on the user's machine to discover which is fastest.

What I'd consider is using different unrolling sizes, and also playing with the alignment of scalar and values (the memory access pattern can have a big influence on caching effects).
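The runtime-selection idea could be sketched along these lines (the variants, the function names, and the crude clock()-based timing are all just placeholders; a real harness would time many repetitions on representative data):

```cpp
#include <ctime>

struct fcomplex
{
    float real;
    float imag;
};

typedef void (*cmult_fn)(fcomplex *, int, int, const float *);

// Two hypothetical candidate implementations to choose between.
void cmult_plain(fcomplex *v, int start, int end, const float *s)
{
    for (int i = start; i < end; ++i)
    {
        v[i].real *= s[i];
        v[i].imag *= s[i];
    }
}

void cmult_unroll4(fcomplex *v, int start, int end, const float *s)
{
    int i = start;
    for (; i + 4 <= end; i += 4)  // main unrolled body
    {
        v[i].real     *= s[i];     v[i].imag     *= s[i];
        v[i + 1].real *= s[i + 1]; v[i + 1].imag *= s[i + 1];
        v[i + 2].real *= s[i + 2]; v[i + 2].imag *= s[i + 2];
        v[i + 3].real *= s[i + 3]; v[i + 3].imag *= s[i + 3];
    }
    for (; i < end; ++i)  // remainder
    {
        v[i].real *= s[i];
        v[i].imag *= s[i];
    }
}

// Time each candidate once on a representative buffer and keep the winner.
cmult_fn pick_fastest(fcomplex *v, int n, const float *s)
{
    cmult_fn candidates[] = { cmult_plain, cmult_unroll4 };
    cmult_fn best = candidates[0];
    double best_t = 1e30;
    for (int c = 0; c < 2; ++c)
    {
        clock_t t0 = clock();
        candidates[c](v, 0, n, s);
        double t = double(clock() - t0);
        if (t < best_t) { best_t = t; best = candidates[c]; }
    }
    return best;
}
```

The chosen function pointer can then be installed once at startup and called from the threaded workers.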

For the problem of unwanted serialization, one option is to look at the code generated for something like

float r0 = values[i].real, i0 = values[i].imag, s0 = scalar[i];
float r1 = values[i+1].real, i1 = values[i+1].imag, s1 = scalar[i+1];
float r2 = values[i+2].real, i2 = values[i+2].imag, s2 = scalar[i+2];
values[i].real = r0*s0; values[i].imag = i0*s0;
values[i+1].real = r1*s1; values[i+1].imag = i1*s1;
values[i+2].real = r2*s2; values[i+2].imag = i2*s2;

because here the optimizer in theory has a little more freedom.
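Another way to give the optimizer that freedom directly is to promise it that the two pointers don't alias. A sketch using the compiler-specific restrict qualifier (MSVC spells it __restrict; GCC/Clang accept __restrict__ in C++ - the function name here is just a placeholder):

```cpp
struct fcomplex
{
    float real;
    float imag;
};

// Promising the compiler that values and scalar never overlap removes
// the forced load/store ordering described above.
#if defined(_MSC_VER)
#define NOALIAS __restrict
#else
#define NOALIAS __restrict__
#endif

void cmult_scalar_inplace_r(fcomplex * NOALIAS values, const int start,
                            const int end, const float * NOALIAS scalar)
{
    for (int idx = start; idx < end; ++idx)
    {
        values[idx].real *= scalar[idx];
        values[idx].imag *= scalar[idx];
    }
}
```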


Do you have access to Intel's Integrated Performance Primitives (IPP)? They have a number of functions that handle cases like this with pretty decent performance. You might have some success with your particular problem, but I would not be surprised if your compiler already does a decent job of optimizing the code.
