Multiplying vector by constant using SSE

I have some code that operates on 4D vectors and I'm currently trying to convert it to use SSE. I'm using both clang and gcc on 64-bit Linux.

Operating only on vectors is all fine - grasped that. But now comes a part where I have to multiply an entire vector by a single constant - something like this:

float y[4];
float a1 =   25.0/216.0;  

for(j=0; j<4; j++){  
    y[j] = a1 * x[j];  
} 

to something like this:

float4 y;
float a1 =   25.0/216.0;  

y = a1 * x;  

where:

typedef float v4sf __attribute__ ((vector_size(4*sizeof(float))));

typedef union float4{
    v4sf v;
    struct { float x, y, z, w; };  /* anonymous struct so the components don't all alias offset 0 */
} float4;

This of course will not work because I'm trying to multiply incompatible data types.

Now, I could do something like:

float4 a1 = (v4sf){25.0/216.0, 25.0/216.0, 25.0/216.0, 25.0/216.0};

but that just makes me feel silly, even if I write a macro to do it. Also, I'm pretty certain it will not result in very efficient code.

Googling this brought no clear answers (see Load constant floats into SSE registers).

So what is the best way to multiply an entire vector by the same constant?


Just use intrinsics and let the compiler take care of it, e.g.

__m128 vb = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f); // vb = { 1.0, 2.0, 3.0, 4.0 } (_mm_set_ps takes its arguments from the highest lane down to the lowest)
__m128 va = _mm_set1_ps(25.0f / 216.0f); // va = { 25.0f / 216.0f, 25.0f / 216.0f, 25.0f / 216.0f, 25.0f / 216.0f }
__m128 vc = _mm_mul_ps(va, vb); // vc = va * vb

If you look at the generated code it should be quite efficient - the 25.0f / 216.0f value will be calculated at compile time and _mm_set1_ps usually generates reasonably efficient code for splatting a vector.

Note also that you normally initialise a constant vector such as va only once, prior to entering the loop where you will be doing most of the actual work, so it tends not to be performance-critical.
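
For example, here is a minimal sketch of that pattern, scaling an array of floats in place with the constant hoisted out of the loop (the function name scale_array, the data/n arguments and the assumption that n is a multiple of 4 are mine, not from the question):

#include <stddef.h>
#include <xmmintrin.h>

void scale_array(float *data, size_t n)              /* n assumed to be a multiple of 4 */
{
    const __m128 va = _mm_set1_ps(25.0f / 216.0f);   /* splat the constant once, outside the loop */
    for (size_t i = 0; i < n; i += 4) {
        __m128 vb = _mm_loadu_ps(&data[i]);          /* load 4 floats */
        __m128 vc = _mm_mul_ps(va, vb);              /* multiply every element by the constant */
        _mm_storeu_ps(&data[i], vc);                 /* store the result back */
    }
}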


There is no reason one should have to use intrinsics for this. The OP just wants to do a broadcast, which is as basic a SIMD operation as SIMD addition. Any decent SIMD library or extension has to support broadcasts: Agner Fog's vector class certainly does, OpenCL does, and the GCC documentation clearly shows that its vector extensions do:

a = b + 1;    /* a = b + {1,1,1,1}; */
a = 2 * b;    /* a = {2,2,2,2} * b; */

The following code compiles just fine

#include <stdio.h>
int main() {     
    typedef float float4 __attribute__ ((vector_size (16)));

    float4 x = {1,2,3,4};
    float4 y = (25.0f/216.0f)*x;
    printf("%f %f %f %f\n", y[0], y[1], y[2], y[3]);
    //0.115741 0.231481 0.347222 0.462963
}

You can see the results at http://coliru.stacked-crooked.com/a/de79cca2fb5d4b11

Compare that code to the intrinsic code and it's clear which one is more readable. Not only is it more readable, it's also easier to port to e.g. ARM NEON. It also looks very similar to OpenCL C code.
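
For comparison, porting the intrinsic version to ARM would mean rewriting it, roughly along these lines (a sketch using NEON intrinsics from arm_neon.h; the function name scale_neon is mine), whereas the vector-extension code above compiles unchanged on both targets:

#include <arm_neon.h>

float32x4_t scale_neon(float32x4_t x)
{
    /* vmulq_n_f32 multiplies every lane by the same scalar */
    return vmulq_n_f32(x, 25.0f / 216.0f);
}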


This might not be the best way, but it was the approach I took when I was dabbling in SSE.

float4 scale(const float s, const float4 a)
{
  v4sf sv = { s, s, s, s };                          /* broadcast the scalar into all four lanes */
  float4 r = { .v = __builtin_ia32_mulps(sv, a.v) }; /* element-wise multiply */
  return r;
}

float4 y;                     /* assumed to be initialised elsewhere */
float a1 = 25.0f / 216.0f;

y = scale(a1, y);
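
With a reasonably recent GCC or clang the x86-specific builtin isn't strictly needed either; here is a sketch of the same helper written with the vector extension's * operator instead (same float4/v4sf types as above, the name scale2 is mine), which also keeps it portable:

float4 scale2(const float s, const float4 a)
{
    v4sf sv = { s, s, s, s };        /* broadcast the scalar into all four lanes */
    float4 r = { .v = sv * a.v };    /* element-wise multiply via the vector extension */
    return r;
}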