
Optimised assembly equality routine

I'm trying to write a (very) short assembly routine which tests two dwords for equality and returns a boolean value (1 = true, 0 = false). So far I've come up with three methods, one of which uses LAHF, which apparently isn't supported on some x86_64 processors, so that one is unfortunately out of the question.

Version one is:

    mov eax, [esp + 8]
    cmp eax, [esp + 4]
    mov eax, 1          ; assume the values are equal
    jz jpt              ; they are: return 1
    mov eax, 0          ; they are not: return 0
    jpt:    ret

and version two is:

    mov eax, [esp + 8]
    cmp eax, [esp + 4]
    pushf               ; push the flags register
    pop eax             ; pop it back into eax
    and eax, 0x0040     ; extract the zero flag
    shr eax, 6          ; eax is now true (1) if arg1 == arg2
    ret

Version one has an extra branch instruction, but version two has an extra push and an extra pop. Which one would you expect to be faster, and why? Does that depend on whether the branch is taken and on how well it is predicted?


Both versions are bad. A branch on random data takes ages to execute because it can't be predicted, and LAHF is a no-no because of the partial register write. But writing a test for equality as a separate assembler function is pointless anyway, since the call overhead will cost a multiple of the equivalent instructions inlined by the compiler, so here I go:

    mov eax, [esp + 8]
    cmp eax, [esp + 4]
    setz al             ; set al to 1 if equal
    movzx eax, al       ; zero-extend al to a dword
    ret
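
For what it's worth, a plain C comparison typically compiles to essentially this same cmp/setcc sequence, inlined at the call site with no call/ret overhead at all. A minimal sketch (the function name is just illustrative):

    #include <stdint.h>

    /* Compilers will typically inline this to a single cmp + sete pair,
       so there is no function-call overhead at all. */
    static inline int32_t dwords_equal(uint32_t a, uint32_t b)
    {
        return a == b;   /* 1 if equal, 0 otherwise */
    }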


I have run into these bottlenecks before in applications I needed to optimize, and they are a sure sign that you've hit a wall and can't really optimize any further.

The best course of action would be to choose a different algorithm or data layout, one that fits the platform and your access patterns better than the one you currently have. This is probably the single most important thing you can do.

However, due to deadlines or other constraints, that sometimes isn't possible either, so you'll need to get creative, which probably means testing multiple elements at a time using SIMD operations (e.g. use the _mm_cmpeq_epi32 intrinsic to compare 4 elements at once). If you're going to branch on the result, you might compare 16 elements, bitwise OR the masks together and branch on that, then select the correct data inside the branch; a rough sketch follows below.
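
Something along these lines, assuming SSE2, 16-byte-aligned inputs and a count that is a multiple of 16 dwords (the function name is made up for the example):

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdint.h>

    /* Returns nonzero if any of 16 dword pairs is equal.  Assumes a and b
       are 16-byte aligned; only one branch is taken per 16 elements. */
    int any_equal_16(const int32_t *a, const int32_t *b)
    {
        __m128i m0 = _mm_cmpeq_epi32(_mm_load_si128((const __m128i *)(a + 0)),
                                     _mm_load_si128((const __m128i *)(b + 0)));
        __m128i m1 = _mm_cmpeq_epi32(_mm_load_si128((const __m128i *)(a + 4)),
                                     _mm_load_si128((const __m128i *)(b + 4)));
        __m128i m2 = _mm_cmpeq_epi32(_mm_load_si128((const __m128i *)(a + 8)),
                                     _mm_load_si128((const __m128i *)(b + 8)));
        __m128i m3 = _mm_cmpeq_epi32(_mm_load_si128((const __m128i *)(a + 12)),
                                     _mm_load_si128((const __m128i *)(b + 12)));

        /* OR the four element masks together and branch once on the result. */
        __m128i any = _mm_or_si128(_mm_or_si128(m0, m1), _mm_or_si128(m2, m3));
        if (_mm_movemask_epi8(any) != 0)
            return 1;   /* at least one pair matched; locate it inside the branch */
        return 0;
    }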

That is primarily of benefit on platforms where branches are very costly, and on IA-32/64 that isn't the case (branches are comparatively cheap).

Also be aware that, due to the out-of-order execution (OoOE) Intel platforms tend to use, the profiler you're using may well report a stall at a more-or-less random location, simply because the processor happens to be waiting for data to be read from the cache or RAM.

If you happen to be in that situation, make sure you optimize your algorithm to be more cache-friendly (e.g. figure out how many items fit in a cache line, reduce the size of your data structures, etc.), as in the small example below.
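
As a trivial illustration of the cache-line point, assuming the common 64-byte line size (the struct layouts are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* A 32-byte element: only 2 fit in a 64-byte cache line. */
    struct fat  { int64_t key; int64_t a, b, c; };

    /* A 16-byte element: 4 fit in the same line, so a linear scan
       touches half as many cache lines. */
    struct slim { int32_t key; int32_t a, b, c; };

    int main(void)
    {
        printf("fat:  %zu per 64-byte line\n", 64 / sizeof(struct fat));
        printf("slim: %zu per 64-byte line\n", 64 / sizeof(struct slim));
        return 0;
    }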
