Function call cost
The application I am dealing with right now uses a brute-force numerical algorithm that calls many tiny functions billions of times. I was wondering how much the performance can be improved by eliminating function calls through inlining and static polymorphism.
What is the cost of the following kinds of calls, relative to a plain (non-inline, non-intrinsic) function call:
1) function call via function pointer
2) virtual function call
I know that it is hard to measure, but a very rough estimate would do.
Thank you!
To make an ordinary member function call, the compiler needs to:
Fetch the address of the function -> Call the function
To make a virtual function call, the compiler needs to:
Fetch the vptr -> Fetch the address of the function from the vtable -> Call the function
Note: the virtual dispatch mechanism is a compiler implementation detail, so it may differ between compilers; there may not even be a vptr or a vtable for that matter. Having said that, compilers usually do implement it with a vptr and a vtable, and then the above holds.
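To make the extra fetch concrete, here is a rough hand-rolled equivalent of what a typical vptr/vtable implementation does. This is illustrative only; the names VTable, vptr, widgetFoo and the layout are my own assumptions, not what any particular compiler actually emits:

#include <cstdio>

struct Widget;                        // forward declaration

// Hand-rolled "vtable": just a struct of function pointers.
struct VTable {
    void (*foo)(Widget *self);
};

struct Widget {
    const VTable *vptr;               // roughly what the compiler's hidden vptr is
};

void widgetFoo(Widget *self) { std::puts("Widget::foo"); }

const VTable widgetVTable = { &widgetFoo };

void callDirect(Widget *w)   { widgetFoo(w); }       // one step: call
void callIndirect(Widget *w) { w->vptr->foo(w); }    // fetch vptr, fetch slot, then call

int main() {
    Widget w{ &widgetVTable };
    callDirect(&w);
    callIndirect(&w);
}

callIndirect mirrors the "fetch vptr -> fetch function address -> call" sequence described above.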
So there is certainly some overhead (one additional fetch). To know precisely how much it impacts your code, you will have to profile it; there is no simpler way.
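If you want a rough number for your own machine, a minimal microbenchmark along these lines is one way to start. This is only a sketch: the iteration count, the trivial work() body and the volatile sink are my own assumptions, and an optimizing compiler may still devirtualize or inline the calls, so inspect the generated code as well:

#include <chrono>
#include <cstdio>

struct Base {
    virtual ~Base() = default;
    virtual int work(int x) { return x + 1; }
};
struct Derived : Base {
    int work(int x) override { return x + 1; }
};

int plainWork(int x) { return x + 1; }

int main() {
    const long iterations = 100000000L;    // adjust for your machine
    Derived d;
    Base *b = &d;                          // virtual dispatch
    int (*fp)(int) = &plainWork;           // call via function pointer
    volatile int sink = 0;                 // discourages optimizing the calls away

    auto time = [&](auto &&call) {
        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; ++i) sink = call(sink);
        std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
        return elapsed.count();
    };

    std::printf("direct:  %f s\n", time([&](int x) { return plainWork(x); }));
    std::printf("pointer: %f s\n", time([&](int x) { return fp(x); }));
    std::printf("virtual: %f s\n", time([&](int x) { return b->work(x); }));
}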
It depends on your target architecture and your compiler, but one thing you can do is write a small test and check the assembly generated.
I wrote a small test to do exactly that:
// test.h
#ifndef FOO_H
#define FOO_H

void bar();

class A {
public:
    virtual ~A();
    virtual void foo();
};

#endif
// main.cpp
#include "test.h"

void doFunctionPointerCall(void (*func)()) {
    func();
}

void doVirtualCall(A *a) {
    a->foo();
}

int main() {
    doFunctionPointerCall(bar);

    A a;
    doVirtualCall(&a);

    return 0;
}
Note that you don't even need to write test.cpp, since you just need to check the assembly for main.cpp.
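(If you do want to link and run the program rather than just inspect the assembly, a matching test.cpp could be as small as the following; the empty bodies are placeholders of my own:)

// test.cpp (only needed for linking, not for reading the assembly)
#include "test.h"
void bar() {}
A::~A() {}
void A::foo() {}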
To see the compiler assembly output, with gcc use the flag -S:
gcc main.cpp -S -O3
It will create a file main.s with the assembly output. Now we can see what gcc generated for the calls.
doFunctionPointerCall:
.globl _Z21doFunctionPointerCallPFvvE
.type _Z21doFunctionPointerCallPFvvE, @function
_Z21doFunctionPointerCallPFvvE:
.LFB0:
.cfi_startproc
jmp *%rdi
.cfi_endproc
.LFE0:
.size _Z21doFunctionPointerCallPFvvE, .-_Z21doFunctionPointerCallPFvvE
doVirtualCall:
.globl _Z13doVirtualCallP1A
.type _Z13doVirtualCallP1A, @function
_Z13doVirtualCallP1A:
.LFB1:
.cfi_startproc
movq (%rdi), %rax
movq 16(%rax), %rax
jmp *%rax
.cfi_endproc
.LFE1:
.size _Z13doVirtualCallP1A, .-_Z13doVirtualCallP1A
Note that I'm using x86_64 here; the assembly will differ for other architectures.
Looking at the assembly, the virtual call uses two extra movq instructions: one to load the vptr and one to load the function's address from the vtable (the 16(%rax) is foo's slot; with gcc the virtual destructor occupies the first two slots, so foo lands at offset 16). Note that in real code either version might also need to save some registers, but the virtual call would still need two extra movq compared to the function pointer.
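For comparison, you could add a plain direct call to the same file and look at its assembly as well; with gcc at -O3 one would expect it to compile down to a single direct jmp to bar (or to be inlined entirely if the definition were visible). The extra function below is my own addition, not part of the original test:

// add to main.cpp
void doDirectCall() {
    bar();   // direct call: no pointer fetch, no vtable fetch
}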
Just use a profiler like AMD's CodeAnalyst (using IBS and TBS); alternatively, you can go the more 'hardcore' route and give Agner Fog's optimization manuals a read (they will help both with precise instruction timings and with optimizing your code): http://www.agner.org/optimize/
Function calls are a significant overhead if the functions are small. CALL and RETURN, while optimized on modern CPUs, will still be noticeable when very many calls are made. The small functions may also be spread across memory, so the CALL/RETURN traffic can additionally cause cache misses and excessive paging.
// code
int Add(int a, int b) { return a + b; }

int main() {
    Add(1, Add(2, 3));
    ...
}
; NON-inline x86 ASM
Add:
    MOV  eax, [esp+4]   ; 1st argument a
    ADD  eax, [esp+8]   ; 2nd argument b
    RET  8              ; return and fix stack (2 args * 4 bytes each)
                        ; eax holds the returned value
Main:
    PUSH 3
    PUSH 2
    CALL Add
    PUSH eax
    PUSH 1
    CALL Add
    ...
; INLINE x86 ASM
Main:
    MOV  eax, 3
    ADD  eax, 2
    ADD  eax, 1
    ...
If optimization is your goal and you're calling many small functions, it's always best to inline them. Sorry, I don't care for the ugly ASM syntax used by C/C++ compilers.
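Since the question mentions static polymorphism: one way to keep a polymorphic-style interface while still letting the compiler inline the tiny functions is CRTP. A rough sketch (all names below are made up for illustration, not taken from the question's code):

#include <cstdio>

// CRTP base: the concrete type is known at compile time,
// so apply() can be resolved and inlined without any vtable.
template <typename Derived>
struct Kernel {
    int apply(int x) { return static_cast<Derived *>(this)->applyImpl(x); }
};

struct AddOne : Kernel<AddOne> {
    int applyImpl(int x) { return x + 1; }
};

template <typename K>
int runManyTimes(Kernel<K> &k, int x, long n) {
    for (long i = 0; i < n; ++i) x = k.apply(x);   // typically inlined to plain arithmetic
    return x;
}

int main() {
    AddOne op;
    std::printf("%d\n", runManyTimes(op, 0, 10));
}

Because the concrete type is a template parameter, apply() resolves at compile time and there is no vptr or vtable fetch left to pay for.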