
Is using --start-group and --end-group when linking faster than creating a static library?

If you build static libraries in your build scripts and want to use them when linking the final executable, the order in which you mention the .a files matters:

g++ main.o hw.a gui.a -o executable

If gui.a uses something defined in hw.a, the link will fail, because at the time hw.a is processed, the linker doesn't yet know that the definition is needed later, and doesn't include it in the being-generated executable. Manually fiddling around with the linker line is not practical, so a solution is to use --start-group and --end-group, which makes the linker rescan the grouped libraries repeatedly until no new undefined symbols can be resolved.

g++ main.o -Wl,--start-group hw.a gui.a -Wl,--end-group -o executable

However, the GNU ld manual says:

Using this option has a significant performance cost. It is best to use it only when there are unavoidable circular references between two or more archives.

So I thought that it may be better to take all the .a files and put them together into one .a file with a symbol index (the -s option of GNU ar), which lets the linker pull members in whatever order they are needed. Then one gives only that one .a file to g++.

But I wonder whether that's faster or slower than using the group commands, and whether there are any problems with that approach. Is there a better way to solve these interdependency problems?


EDIT: I've written a program that takes a list of .a files and generates a merged .a file. It works with the GNU common ar format. Packing together all static libs of LLVM works like this:

$ ./arcat -o combined.a ~/usr/llvm/lib/libLLVM*.a

I compared the speed against unpacking all the .a files manually and then putting them into a new .a file using ar, recomputing the index. Using my arcat tool, I get consistent runtimes around 500ms. Using the manual way, the time varies greatly and is around 2s. So I think it's worth it.

Code is here. I put it into the public domain :)


You can determine the order using the lorder and tsort utilities, for example:

libs='/usr/lib/libncurses.a /usr/lib/libedit.a'
libs_ordered=$(lorder $libs | tsort)

resulting in /usr/lib/libedit.a /usr/lib/libncurses.a because libedit depends on libncurses.

This is probably only a benefit over --start-group if you do not rerun lorder and tsort for each link command. Also, unlike --start-group, it cannot handle mutual/cyclic dependencies.
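To see what tsort does with lorder-style output, you can feed it a hand-written pair list; the archive names here just mirror the question's example:

```shell
# lorder emits "A B" pairs meaning "A references a symbol defined in B",
# plus "A A" self-pairs so isolated archives still appear in the output;
# tsort then prints dependents before their dependencies.
printf 'gui.a hw.a\ngui.a gui.a\nhw.a hw.a\n' | tsort
# gui.a comes out before hw.a, i.e. a valid order for the link line
```

In a Makefile you could capture this once, along the lines of `LIBS_ORDERED := $(shell lorder $(LIBS) | tsort)`, so the sort is not rerun for every link (a sketch; lorder is not installed on every Linux system).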


There is a third option: just build a single library to begin with. I had a similar problem and eventually decided to go with that.

In my experience, grouping is slower than just unifying the .a files: you can extract all members from the archives and then create a new .a file from the smaller files.

However, you have to be careful about the case where both archives happen to contain the same definition (you can explicitly check for this with nm to see which symbols each library defines).
