
Linking phase in distcc

Is there any particular reason that the linking phase, when building a project with distcc, is done locally rather than sent off to other computers the way compiling is? Reading the distcc white paper didn't give a clear answer, but I'm guessing that the time spent linking object files is not very significant compared to compilation. Any thoughts?


The way that distcc works is by preprocessing the input files locally until a single, self-contained translation unit is produced. That file is then sent over the network and compiled. At that stage the remote distcc server only needs a compiler; it does not even need the header files for the project. The output of the compilation is then moved back to the client and stored locally as an object file. Note that this means that not only linking, but also preprocessing, is performed locally. That division of work is common to other build tools, like ccache (preprocessing is always performed locally; it then tries to match the preprocessed input against previously cached results and, if it succeeds, returns the cached object file without recompiling).
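Roughly, the per-file division of work is equivalent to the following manual steps (a simplified sketch only; distcc uses its own transport rather than scp/ssh, and the file name parser.c and the host volunteer1 are made up for illustration):

    # Preprocess locally: this expands #include and macros, so the
    # remote side needs no project headers at all.
    gcc -E parser.c -o parser.i

    # Ship the self-contained translation unit to a remote host and
    # compile it there; only a compiler is required remotely.
    scp parser.i volunteer1:/tmp/parser.i
    ssh volunteer1 'gcc -c /tmp/parser.i -o /tmp/parser.o'

    # Bring the object file back; it is stored locally like any other
    # .o, ready for the (local) link step.
    scp volunteer1:/tmp/parser.o parser.o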

If you were to implement a distributed linker, you would have to either ensure that all hosts in the network have exactly the same configuration, or else send all required inputs for the operation in one batch. That would mean distributed compilation produces a set of object files, and all of those object files would have to be pushed over the network for a remote system to link and return the linked executable. Note that the link might also require system libraries that are referenced and present in the linker search path but not named on the linker command line, so a 'pre-link' step would have to determine which libraries actually need to be sent. Even if possible, this would require the local system to guess or calculate all real dependencies and send them, at a great cost in network traffic, and it might actually slow the process down: the cost of sending could exceed the cost of linking, assuming that working out the dependency set is not itself almost as expensive as the link.
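To get a sense of how large that input set is, you can ask the GNU linker to list every file it actually reads while resolving a link; each of those files is something a distributed linker would have to ship. (A hedged example: the object files and -l options below are placeholders for whatever your project links against.)

    # -Wl,--trace passes --trace (-t) to GNU ld, which prints every
    # object file, archive member and shared library it opens during
    # the link.
    gcc -Wl,--trace main.o util.o -lssl -lcrypto -o app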

The project I am currently working on has a single statically linked executable of over 100M. The static libraries vary in size, but if a distributed system decided that the final executable was to be linked remotely, it would probably have to move three to five times the size of the final executable over the network (templates, inlines... all of these appear in every translation unit that includes them, so multiple copies would be flying around the network).


Linking, almost by definition, requires that you have all of the object files in one place. Since distcc drops the object files on the computer that invoked distcc in the first place, the local machine is the best place to perform the link as the objects are already present.

In addition, remote linking would become particularly hairy once you start throwing libraries into the mix. If the remote machine linked your program against its local version of a library, you've opened up potential problems when the linked binary is returned to the local machine.
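For instance, a typical link line names libraries rather than concrete files, so which file actually gets linked depends on the machine performing the link (a small sketch; -lz is just a placeholder library):

    # -lz is resolved against the linking machine's own library search
    # path, so a remote host with a different libz installed could
    # silently link a different version than the one you test against.
    gcc main.o -lz -o app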


The reason that compilation can be sent to other machines is that each source file is compiled independently of the others. In simple terms, for each .c input file there is a .o output file from the compilation step. This can be done on as many different machines as you like.

On the other hand, linking collects all the .o files and creates one output binary. There isn't really much to distribute there.
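In practice that is exactly how distcc gets used: the per-file compile jobs are farmed out, while the single link job runs on the machine where make was invoked (a sketch; the host names and -j value are placeholders):

    # Hosts that volunteer CPU time for compilation.
    export DISTCC_HOSTS="localhost buildbox1 buildbox2"

    # Each .c -> .o compile is an independent job, so make can run them
    # in parallel across the hosts above via distcc; the final link of
    # all the .o files into one binary still runs on this machine.
    make -j12 CC="distcc gcc"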
