
What are ways of improving build/compile time?

I am using Visual Studio, and it seems that getting rid of unused references and using statements speeds up my build time on larger projects. Are there other known ways of speeding up build time? What about other languages and build environments?

What is typically the bottleneck during build/compile? Disk, CPU, Memory?

What are good references for distributed builds?


The biggest improvement we made for our large C++ project was from distributing our builds. A couple of years ago, a full build would take about half an hour, while it's now about three minutes, of which one third is link time.

We're using a proprietary build system, but IncrediBuild works fine for a lot of people (we couldn't get it to work reliably).


Fixing your compiler warnings should help quite a bit.


Buy a faster computer


At my previous job we had big problems with compilation time, and one of the strategies we used was called the Envelope pattern; see here.

Basically it attempts to minimize the amount of code the pre-processor copies out of headers by minimizing header size. It does this by moving anything that isn't public into a private friend class; here's an example.

foo.h:

class FooPrivate;   // forward declaration only; the definition lives in foo.cpp
class Foo
{
public:
   Foo();
   virtual ~Foo();
   void bar();
private:
   friend class FooPrivate;
   FooPrivate *foo;   // all private data hides behind this pointer
};

foo.cpp:

// FooPrivate must be fully defined before Foo's constructor and
// destructor can create and delete it, so it comes first.
class FooPrivate
{
    int privData;
    char *morePrivData;
};

Foo::Foo()
{
   foo = new FooPrivate();
}

Foo::~Foo()
{
   delete foo;   // FooPrivate is a complete type at this point
}

The more include files you do this with, the more it adds up. It really does help your compilation time.
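For illustration, a hypothetical client of Foo; because it includes only the slim foo.h, edits to FooPrivate's data members recompile foo.cpp alone and leave this translation unit untouched:

// main.cpp -- hypothetical client code
#include "foo.h"

int main()
{
   Foo f;   // all private state is created behind the pointer
   return 0;
}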

It does make things difficult to debug in VC6 though, as I learned the hard way. There's a reason it's a previous job.


If you're using a lot of files and a lot of templated code (STL / BOOST / etc.), then Bulk or Unity builds should cut down on build and link times.

The idea of Bulk Builds is to break your project down into subsections and include all the CPP files in each subsection in a single file. Unity builds take this further by having a single CPP file that is compiled, which includes all other CPP files (see the sketch after the list below).

The reasons this is often faster are:

1) Templates are only evaluated once per Bulk File

2) Include files are opened / processed only once per Bulk File (assuming there is a proper #ifndef FILE__FILENAME__H / #define FILE__FILENAME__H / #endif wrapper in the include file). Reducing total I/O is a good thing for compile times.

3) The linker has much less data to work with (Single Unity OBJ file or several Bulk OBJ files) and is less likely to page to virtual memory.
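As a sketch (with hypothetical file names), a bulk file for one subsection might look like this; the build then compiles gui_bulk.cpp instead of the four individual sources:

// gui_bulk.cpp -- hypothetical bulk translation unit for a "gui"
// subsection. Every shared header is opened once, and every
// template these four files use is instantiated once.
#include "gui/window.cpp"
#include "gui/button.cpp"
#include "gui/menu.cpp"
#include "gui/dialog.cpp"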

EDIT: Adding a couple of links here on Stack Overflow about Unity Builds.


Be wary of broad-sweeping "consider this directory and all subdirectories for header inclusion" type settings in your project. These force the compiler to search every directory until it finds the requested header, which can be a very expensive operation multiplied across however many headers your project includes.
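One common mitigation, sketched here with hypothetical paths: put only the project root on the include search path and spell out the rest of the path at each include site, so every header resolves with a single lookup:

// With src/, src/net/, src/gui/, ... all on the include path,
// the compiler probes each directory in turn to resolve:
//   #include "socket.h"
// With only the project root on the path, one lookup suffices:
#include "net/socket.h"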


Please read this book. It's pretty good on the topic of physically structuring your project into different files so as to minimize rebuilds.

Unfortunately it was written before templates became that important. Templates are the real time killer when it comes to C++ compilation, especially if you make the mistake of using smart pointers everywhere. In that case you can only keep upgrading to the latest CPU and recent SSD drives. MSVC is already the fastest existing C++ compiler if you use precompiled headers.
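Since precompiled headers come up here, a minimal sketch of the conventional MSVC arrangement (the file name pch.h and its contents are illustrative, not mandated):

// pch.h -- hypothetical precompiled header: gather the heavy,
// rarely-changing includes so they are parsed once and reused
// by every translation unit.
#pragma once
#include <vector>
#include <string>
#include <map>
// #include <boost/shared_ptr.hpp>   // heavy template headers belong here too

// Each .cpp then begins with:
//   #include "pch.h"   // must be the first include
// and the project compiles one stub .cpp with /Yc"pch.h" (create)
// while every other source uses /Yu"pch.h" (use).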

What are ways of improving build/compile time?


Visual Studio supports parallel builds (for native C++, the compiler's /MP switch also compiles the files within a project in parallel), which can help, but the true bottleneck is disk I/O.

In C, for instance, if you generate LST (listing) files, your compile will take ages.


Don't compile with debug turned on.


For C++ the major bottleneck is disk I/O. Many headers include other headers back and forth, which causes a lot of files to be opened and read through for each compilation unit.

You can get a significant improvement if you move the sources onto a RAM disk, and even more if you ensure that your source files are read exactly once.

So for new projects I began to include everything in a single file that I call _.cpp. Its structure is like this:

/* Standard headers */
#include <vector>
#include <cstdio>
//...

/* My global macros*/
#define MY_ARRAY_SIZE(X) (sizeof(X)/sizeof(X[0]))

// My headers
#include "foo.h"
#include "bar.h"
//...

// My modules
#include "foo.cpp"
#include "bar.cpp"

And I only compile this single file.

My headers and source files do not include anything themselves, and they use namespaces to avoid clashes with other modules.

Whenever my program needs something new, I add its header and source file to this module only.

This way each source file and header is read exactly once, and the project builds very quickly. Compile times increase only linearly as you add more files, not quadratically. My hobby project is about 40,000 LOC in 500 modules but still compiles in about 10-20 seconds. If I move all sources and headers onto a RAM disk, compile time drops to about 3 seconds.

The disadvantage is that existing codebases are quite difficult to refactor to use this scheme.


For C#: using fixed versions for your assemblies instead of auto-incremented ones greatly speeds up subsequent local builds.

assemblyinfo.cs

// Before: the wildcard makes the version change on every build, so
// every assembly that references this one must be recompiled to pick
// up the new version number.
// [assembly: AssemblyVersion("1.0.*")]

// After: with a fixed version, already-built assemblies that
// reference this one do not need to be recompiled.
[assembly: AssemblyVersion("1.0.0.0")]


Compilation time and the brittle base class problem: I have written a blog post on a way to improve compilation time in C++. Link.

