
What's the C++ compilation performance bottleneck?

When I do a fresh compilation of my project, which includes 10+ open-source libs, it takes about 40 minutes (on normal hardware).

Question: where are my bottlenecks, really? Hard-drive seeking or CPU GHz? I don't think multi-core would help much, correct?

--Edit 1--

my normal hardware = i3 overclocked to 4.0 GHz, 8 GB 1600 MHz DDR3, and a 2 TB Western Digital drive

--Edit 2--

my code = 10%, libs = 90%. I know I don't have to build everything every time, but I would like to find out how to improve compile performance, so when buying a new PC for development I can make a smarter choice.

--Edit 3--

cc = Visual Studio (damn)


You're wrong: multi-core brings a tremendous speed-up, right up until the moment your hard drive gives up, actually :)

Proof by example: distcc, which brings distributed builds (my builds use about 20 cores in parallel; they're actually bound by the local preprocessing phase).

As for the real bottleneck, it has to do with the #include mechanism. Languages with modules are compiled much faster...
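
For illustration, here is roughly what the module approach looks like in C++20 syntax (a sketch; it assumes a modules-capable compiler, which did not exist when this answer was written, and the module/function names are made up):

    // math.ixx -- the module interface, compiled once
    export module math;
    export int square(int x) { return x * x; }

    // main.cpp -- 'import' loads the compiled result instead of
    // textually re-parsing headers the way #include does
    import math;
    int main() { return square(4); }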


A 40-minute build is most likely (in fact, at 40 minutes I'd go as far as saying near-definitely) caused by poor #include usage. You are including things that don't need to be included; they may only need forward declarations.

Tidying up your code will make a HUGE difference. I know it's a lot of work, but you will be surprised. At one company I worked at, a library that took over 30 minutes to build was optimised down to a 3-minute build in just over a week, by making sure that every #include was needed and by adding forward declarations instead of #includes. To give you an idea of scale, this library was significantly over a million lines of code...
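
To make the idea concrete, here is a minimal sketch (the class names are hypothetical): a header that only uses a type through a pointer or reference can forward-declare it instead of #including its full definition, so changes to that type no longer ripple into every file that includes yours.

    // widget.h
    // Before: #include "engine.h" -- forces every includer to parse engine.h
    class Engine;                    // after: a forward declaration suffices

    class Widget {
    public:
        void attach(Engine* engine); // incomplete types are fine for
    private:                         // pointers, references and declarations
        Engine* engine_;
    };

    // widget.cpp then #includes "engine.h", where the full definition is needed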


Since VS 2010, VS can optionally use multiple cores when compiling a single project. It can also compile multiple projects in parallel. However, the parallel speed-up doesn't seem significant in my experience: Xcode, for example, is much better at parallel builds.
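
For reference, these are the two knobs involved (a sketch; the solution name and core count are illustrative):

    REM Multi-process compilation within one project: the /MP compiler flag
    REM (Project Properties -> C/C++ -> "Multi-processor Compilation")
    cl /MP /c *.cpp

    REM Building independent projects of a solution in parallel:
    msbuild /m:4 MySolution.sln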

Fortunately, you don't have to rebuild the open-source libs every time, right? You could build them once, store the .lib files in version control, and use those for subsequent builds.

Have you tried precompiled header files for your own code? This can yield a massive speedup.
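
A minimal sketch of the classic Visual Studio pattern (the header names are illustrative): collect big, rarely-changing headers in one file, compile it once with /Yc, and reuse the result everywhere with /Yu.

    // stdafx.h -- big, rarely-changing headers only
    #pragma once
    #include <vector>
    #include <string>
    #include <map>

    // stdafx.cpp -- compiled with /Yc"stdafx.h" to produce the .pch
    #include "stdafx.h"

    // every other .cpp is compiled with /Yu"stdafx.h"
    // and must begin with: #include "stdafx.h"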


Multi-core compilation will help, tremendously in most cases.

You'll have to analyze your projects, and the time spent in each phase, in order to determine where the bottlenecks are.

In large C++ projects, the process is typically CPU-bound first, then disk-bound. If it's the other way around, you're probably in header dependency hell.
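
One quick way to measure with the Visual Studio compiler is the /showIncludes flag, which prints every header a translation unit actually pulls in (the file name below is illustrative):

    cl /c /showIncludes myfile.cpp > includes.txt
    REM thousands of lines for a single .cpp is a strong sign of header hell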

There are actually a ton of ways to reduce compile times and dependencies in your projects. The best single reference I know of is by Lakos:

http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620/ref=sr_1_1?ie=UTF8&qid=1296569079&sr=8-1

It's one of the most important and practical C++ books I've read.

You can typically reduce compile times dramatically (e.g., over 40x faster if you take it very seriously), but it may take a lot of work and time to correct existing codebases.


When you compile from scratch, yes, it will take longer. Use the 40-year-old technology of make, which VS includes as project management, to compile only what needs to be compiled after the first run.

That said, C++'s translation unit model plus extensive use of templates can be a significant practical problem.
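
One mitigation on the template side is C++11's explicit instantiation declarations, which tell every translation unit not to instantiate a heavy template itself (a sketch; the Matrix class is a hypothetical stand-in for an expensive template):

    // matrix.h
    template <typename T>
    class Matrix {
    public:
        T get(int i) const { return data[i]; }
        void set(int i, T v) { data[i] = v; }
    private:
        T data[16];
    };

    // suppress implicit instantiation in every file including this header
    extern template class Matrix<double>;

    // matrix.cpp -- the single explicit instantiation, compiled exactly once
    #include "matrix.h"
    template class Matrix<double>;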
