Tools to diagnose slow compile/build reasons
In a number of situations as a programmer, I've found that my compile times are slower than I would like, and I want to understand the reasons and fix them. Tricks particular to the language (yes, I'm using C/C++) have already been discussed, and we apply many of them. I've also seen this question and realize it's related. What I'm more interested in is what tools people use to diagnose hardware/system bottlenecks in the build process. Is there a standard way to prove "Disk reads/writes are too slow for our builds - we need SSDs!" or "The anti-virus settings are killing our build times!", etc.?
Resources I've found, none directly related to compiling performance diagnosis:
- A TechNet article about using PerfMon (Quite good, close to what I'd like)
- This IBM link detailing some PerfMon information, but it's not specific to compiling and appears somewhat out of date.
- A webpage specifically describing diagnosis of avg disk queue length
Currently, diagnosing a slow build is very much an art, and my tools of choice are:
- PerfMon
- Process Explorer
- Process Monitor
- Push hard enough to get a machine to "just try it". (Basically, trial and error.)
What do others do to diagnose system-level build performance bottlenecks? Can we come up with a list of PerfMon or Process Explorer statistics to watch, with thresholds for what's "acceptable" on a modern machine? (A sketch for sampling these counters programmatically follows the lists below.)
PerfMon:
- CPU -> % Processor Time
- MEMORY -> Pages/sec
- DISK -> Avg. Disk Queue Length
Process Explorer:
- CPU -> CPU
- DISK -> I/O Delta Total
- MEMORY -> Page Faults
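To put numbers behind the PerfMon counters above, here is a minimal sketch (assuming Windows Vista or later) that samples them through the Windows PDH API while a build runs. The thresholds in the comments are common rules of thumb, not hard limits, so treat them as starting points:

```cpp
// Sample the PerfMon counters discussed above once a second via PDH.
// Build with the Windows SDK; links against pdh.lib.
#include <windows.h>
#include <pdh.h>
#include <iostream>
#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY query = nullptr;
    PdhOpenQueryW(nullptr, 0, &query);

    PDH_HCOUNTER cpu = nullptr, pages = nullptr, diskq = nullptr;
    // CPU pegged near 100% for the whole build: compute-bound, not I/O-bound.
    PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &cpu);
    // Pages/sec consistently high (hundreds+) hints at memory pressure.
    PdhAddEnglishCounterW(query, L"\\Memory\\Pages/sec", 0, &pages);
    // Queue length persistently above ~2 per spindle points at slow disk.
    PdhAddEnglishCounterW(query, L"\\PhysicalDisk(_Total)\\Avg. Disk Queue Length", 0, &diskq);

    PdhCollectQueryData(query);            // baseline sample (rate counters need two)
    for (int i = 0; i < 60; ++i) {         // sample for a minute during the build
        Sleep(1000);
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE c, p, d;
        PdhGetFormattedCounterValue(cpu,   PDH_FMT_DOUBLE, nullptr, &c);
        PdhGetFormattedCounterValue(pages, PDH_FMT_DOUBLE, nullptr, &p);
        PdhGetFormattedCounterValue(diskq, PDH_FMT_DOUBLE, nullptr, &d);
        std::cout << "CPU% " << c.doubleValue
                  << "  Pages/sec " << p.doubleValue
                  << "  DiskQueue " << d.doubleValue << "\n";
    }
    PdhCloseQuery(query);
    return 0;
}
```

Logging the three side by side during a real build makes the "CPU vs. memory vs. disk" argument much easier to settle than eyeballing the PerfMon GUI.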
I resolved a "too slow build" issue with Eclipse and Spring recently. For me the solution was the Vista Resource Monitor, which showed CPU spiking (but not consistently high) and quite a bit of disk activity. I then used Procmon from Sysinternals to identify exactly which files were being heavily accessed.
Part of our build process also involved checking external Maven (binary file) repositories for updates on every build. I disabled that check (which also gives me complete control over when I update those dependencies). If you have resources external to the build machine, benchmark how long it takes to access them (source control, Maven, etc.).
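For that benchmarking, a minimal sketch using std::chrono: the two commands here are illustrative placeholders (a Git fetch and a Maven dependency resolve), so substitute whatever external resources your build actually touches:

```cpp
// Rough wall-clock benchmark of external-resource access during a build.
// The commands are placeholders; swap in your own (svn up, p4 sync, ...).
#include <chrono>
#include <cstdlib>
#include <iostream>

// Run a shell command and report elapsed wall-clock seconds.
static double time_command(const char* cmd) {
    auto start = std::chrono::steady_clock::now();
    std::system(cmd);
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main() {
    std::cout << "git fetch:     "
              << time_command("git fetch --dry-run") << " s\n";
    std::cout << "maven resolve: "
              << time_command("mvn -q dependency:resolve") << " s\n";
    return 0;
}
```

If these numbers dwarf the actual compile time, the fix is caching or disabling the check, not faster hardware.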
Since I'm stuck on 32-bit Vista for now, I decided to try creating a RAM disk from the ~700MB of non-addressable memory (the PC has 4GB; Vista only exposes 3.3GB) and to place the heavily accessed files, as identified by Procmon, on the RAM disk, using a nice trick of creating drive junctions to make the move transparent to my IDE. For details see here.
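The move itself is just a copy plus an NTFS junction. A rough sketch of the steps, shelling out to cmd built-ins from C++ (the R: drive and paths are hypothetical; adjust to whatever Procmon flagged as hot, and run elevated):

```cpp
// Relocate a heavily accessed directory to the RAM disk (assumed R:) and
// leave a junction at the old path so the IDE never notices the move.
// Paths are hypothetical examples only.
#include <cstdlib>

int main() {
    std::system("robocopy C:\\project\\build R:\\build /MIR"); // copy onto RAM disk
    std::system("rmdir /S /Q C:\\project\\build");             // remove the original
    std::system("mklink /J C:\\project\\build R:\\build");     // junction old -> new
    return 0;
}
```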
I have used FileMon to see which header files a C++ build was opening most often, then:
- Added "#ifndef" include guards so each header is only processed once (a sketch follows this list)
- Used precompiled headers
- Combined some small header files
- Reduced the number of header files included by other header files by tidying up the code
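A minimal include-guard sketch (the header name is hypothetical): the guard stops repeated inclusion from re-expanding the contents within a translation unit, though the preprocessor may still have to open and scan the file each time it is #included:

```cpp
// widget.h -- hypothetical header. The guard ensures the body is compiled
// at most once per translation unit, however many times it is #included.
#ifndef WIDGET_H
#define WIDGET_H

struct Widget {
    int id;
};

#endif // WIDGET_H
```

Many compilers recognize this idiom and skip reopening the file on subsequent includes, which is part of why the tidy-up in the last bullet pays off.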
However, these days I would start with a RAM disk and/or an SSD; bear in mind that opening lots of header files still uses a lot of CPU time.