C++ – Speed up compile time with SSD

Tags: c++, compilation, solid-state-drive, visual-studio-2010

I want to try to speed up the compile time of our C++ projects, which total about 3M lines of code.

Of course, I don't always need to compile every project, but sometimes a lot of source files are modified by others and I need to recompile all of them (for example, when someone updates an ASN.1 source file).

I've measured that compiling a mid-sized project (one that does not involve all the source files) takes about three minutes. I know that's not much, but sometimes it's really tedious waiting for a compile...

I've tried moving the source code to an SSD (an old OCZ Vertex 3 60 GB) that, in benchmarks, is 5 to 60 times faster than the HDD (especially in random reads/writes). Still, the compile time is almost the same (maybe 2-3 seconds faster, but that could just be chance).

Would moving the Visual Studio binaries to the SSD give an additional performance boost?

Just to complete the question: I have a Xeon W3520 @ 2.67 GHz and 12 GB of DDR3 ECC RAM.

Best Answer

This all depends greatly on your build environment and other setup. For example, on my main compile server I have 96 GiB of RAM and 16 cores. The HDD is rather slow, but that doesn't really matter, as almost everything ends up cached in RAM.

On my desktop (where I also compile sometimes) I only have 8 GiB of RAM and six cores. Doing the same parallel build there could be sped up considerably, because six compilers running in parallel eat up enough memory that little is left for caching, and the SSD's speed advantage becomes very noticeable.

There are many things that influence build times, including how CPU-bound versus I/O-bound the work is. In my experience (GCC on Linux), they include:

  • Complexity of the code. Heavy template metaprogramming makes the compiler spend more CPU time; more C-like code can make the I/O for the generated object files (more) dominant (see the toy sketch after this list).
  • Compiler settings for temporary files, like -pipe for GCC.
  • Optimization being used. Usually, the more optimization, the more the CPU work dominates.
  • Parallel builds. Compiling one file at a time will likely never produce enough I/O to push even today's slowest hard disk to its limit. Compiling with eight cores (or more) at once, however, might.
  • OS/filesystem being used. It seems that some filesystems in the past have choked on the access pattern for many files built in parallel, essentially putting the I/O bottleneck into the filesystem code, rather than the underlying hardware.
  • Available RAM for buffering. The more aggressively an OS can buffer your I/O, the less the HDD speed matters. This is why a make -j6 can sometimes be slower than a make -j4, despite there being enough idle cores.
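To make the first point a bit more concrete, here is a toy C++ sketch (the file names and the Fib example are purely illustrative, not taken from any real project). The template-heavy part keeps the compiler busy with compile-time computation, which is pure CPU work; the C-like part is trivial for the front end, so writing the object file (and, in real code, reading headers) makes up a larger share of its build time.

    // cpu_bound.cpp (illustrative): the compiler evaluates this recursion
    // at compile time, so the time goes into CPU work, not disk I/O.
    template <unsigned N>
    struct Fib { enum { value = Fib<N - 1>::value + Fib<N - 2>::value }; };
    template <> struct Fib<0> { enum { value = 0 }; };
    template <> struct Fib<1> { enum { value = 1 }; };

    int compile_time_result() { return Fib<30>::value; }

    // io_shaped.cpp (illustrative): plain C-like code; the compiler is done
    // quickly, so writing the object file (and, in a real code base, reading
    // the included headers) is a relatively larger share of the time.
    int add(int a, int b) { return a + b; }

(In a real code base the difference only shows up at scale, of course; a single small file like either of these compiles in milliseconds either way.)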

To make it short: it depends on enough things that any "yes, it will help you" or "no, it won't help you" is pure speculation, so if you have the possibility to try it out, do it. But don't spend too much time on it: for every hour you spend trying to cut your compile times in half, estimate how often you (or your coworkers, if you have any) could have rebuilt the project in that time, and how that relates to the possible time saved.
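As a rough back-of-the-envelope check (the 30-second figure is just an assumed example, not a measurement): with the three-minute builds mentioned in the question, even if an SSD shaved 30 seconds off every build, you would need about 3600 s / 30 s = 120 rebuilds before a single hour spent experimenting pays for itself.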