Continuous Integration – Running Unit Tests in Version Control Hooks

continuous-integration · hooks · version-control

From a technical point of view, it is possible to add pre- or post-push hooks that run the unit tests before allowing a commit to be merged into the remote default branch.
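For illustration, such a hook might look roughly like this (a minimal sketch; `make test` is just a placeholder for whatever command runs the project's suite):

```shell
#!/bin/sh
# Minimal pre-push hook sketch -- save as .git/hooks/pre-push and make it
# executable. The test command is an assumption; substitute your project's
# real runner (make test, pytest, dotnet test, ...).

run_tests() {
    # TEST_CMD defaults to `true` here only so the sketch is harmless;
    # in a real hook this would be the actual test suite.
    ${TEST_CMD:-true}
}

echo "Running unit tests before push..."
if ! run_tests; then
    echo "Unit tests failed -- push rejected." >&2
    exit 1   # a non-zero exit code makes git abort the push
fi
echo "Tests passed -- push allowed."
```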

My question is: is it better to keep the unit tests in the build pipeline (thus allowing broken commits into the repository), or is it better simply not to let "bad" commits happen in the first place?

I do realize that I'm not limited to these two options. For instance, I could allow all commits to feature branches and run the tests before pushing a merge commit to the repository. But if you had to choose between exactly these two solutions, which one would you choose, and for what reasons?

Best Answer

No, running the tests from a hook is not the better option, for two reasons:

Speed

Commits should be fast. A commit that takes even 500 ms, for example, is too slow and will encourage developers to commit more sparingly. Given that any project larger than a Hello World will have dozens or hundreds of tests, running them all during a pre-commit hook takes too much time.

Of course, things get worse for larger projects with thousands of tests which run for minutes on a distributed architecture, or weeks or months on a single machine.

The worst part is that there is not much you can do to make it faster. A small Python project with, say, a hundred unit tests takes at least a second to run on an average server, and often much longer. For a C# application, it will average four to five seconds because of the compile time.

From that point, you can either pay an extra $10,000 for a better server, which will reduce the time, but not by much, or run the tests on multiple servers, which for a small suite will only slow things down because of the distribution overhead.

Both options pay off when you have thousands of tests (as well as functional, system, and integration tests), allowing you to run them in a matter of minutes instead of weeks, but this won't help you on small-scale projects.

What you can do, instead, is to:

  • Encourage developers to run, locally before committing, the tests strongly related to the code they modified. They can't possibly run thousands of unit tests, but they can run five to ten of them.

    Make sure that finding relevant tests and running them is actually easy (and fast). Visual Studio, for example, is able to detect which tests may be affected by changes done since the last run. Other IDEs/platforms/languages/frameworks may have similar functionality.

  • Keep the commit as fast as possible. Enforcing style rules is OK, because it is often the only place to do it, and because such checks are usually amazingly fast. Doing static analysis is OK as long as it stays fast, which is rarely the case. Running unit tests is not OK.

  • Run unit tests on your Continuous Integration server.

  • Make sure developers are informed automatically when they break the build (or when unit tests fail, which is practically the same thing if you consider a compiler to be a tool that checks for some of the possible mistakes you can introduce into your code).

    For example, going to a web page to check the last builds is not a solution. They should be informed automatically. Showing a popup or sending an SMS are two examples of how they may be informed.

  • Make sure developers understand that breaking the build (or failing regression tests) is not OK, and that as soon as it happens, their top priority is to fix it. It doesn't matter whether they are working on a high-priority feature that their boss wants shipped by tomorrow: they broke the build, and they should fix it.
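The first suggestion above, running only the handful of tests covering the changed code, is cheap at the command line. A sketch using Python's built-in unittest runner (the file and test names are made-up examples; use your own stack's runner):

```shell
# A full suite may take minutes, but the few tests covering the code you
# just changed run in about a second. Create a tiny example test file:
mkdir -p /tmp/demo_tests
cat > /tmp/demo_tests/test_billing.py <<'EOF'
import unittest

class BillingTests(unittest.TestCase):
    def test_total(self):
        self.assertEqual(2 + 2, 4)

if __name__ == "__main__":
    unittest.main()
EOF

# Run just that one file instead of the whole suite:
python3 /tmp/demo_tests/test_billing.py -v
```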

Security

The server that hosts the repository shouldn't run custom code such as unit tests, especially for security reasons. Those reasons were already explained in "CI runner on same server of GitLab?"

If, on the other hand, your idea is to launch a process on the build server from the pre-commit hook, then it will slow the commits down even more.
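The safer division of labour, where the repository server merely notifies the CI server and the tests run elsewhere, can be sketched as a post-receive hook. Everything here is an assumption: the webhook URL is a placeholder, and real CI systems (Jenkins, GitLab CI, and others) provide their own trigger mechanisms.

```shell
#!/bin/sh
# hooks/post-receive sketch: notify CI asynchronously and return at once,
# so pushes stay fast. The endpoint is a hypothetical placeholder.

notify_ci() {
    ref="$1"; rev="$2"
    # Fire-and-forget: backgrounded, output discarded, short timeout.
    curl -fsS -m 5 -d "ref=$ref&rev=$rev" \
        "${CI_WEBHOOK:-https://ci.example.com/trigger}" >/dev/null 2>&1 &
}

# git feeds post-receive one "oldrev newrev refname" line per updated ref.
while read oldrev newrev refname; do
    notify_ci "$refname" "$newrev"
done
wait   # with no operands, wait exits 0, so the push itself never fails
```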