Unit testing – Should one test for algorithmic complexity? If so, how?

Tags: performance-testing, unit-testing

Let's say I am implementing something simple like searching a sorted list/array. The function (in C#) would look similar to:

static int FindIndex(int[] sortedList, int i);

I could implement and test this in terms of functionality, but for obvious reasons I would usually prefer a binary search over a linear search or something intentionally stupid.
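For concreteness, a binary-search implementation of that signature might look like the sketch below (the question only declares `FindIndex`, so this body is illustrative, not the questioner's code):

```csharp
using System;

static class Search
{
    // Iterative binary search: returns the index of i in sortedList,
    // or -1 if it is not present. Takes O(log n) comparisons.
    public static int FindIndex(int[] sortedList, int i)
    {
        int lo = 0, hi = sortedList.Length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (sortedList[mid] == i)
                return mid;
            if (sortedList[mid] < i)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;
    }
}
```

A functional unit test can easily pin down the return values here, but nothing about it distinguishes this from a linear scan.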

So my question is: Should we attempt to write tests that guarantee performance in terms of algorithmic complexity and if so, how?

I have started making arguments on both sides of the "should you" part of this question, but I'd like to see what people say without my arguments to prompt them.

In terms of "how", it gets interesting :) You could imagine parameterizing the comparison operator, and then having a test inject a comparison operator that counts how many times it is invoked, or something along those lines. But just because you can doesn't mean you should…
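The comparison-counting idea might be sketched like this, assuming a hypothetical overload of `FindIndex` that accepts a `Comparison<int>` delegate (this overload is not part of the question's signature):

```csharp
using System;

static class ParametrizedSearch
{
    // Hypothetical variant of FindIndex that delegates all element
    // comparisons, so a test can substitute a counting comparer.
    public static int FindIndex(int[] sortedList, int i, Comparison<int> compare)
    {
        int lo = 0, hi = sortedList.Length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;
            int c = compare(sortedList[mid], i);
            if (c == 0) return mid;
            if (c < 0) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }
}

// In a test, count comparisons and assert they stay logarithmic:
//
//   int count = 0;
//   Comparison<int> counting = (x, y) => { count++; return x.CompareTo(y); };
//   ParametrizedSearch.FindIndex(bigSortedArray, target, counting);
//   // binary search does at most floor(log2 n) + 1 comparisons
//   Assert.IsTrue(count <= Math.Log2(bigSortedArray.Length) + 1);
```

Note that such a test encodes the implementation strategy (binary search) rather than the observable behavior, which is one of the arguments against writing it.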

Has anyone else considered this (probably)? Thanks.

Best Answer

Algorithmic complexity is a theoretical construct and as such it's best "tested" with a pencil and paper. It can't be usefully tested mechanically.

Absolute performance can be tested mechanically and can make useful unit tests. If performance matters, then you can specify a threshold: "this query should take no more than 50ms to run on 10^6 numbers, and no more than 60ms on 10^7 numbers." That you can build a unit test for.
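Such a threshold test is easy to express with a stopwatch. A minimal sketch, using `Array.BinarySearch` as a stand-in for the `FindIndex` under test and a plain exception instead of any particular test framework's assertion (the 50ms-style budget is the answer's hypothetical figure, not a measurement):

```csharp
using System;
using System.Diagnostics;

static class PerfTest
{
    // Fails (throws) if a search over n sorted integers exceeds budgetMs.
    public static void AssertFastEnough(int n, long budgetMs)
    {
        var data = new int[n];
        for (int k = 0; k < n; k++) data[k] = k;   // setup is not timed

        var sw = Stopwatch.StartNew();
        Array.BinarySearch(data, n - 1);           // search for the last element
        sw.Stop();

        if (sw.ElapsedMilliseconds > budgetMs)
            throw new Exception(
                $"Search took {sw.ElapsedMilliseconds}ms on {n} items; budget was {budgetMs}ms");
    }
}
```

Timing tests like this are sensitive to the machine and load they run on, so the budget needs enough headroom to avoid flaky failures.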

The end user doesn't care whether your algorithm is linear or logarithmic; they care whether their UI still responds instantly even when they've got a year's worth of data in the app.