Unit-testing – How to detect dependency problems with unit tests when you use mock objects

mocking, testing, unit testing

You have a class X and you write some unit tests that verify behaviour X1.
There's also a class A, which takes X as a dependency.

When you write unit tests for A, you mock X. In other words, while unit testing A, you stipulate that X's mock behaves like X1.
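
To make the scenario concrete, here is a minimal sketch in Python. The names X, A, process and run are hypothetical, and behaviour X1 is assumed to be "doubling" purely for illustration:

```python
import unittest
from unittest.mock import Mock


class X:
    """Hypothetical dependency: behaviour "X1" is that values are doubled."""
    def process(self, value):
        return value * 2


class A:
    """Class under test; delegates part of its work to X."""
    def __init__(self, x):
        self.x = x

    def run(self, value):
        return self.x.process(value) + 1


class TestA(unittest.TestCase):
    def test_run_with_mocked_x(self):
        # The mock is configured to reproduce X's *current* behaviour X1:
        # process(5) would return 10.
        x_mock = Mock()
        x_mock.process.return_value = 10

        a = A(x_mock)

        self.assertEqual(a.run(5), 11)


if __name__ == "__main__":
    unittest.main()
```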
Time goes by, people use your system, needs change, and X evolves: you modify X to exhibit behaviour X2. Obviously, the unit tests for X will fail and you will need to adapt them.

But what about A? The unit tests for A will not fail when X's behaviour is modified (because X is mocked). How do you detect that A's outcome will be different when it runs against the "real" (modified) X?
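
Continuing the hypothetical sketch, suppose behaviour X2 is "tripling": X's own test goes red and gets updated, but A's mocked test stays green even though A now behaves differently with the real X:

```python
import unittest
from unittest.mock import Mock


class X:
    """X after the change: behaviour "X2" triples the value instead of doubling it."""
    def process(self, value):
        return value * 3


class A:
    """Unchanged consumer of X from the earlier sketch."""
    def __init__(self, x):
        self.x = x

    def run(self, value):
        return self.x.process(value) + 1


class TestX(unittest.TestCase):
    def test_process(self):
        # Written against X1, so it now fails: the change is noticed here.
        self.assertEqual(X().process(5), 10)


class TestA(unittest.TestCase):
    def test_run_with_mocked_x(self):
        # Still green: the mock keeps answering as X1 did,
        # regardless of what the real X does now.
        x_mock = Mock()
        x_mock.process.return_value = 10
        self.assertEqual(A(x_mock).run(5), 11)


if __name__ == "__main__":
    unittest.main()
```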

I'm expecting answers along the lines of "that's not the purpose of unit testing", but what value does unit testing have then? Does it really only tell you that, as long as all tests pass, you haven't introduced a breaking change?
And when some class's behaviour changes (intentionally or not), how can you detect, preferably in an automated way, all the consequences? Shouldn't we focus more on integration testing?

Best Answer

When you write unit tests for A, you mock X

Do you? I don't, unless I absolutely have to. I have to if:

  1. X is slow, or
  2. X has side effects

If neither of these applies, then my unit tests of A will test X too. Doing anything else would be taking test isolation to an illogical extreme.

If you have parts of your code using mocks of other parts of your code, then I'd agree: what is the point of such unit tests? So don't do this. Let those tests use the real dependencies; they are far more valuable tests that way.
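
As a hedged sketch of what that looks like with the same hypothetical classes: the test for A simply wires up a real X, so any behavioural change in X surfaces in A's tests as well:

```python
import unittest


class X:
    """Real dependency, no mock needed: it is fast and has no side effects."""
    def process(self, value):
        return value * 2


class A:
    def __init__(self, x):
        self.x = x

    def run(self, value):
        return self.x.process(value) + 1


class TestA(unittest.TestCase):
    def test_run_with_real_x(self):
        # A is wired up with the real X, so if X's behaviour changes,
        # this test fails and the consequence for A is detected automatically.
        a = A(X())
        self.assertEqual(a.run(5), 11)


if __name__ == "__main__":
    unittest.main()
```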

And if some folk get upset with you calling these tests "unit tests", then just call them "automated tests" and get on with writing good automated tests.