Java Integration Testing – How Much is Needed?

integration-testing · java · test-automation · testing · unit-testing

A recent debate within my team made me wonder. The basic topic is how much, and what, we should cover with functional/integration tests (sure, the two are not the same, but the example is a dummy where the distinction doesn't matter).

Let's say you have a "controller" class, something like this:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;

public class SomeController {
    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        if (!val.validate(input)) {
            return false; // invalid input
        }

        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false; // nothing to work on
        }

        BusinessStuff businessStuff = tr.transform(stuffs);
        if (businessStuff == null) {
            return false; // transformation yielded nothing
        }

        return calc.check(businessStuff);
    }
}

We need a lot of unit testing for sure (e.g., validation fails, no data in the DB, …); that's not in question.
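For illustration, here is a minimal sketch of two such unit tests, assuming JUnit 5 and Mockito are on the classpath and that Input can be constructed directly (the types are the ones from the example above):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verifyNoInteractions;
import static org.mockito.Mockito.when;

import java.util.Collections;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class SomeControllerTest {

    @Mock Validator val;
    @Mock DataAccess da;
    @Mock SomeTransformer tr;
    @Mock Calculator calc;

    @InjectMocks SomeController controller; // Mockito injects the @Autowired fields

    @Test
    void returnsFalseWhenValidationRejectsTheInput() {
        when(val.validate(any(Input.class))).thenReturn(false);

        assertFalse(controller.doCheck(new Input()));
        verifyNoInteractions(da); // execution stopped at the first guard
    }

    @Test
    void returnsFalseWhenNoStuffIsLoaded() {
        when(val.validate(any(Input.class))).thenReturn(true);
        when(da.loadStuffs(any(Input.class))).thenReturn(Collections.emptyList());

        assertFalse(controller.doCheck(new Input()));
        verifyNoInteractions(tr); // never got as far as the transformer
    }
}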

Our main issue, and the thing we cannot agree on, is how much of this integration tests should cover 🙂

I'm on the side that we should aim for fewer integration tests (test pyramid). What I would cover here is only a single happy/unhappy path where execution returns from the last line, just to see that these pieces don't blow up when put together.
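Roughly like this minimal sketch, assuming Spring's JUnit 5 test support; AppConfig and the fixture helper are hypothetical stand-ins for whatever wires up and seeds the real dependencies:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;

@SpringJUnitConfig(AppConfig.class) // hypothetical config exposing the real beans
class SomeControllerIT {

    @Autowired SomeController controller;

    @Test
    void wiredComponentsSurviveOnePassThrough() {
        // One pass through validator, data access, transformer and calculator.
        Input input = seededValidInput();
        assertTrue(controller.doCheck(input));
    }

    private Input seededValidInput() {
        // Hypothetical: builds an input matching data seeded for the test.
        return new Input();
    }
}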

The problem is that it is not easy to tell why such a test returned false, and that makes some of the guys uneasy (e.g., if we only check the return value, it stays hidden that the test is green merely because someone changed the validation and it now returns false). Sure, we could cover all the cases, but that would be heavy overkill IMHO.
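One middle ground might be the following sketch (assuming the collaborators are beans that can also be autowired into the integration test class above, with assertFalse statically imported as well): assert the preconditions before the actual call, so that a wrong false immediately points at the stage that caused it:

// Additional fields and test inside SomeControllerIT from the sketch above.
@Autowired Validator val;
@Autowired DataAccess da;

@Test
void happyPathActuallyReachesTheCalculator() {
    Input input = seededValidInput();

    // Fail fast, with a precise message, if an earlier stage is the culprit.
    assertTrue(val.validate(input), "fixture was rejected by validation");
    assertFalse(da.loadStuffs(input).isEmpty(), "no seeded data was found");

    assertTrue(controller.doCheck(input));
}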

Does anyone have a good rule of thumb for this kind of issue? Or a recommendation? Some reading? A talk? A blog post? Anything on the topic?

Thanks a lot in advance!

PS: Sorry for the ugly example, but it's quite hard to translate a specific piece of code into an example. Yes, one can argue about throwing exceptions/using a different return type/etc., but our hands are more or less tied because of external dependencies.

PS2: I moved this topic here from SO (original question, marked on hold).

Best Answer

There is a school of thought that seriously questions the value of integration tests.

I suspect you'll get a broad spectrum of answers here, but to my mind you should only use them when they deliver clear value.

Firstly, you should be writing as many unit tests as you can, because a) they're cheap and b) they may be the only tests that run on the build server (should you have one).

It is tempting to write client-side tests to verify the database or some other layer, but such checks should really happen at the appropriate level where possible, rather than being contrived into an integration test. This can of course require some groundwork in the form of interfaces etc. to get mocking and the like working.
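For example, putting the data layer behind an interface keeps the controller testable without a live database. A sketch, with names mirroring the question's example:

import java.util.List;

// The controller depends only on this abstraction.
public interface DataAccess {
    List<Stuff> loadStuffs(Input input);
}

// Production code provides a database-backed implementation, while tests
// can substitute an in-memory stand-in like this one.
class InMemoryDataAccess implements DataAccess {
    private final List<Stuff> stuffs;

    InMemoryDataAccess(List<Stuff> stuffs) {
        this.stuffs = stuffs;
    }

    @Override
    public List<Stuff> loadStuffs(Input input) {
        return stuffs; // ignores the input, which is fine for a test double
    }
}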

Consider also the scope of your testing. If you're simply writing an integration test to cover what already happens when your suite runs, or duplicating a smoke test, then it has limited value. Also, think about whether more logging in pertinent places would be a better option than getting an obscure message from a dozen interconnected components and having to trace it through.
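For instance, logging each early return in the question's doCheck makes a false result self-explanatory. A sketch, assuming SLF4J is available:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Inside SomeController:
private static final Logger log = LoggerFactory.getLogger(SomeController.class);

public boolean doCheck(Input input) {
    if (!val.validate(input)) {
        log.info("doCheck rejected: input failed validation: {}", input);
        return false;
    }

    List<Stuff> stuffs = da.loadStuffs(input);
    if (stuffs.isEmpty()) {
        log.info("doCheck rejected: no stuff found for input: {}", input);
        return false;
    }

    BusinessStuff businessStuff = tr.transform(stuffs);
    if (businessStuff == null) {
        log.info("doCheck rejected: transformation yielded nothing for: {}", input);
        return false;
    }

    return calc.check(businessStuff);
}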

So, assuming you've decided to write some, you need to think about when they'll run. If they can catch a nasty issue as it happens, great, but that is pretty pointless if developers never remember to run them.

Finally, integration tests are an utterly unsuitable replacement for smoke testing and user testing. If you bother with them at all, they should form a small part of a well-designed testing process.