Unit-testing – Integration Testing: Testing Service to Service

integration-tests, rest, unit-testing, web-services

I'm looking for some advice on testing strategies for service to service communication.

I have one service (service A) that makes a call to another service (B), which is a REST API. Both services are owned by me.

I have some unit tests around the service calls, and I simply mock the HTTP library so no requests are actually sent to the service. That works well for unit testing, but I was wondering whether it is worthwhile to add some integration tests that actually exercise the service calls and responses.
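For illustration, here is a minimal sketch of that setup in Python, assuming the client uses the requests library; the module layout, the `/orders` endpoint and the `Order` type are hypothetical stand-ins for whatever service A actually does:

```python
# Hypothetical sketch: service A's client for service B, plus a unit test that
# patches the HTTP library so no request ever leaves the process.
from dataclasses import dataclass
from unittest import mock

import requests

SERVICE_B_URL = "https://service-b.internal"  # hypothetical base URL


@dataclass
class Order:
    id: int
    status: str


def get_order(order_id: int) -> Order:
    """Service A's call into service B's REST API."""
    resp = requests.get(f"{SERVICE_B_URL}/orders/{order_id}", timeout=5)
    resp.raise_for_status()
    data = resp.json()
    return Order(id=data["id"], status=data["status"])


def test_get_order_parses_response():
    fake = mock.Mock(status_code=200)
    fake.json.return_value = {"id": 42, "status": "shipped"}

    with mock.patch("requests.get", return_value=fake) as fake_get:
        order = get_order(42)

    fake_get.assert_called_once_with(f"{SERVICE_B_URL}/orders/42", timeout=5)
    assert order == Order(id=42, status="shipped")
```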

The problem I see is service B updates a database so any integration tests in service A will have to reset any changes they make by calling the DB directly. To me this doesn't seem ideal as now service A knows more about the implementation of service B than it should.

Are these tests valuable? When I've seen these kinds of tests before, they have often been brittle and reliant on development environments being in a good state. If this were a third-party API, for example, I wouldn't have tests which call it directly.

I can think of two options:

  1. Write the integration tests in service A and have these tests call service B's database to reset/insert data as needed.

  2. Stick with mocks and don't add integration tests to service A. Instead, add some functional tests to service B which test the various REST endpoints (see the sketch after this list).
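As a rough illustration of option 2, here is what a black-box functional test living alongside service B might look like. The base URL, the `/orders` resource and the payload are hypothetical; any cleanup could go through B's own API rather than its database.

```python
# Hypothetical sketch of option 2: a functional test in service B's own suite
# that exercises its REST endpoints over HTTP against a running instance.
import os
import uuid

import requests

BASE_URL = os.environ.get("SERVICE_B_URL", "http://localhost:8080")


def test_create_then_fetch_order():
    # A unique reference keeps the test independent of pre-existing data,
    # so no direct database access is needed for setup or cleanup.
    reference = str(uuid.uuid4())

    created = requests.post(
        f"{BASE_URL}/orders",
        json={"reference": reference, "quantity": 1},
        timeout=5,
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["reference"] == reference
```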

Any advice or thoughts?

Best Answer

In my experience, our tests should not be bound to dependencies whose execution is out of our control.

First of all, let's narrow down the scope of the tests. As stated in the question, the service under test is A, so let's focus on testing A first.

One important thing to achieve with tests is determinism. It should be possible to execute all our tests in any order, at any time, and in any environment. Ideally, these tests reproduce real use cases under ideal conditions.

To achieve determinism, we need to remove any source of non-determinism, for example integrations with services (of any sort) that we cannot control, or services that we cannot bring up to the precise state we need for each of our tests.

Some will argue that this is not an integration test because we are removing the integration. To me, the key is testing our code in isolation from external interference. With the right technique I can reproduce different integration behaviours such as latency, timeouts, downtime, etc. I have also found it important to remove dependencies that slow down the testing time, especially if we do CI/CD. By isolating my tests from these dependencies, I can split my testing into different stages, prioritize the faster ones and schedule the slower ones.
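As one way to reproduce such behaviours deterministically, here is a Python sketch (with hypothetical names and endpoints) of an in-process stand-in for service B that answers slowly, so the client's timeout handling can be exercised the same way on every run:

```python
# Hypothetical sketch: a local stand-in for service B with artificial latency,
# used to test timeout handling deterministically.
import http.server
import json
import threading
import time

import pytest
import requests


class SlowServiceB(http.server.BaseHTTPRequestHandler):
    delay_seconds = 0.5  # simulate a slow dependency

    def do_GET(self):
        time.sleep(self.delay_seconds)
        body = json.dumps({"id": 42, "status": "shipped"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


def test_client_times_out_against_slow_dependency():
    server = http.server.HTTPServer(("127.0.0.1", 0), SlowServiceB)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]

    try:
        # The HTTP call uses a timeout shorter than the injected delay.
        with pytest.raises(requests.exceptions.Timeout):
            requests.get(f"http://127.0.0.1:{port}/orders/42", timeout=0.1)
    finally:
        server.shutdown()
```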

The problem I see is service B updates a database so any integration tests in service A will have to reset any changes they make by calling the DB directly.

B writing to the DB is almost anecdotal. The problem is testing a real service B, because we are (indirectly) testing B's code and the environment where B is running!

The real danger is in the unknown conditions under which B is living at the moment of testing. In the worst case, these conditions make our tests fail. And if they fail, they fail due to issues unrelated to our code. Such failures don't give us meaningful feedback about the state of the code being tested, and they slow us down.

As mentioned, writing to the DB is anecdotal; there are many more things that could go wrong:

  • Service B has no test environment.
  • Service B has a test environment, but a new version has been deployed which includes breaking changes.
  • Service B is buggy.
  • Service B data storage is unavailable.
  • Service B is unavailable.
  • Service B responds with corrupt data.
  • Service B is under test, and the data is changing over time.

You might be wondering why non-deterministic tests are dangerous. I suggest reading Fowler's blog post Eradicating Non-Determinism in Tests, and check out the following question too; Doc's answer summarises the subject very well.

Flaky tests are examples of non-deterministic tests: tests that fail due to undetermined circumstances. These tests fail now and then, we don't know why, and we cannot reproduce the issue.

A test suite with flaky tests can become a victim of what Diana Vaughan calls normalization of deviance - the idea that over time we can become so accustomed to things being wrong that we start to accept them as being normal and not a problem.

Building Microservices, by Sam Newman

The normalization of deviance is the seed of evil.

Any advice or thoughts?

When testing integrations, neither the data nor the behaviour of the external service should worry you. At least not yet. [1]

What should worry you is testing the correct consumption of the interface (API) and the proper handling of its responses (error handling, deserialization, mappings, etc.). In other words, the contract.
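For example, consumer-side tests can pin down exactly that: how responses are deserialized and how errors are mapped, against stubbed responses rather than the real service B. A Python sketch follows; the OrderNotFound error, URL and field names are hypothetical.

```python
# Hypothetical sketch: consumer-side tests for the contract as service A uses it,
# covering response mapping and error handling against stubbed HTTP responses.
from unittest import mock

import pytest
import requests


class OrderNotFound(Exception):
    """Domain error raised by service A when service B answers 404."""


def fetch_order_status(order_id: int) -> str:
    resp = requests.get(f"https://service-b.internal/orders/{order_id}", timeout=5)
    if resp.status_code == 404:
        raise OrderNotFound(order_id)
    resp.raise_for_status()
    return resp.json()["status"]  # the field mapping under test


def _stubbed(status_code, payload):
    stub = mock.Mock(status_code=status_code)
    stub.json.return_value = payload
    return stub


def test_status_field_is_mapped():
    with mock.patch("requests.get", return_value=_stubbed(200, {"status": "shipped"})):
        assert fetch_order_status(7) == "shipped"


def test_404_is_translated_into_a_domain_error():
    with mock.patch("requests.get", return_value=_stubbed(404, {})):
        with pytest.raises(OrderNotFound):
            fetch_order_status(7)
```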

Lately, I have started working with Test Doubles and Consumer-Driven Contract tests, with very positive results.

It's true that building and maintaining these tests requires additional effort; that has been our case. However, we have significantly reduced build, test and deployment times, and we get faster and more meaningful feedback from CI.

In line with the above and @Justin's answer, you might be interested in tools like Mountebank.
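For instance, a test-suite setup step could register an HTTP imposter for service B through Mountebank's admin API. This is a sketch assuming mb is already running on its default admin port 2525; the port 4545 and the /orders stub are hypothetical.

```python
# Hypothetical sketch: registering a service B imposter with a running Mountebank
# instance. Service A's tests then point their service-B base URL at port 4545.
import requests

MOUNTEBANK_ADMIN = "http://localhost:2525"

SERVICE_B_IMPOSTER = {
    "port": 4545,
    "protocol": "http",
    "stubs": [
        {
            "predicates": [{"equals": {"method": "GET", "path": "/orders/42"}}],
            "responses": [
                {
                    "is": {
                        "statusCode": 200,
                        "headers": {"Content-Type": "application/json"},
                        "body": {"id": 42, "status": "shipped"},
                    }
                }
            ],
        }
    ],
}


def create_imposter():
    resp = requests.post(f"{MOUNTEBANK_ADMIN}/imposters", json=SERVICE_B_IMPOSTER, timeout=5)
    resp.raise_for_status()


def delete_imposter():
    requests.delete(f"{MOUNTEBANK_ADMIN}/imposters/4545", timeout=5)
```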


[1]: There is a place for tests that validate the real behaviour of external services. They can live outside of the build pipeline. They might or might not be essential for a green deployment; that depends on whether or not you can circumvent the issues raised by the service. It's almost a political question rather than a technical one.