Microservices – How to Structure Them in Your Repository

architecture, microservices, repository, repository-pattern

I am assigned to a project where we have about 20 microservices. Each of them lives in a separate repository without any references to the others, apart from one NuGet package where we maintain some generic code like math functions. Each service references the others by calling their endpoints.

The advantages of this are:

  • Each service is highly independent. (In reality this point is up for discussion, as a change to the API of one service is likely to affect multiple others.)

  • Best practice – according to people I have talked to

The disadvantages are:

  • No code re-use.

  • Some DTO objects are defined multiple times (maybe up to ten times).

  • The ServiceCommunication helper classes that wrap a service's endpoints for ease of use are duplicated multiple times, once in each consuming repo.

  • API changes are hard to keep track of; we often only see the failure in Test/Production.

I think the following is a better way to structure the project:
One repo.
Each microservice provides a Server.Communication helper class that wraps its endpoints, plus a selection of Server.Dto types which the Server.Communication class returns from its API calls. If another service wishes to use it, it references these directly, something like the sketch below.
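
For illustration, a minimal sketch of what I have in mind; the names (Orders.Dto, Orders.Communication, OrdersClient) and the endpoint path are invented for the example, and in the monorepo other services would reference these projects instead of redefining the DTOs:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

namespace Orders.Dto
{
    // DTO owned by the Orders service and shared via a project reference,
    // instead of being re-declared in every consuming repository.
    public record OrderDto(Guid Id, string CustomerName, decimal Total);
}

namespace Orders.Communication
{
    using Orders.Dto;

    // Thin wrapper around the Orders service's HTTP endpoints.
    public class OrdersClient
    {
        private readonly HttpClient _http;

        public OrdersClient(HttpClient http) => _http = http;

        public Task<OrderDto?> GetOrderAsync(Guid id) =>
            _http.GetFromJsonAsync<OrderDto>($"/api/orders/{id}");
    }
}
```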

I hope I explained the problem well enough. Is this a better solution that will address some of my issues, or will I end up creating unforeseen problems?

Best Answer

No code reuse is usually understood as a selling point for microservices!

  • the microservices can be developed and deployed independently
  • different microservices can use different technologies, in particular different programming languages

If this does not seem like an advantage – in particular if all microservices are developed by one team, using one technology stack, and deployed together – then maybe you don't need microservices, but a set of libraries. And you could maintain all libraries/services within one monorepo.

If we ignore any scalability arguments for a moment, libraries are vastly preferable to microservices. A microservice API is effectively dynamically typed, which can be a source of errors – just as you experienced. In contrast, library APIs are usually statically typed, which can prevent a whole class of errors via compile-time type checking. Also, IntelliSense is nice. Libraries that run within the same process tend to be much easier to use than distributed systems, which have their own challenges, especially around network failures, consistency, and distributed transactions.

Using a microservice architecture means that you accept these drawbacks because microservices allow you to address even bigger problems, such as organizational scalability (let different teams develop and deploy their services independently) and technical scalability (scale different parts of the system separately).

There is a possible middle ground between libraries and microservices that can make your life a bit easier.

For example, a microservice can provide a client library for connecting to that microservice. This library handles connection details and provides a set of data transfer objects. If designed correctly, older library versions are still able to interact with newer versions of the microservice, which allows them to be updated somewhat independently. However, this requires that all users of that microservice use the same programming language, and that these users can upgrade their client library within a reasonable time frame. Such an approach may be useful for a team that started using microservices without understanding the implications, but is usually not viable if you are using microservices for their organizational scalability benefits.
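
One reason this version tolerance works in practice is that typical JSON serializers ignore fields the client's DTO does not declare, so a newer service can add fields without breaking an older client library. A minimal sketch with System.Text.Json (the DTO and payload are invented for the example):

```csharp
using System;
using System.Text.Json;

// An older client library's DTO: it only knows about Id and Name.
public record CustomerDto(Guid Id, string Name);

public static class CompatibilityDemo
{
    public static void Main()
    {
        // Response from a newer service version that has added a "loyaltyTier" field.
        const string json =
            "{ \"id\": \"0f8fad5b-d9cb-469f-a165-70867728950e\", \"name\": \"Ada\", \"loyaltyTier\": \"gold\" }";

        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

        // System.Text.Json ignores the unknown field by default, so the old
        // client library keeps working against the newer service.
        var customer = JsonSerializer.Deserialize<CustomerDto>(json, options);
        Console.WriteLine(customer); // CustomerDto { Id = ..., Name = Ada }
    }
}
```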

A milder version of this is to use a service description language that provides a language-agnostic API definition. Clients can then use code generation tools to generate DTOs and connection libraries in their own programming language. This successfully decouples the services from one another, while avoiding the possibility of API mismatch errors. Having to work in this service description language also makes it more difficult to accidentally break backwards compatibility. But it does require you to learn additional tools, and the generated code can be awkward to use. This is usually the way to go in more enterprisey environments.
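
As one hedged example of this approach (nothing from the question, just an illustration): with gRPC, the API lives in a .proto file and every consumer generates its own client from it. Assuming a trivial Greeter service defined in that .proto, the generated C# client is used roughly like this; GreeterClient, HelloRequest and SayHelloAsync are emitted by Grpc.Tools from the shared definition, and the address is an assumption:

```csharp
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;   // NuGet: Grpc.Net.Client

public static class GeneratedClientDemo
{
    public static async Task Main()
    {
        // The channel handles the connection details; the client and request
        // types below come from code generation, not hand-written DTOs.
        using var channel = GrpcChannel.ForAddress("https://greeter.internal:5001");
        var client = new Greeter.GreeterClient(channel);

        var reply = await client.SayHelloAsync(new HelloRequest { Name = "orders-service" });
        Console.WriteLine(reply.Message);
    }
}
```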

You may also be able to apply advanced testing methods to rule out API mismatch, e.g. record/replay integration tests: first, you run a client's test cases against a real microservice and record the requests and responses. The artifact of this recording is shared by the client and the microservice. The recordings can then be used in the client tests to provide a mock microservice that replays the canned responses. The recordings are also used by the microservice to verify that the service continues to provide the same responses. Unfortunately, updating these recordings to account for changes can be difficult. The recordings can also be very fragile, e.g. if irrelevant metadata like exact dates is not sanitized. I'm currently working on transitioning a similar test suite, but the fragility of depending on exact recorded responses makes this incredibly hard.
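
On the client side, one minimal way to replay such a recording is a fake HttpMessageHandler that serves the canned responses; this sketch assumes the recording has been flattened to a dictionary from request keys to response bodies, which is an invented format for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Replays canned responses, keyed by request method and path, so client tests
// can run against the recording instead of the live microservice.
public class ReplayHandler : HttpMessageHandler
{
    private readonly IReadOnlyDictionary<string, string> _recordings;

    public ReplayHandler(IReadOnlyDictionary<string, string> recordings) =>
        _recordings = recordings;

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var key = $"{request.Method} {request.RequestUri?.PathAndQuery}";
        var response = _recordings.TryGetValue(key, out var body)
            ? new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(body) }
            : new HttpResponseMessage(HttpStatusCode.NotFound);
        return Task.FromResult(response);
    }
}

// Usage in a client test; the recording would normally be loaded from the
// shared artifact rather than declared inline:
//
// var handler = new ReplayHandler(new Dictionary<string, string>
// {
//     ["GET /api/orders/42"] = "{ \"id\": 42, \"total\": 9.99 }"
// });
// var httpClient = new HttpClient(handler) { BaseAddress = new Uri("http://orders") };
```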