Microservices – Have We Come Full Circle Back to Old School Approaches?

client-server, distributed-computing, enterprise-architecture, java, middleware

In terms of software architecture and design, how do microservices "stack up" (pun intended) against middleware? I'm coming from Java, and it seems like as you move away from straight REST as an API, and abstract away different layers and connection parameters, at least in Java, you've almost come full circle back to some very old school ideas. We've come back to virtualization…whereas the JVM is already virtual.

In an agnostic way, you can (and I would argue there are advantages to) abstract a RESTful API to CORBA. Or, in a more Java-centric way, to JMS or MDB.

At one time EJB was a big deal in Java, then it was recognized to be a bit of a cluster eff, but, now, are we back to the beginning?

Or, do microservices offer something which CORBA, or even better, MDB, lacks? When I read (TLDR) Martin Fowler explaining microservices, it strikes me as a good solution to a bad problem, if you will. Or rather, a closed-minded approach which introduces a level of complexity that only pushes the problem around. If the services truly are micro, and are numerous, then each has a dollar cost to run and maintain.

Furthermore, if one microservice amongst many changes its API, then everything depending on that service breaks. That doesn't seem loosely coupled; it seems the opposite of agile. Or am I misusing those words?

Of course, there are an indeterminate number of choices between these extremes.

Shark versus Gorilla…go! (For the pedantic, that's meant to be ironic, and isn't my intention at all. The question is meant to be taken at face value. If the question can be improved, please do so, or comment and I'll fix.)

Envision a multitude of microservices running in Docker, all on one machine, talking to each other…madness. Difficult to maintain or administer, and next to impossible to ever change anything, because any change will cascade and cause unforeseeable errors. How is it somehow better that these services are scattered across different machines? And, if they're distributed, then surely some very, very old school techniques have solved, at least to a degree, distributed computing.

Why is horizontal scaling so prevalent, or at least desirable?

Best Answer

TL;DR. I have had the pleasure of drinking a lot of microservice-flavored Kool-Aid, so I can speak a bit to the reasons behind them.

Pros:

  • Services know that their dependencies are stable and have had time to bake in.
  • They allow rolling deployments of new versions.
  • They allow components to be reverted without affecting higher layers.

Cons:

  • You cannot use the new and shiny features of your dependencies.
  • You can never break API backwards compatibility (or at least not for many development cycles).

I think that you fundamentally misunderstand how a microservice architecture is supposed to work. The way it is supposed to be run is that every microservice (referred to from here on in as MS) has a rigid API that all of its clients agree upon. The MS is allowed to make any changes that it wants as long as the API is preserved. The MS can be thrown out and rewritten from scratch, as long as the API is preserved.
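In Java terms, that rigid contract is just an interface that clients compile against. Here is a minimal sketch, assuming a hypothetical order-lookup service; the names (`OrderService`, `findOrderStatus`, `InMemoryOrderService`) are illustrative, not from any real framework:

```java
// Hypothetical contract for an order-lookup MS. Clients agree on this
// interface and nothing else; it is the part that must stay stable.
interface OrderService {
    String findOrderStatus(String orderId);
}

// One implementation of the contract. It can be thrown out and
// rewritten from scratch (new language, new datastore, whatever)
// without touching any client, as long as the interface holds.
class InMemoryOrderService implements OrderService {
    @Override
    public String findOrderStatus(String orderId) {
        return "order-42".equals(orderId) ? "SHIPPED" : "UNKNOWN";
    }
}

public class Main {
    public static void main(String[] args) {
        // A client only ever sees the contract type.
        OrderService svc = new InMemoryOrderService();
        System.out.println(svc.findOrderStatus("order-42")); // prints SHIPPED
    }
}
```

The same idea applies whether the "interface" is a Java type, a REST schema, or a CORBA IDL: the boundary is fixed, everything behind it is disposable.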

To aid in loose coupling, every MS depends on version n-1 of its dependencies. This allows the current version of the service to be less stable and a bit more risky. It also allows versions to come out in waves: first one server is upgraded, then half, and finally the rest. If the current version ever develops any serious issues, the MS can be rolled back to a previous version with no loss of functionality in other layers.

If the API needs to be changed, it must be changed in a way that is backwards compatible.
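"Backwards compatible" in practice means additive changes only: new parameters arrive as new overloads, and the old signature keeps working by delegating to the new one. A sketch, using a made-up `GreetingService` for illustration:

```java
class GreetingService {
    // v1 signature: existing clients call this, so it must survive
    // (at least for many development cycles). It delegates to v2.
    public String greet(String name) {
        return greet(name, "en");
    }

    // v2 addition: a purely additive overload. Old clients never see
    // it; new clients can opt in. Nothing was removed or renamed.
    public String greet(String name, String locale) {
        return "fr".equals(locale) ? "Bonjour, " + name : "Hello, " + name;
    }
}

public class Main {
    public static void main(String[] args) {
        GreetingService svc = new GreetingService();
        System.out.println(svc.greet("Ada"));       // old clients still work
        System.out.println(svc.greet("Ada", "fr")); // new behavior is opt-in
    }
}
```

A breaking change (removing `greet(String)` or changing its meaning) would instead require publishing a new API version and running both side by side until every client migrates.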
