Docker – Should I split an application into multiple, linked Docker containers or combine them into one?

containers · docker · virtualization

Background

I'm currently working on building an application which I want to deploy to Docker containers.

The containers will run on a server of mine. I want to be able to run other applications on the same server without inflating the number of Docker images and containers that are run.

The different parts / containers as of now are:

  • Nginx (Reverse proxy, static resources)
  • Node (App frontend)
  • Node (App backend / API)
  • Mongo (Database)

Thoughts

The general idea I have is that each distinct part should be run as its own container. My concern is that if I were to run another application on the same machine, I would end up bloating it with an unmanageable number of images and linked containers.

This could be solved by building one image per application, so that the aforementioned services are all part of a single image. Does this conflict with the security model or the overall purpose of Docker in the first place?

Clarification

Does having multiple services in one Docker image conflict with the purpose of Docker?

Will the security benefits of containers be lost when running all the services from one image?

Best Answer

Docker themselves make this clear: You're expected to run a single process per container.

But their tools for dealing with linked containers leave much to be desired. They do offer docker-compose (formerly known as fig), but my developers report that it is finicky and occasionally loses track of linked containers. It also doesn't scale well and is really only suitable for very small projects.
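
For reference, here is a minimal docker-compose.yml sketch of the stack described in the question, one container per service; the build paths and service names are placeholders, not something taken from the question:

```yaml
# docker-compose.yml -- one container per service (illustrative sketch)
nginx:
  build: ./nginx        # reverse proxy + static resources
  ports:
    - "80:80"
  links:
    - frontend
    - backend
frontend:
  build: ./frontend     # Node app frontend
  links:
    - backend
backend:
  build: ./backend      # Node app backend / API
  links:
    - mongo
mongo:
  image: mongo          # database, official image
```

Each application keeps its own compose file, and `docker-compose up` starts that application's containers as a named group (the project name defaults to the directory), which keeps per-application container sprawl manageable even if compose itself is limited.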

Right now I think the best available solution is Kubernetes, a Google project. Kubernetes is also the basis of the latest version of OpenShift Origin, a PaaS platform, as well as Google Container Engine, and probably other things by now. If you're using Kubernetes, you'll be able to deploy to such platforms easily.
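
As a rough illustration of the Kubernetes model (all names and image references here are assumptions), you still run one process per container, but tightly coupled containers can be grouped into a pod; in practice the database would more often get its own pod behind a Service:

```yaml
# pod.yaml -- grouping the app's containers into a single Kubernetes pod (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: nginx
      image: myapp/nginx      # reverse proxy + static resources
      ports:
        - containerPort: 80
    - name: frontend
      image: myapp/frontend   # Node app frontend
    - name: backend
      image: myapp/backend    # Node app backend / API
    - name: mongo
      image: mongo            # database; usually a separate pod in practice
```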