Use a Dockerfile instead of a batch script

Tags: bash, deployment, docker

I'm new to Docker and feeling my way around. My plan is to build a typical web app using Nginx + Rails + PostgreSQL, all of which will live in a single container. I'm not (currently) doing anything complex like linking containers.

I'm a lone developer, and my build process thus far is:

  1. Edit Dockerfile
  2. docker build
  3. Fix bugs, and if I like the outcome then commit the Dockerfile to a git repo.
  4. Iterate over steps 1-3 as I change the build.
  5. docker push my/image periodically, as useful versions emerge.
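
For context, here's roughly the kind of Dockerfile I mean (the base image, package names, and the start.sh entry point are just placeholders, not details of my actual setup):

    # Sketch only: base image, packages, and start.sh are placeholders.
    FROM ubuntu:22.04

    # Install the whole stack in one container: Nginx, PostgreSQL, Ruby for Rails.
    RUN apt-get update && \
        apt-get install -y nginx postgresql ruby-full build-essential && \
        rm -rf /var/lib/apt/lists/*

    # Copy the app and install its gems.
    WORKDIR /app
    COPY . /app
    RUN gem install bundler && bundle install

    EXPOSE 80
    CMD ["/app/start.sh"]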

Why instead would I not:

  1. docker pull a basic image e.g. Ubuntu
  2. docker run -t -i my/image /bin/bash
  3. wget -qO- http://git.host.com/installation-script.sh | bash
  4. If there are bugs, scrap the image and edit installation-script.sh to fix them.
  5. Iterate over 1-4.
  6. docker push my/image periodically, as above.
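
For comparison, roughly what that manual flow looks like (the script URL is the placeholder from above and build-box is an arbitrary container name; the docker commit at the end, which a Dockerfile build does automatically layer by layer, has to be run by hand):

    # On the host: pull a base image and start an interactive container.
    docker pull ubuntu
    docker run -t -i --name build-box ubuntu /bin/bash

    # Inside the container: install wget if the base image lacks it,
    # then fetch and run the install script.
    apt-get update && apt-get install -y wget
    wget -qO- http://git.host.com/installation-script.sh | bash
    exit

    # Back on the host: snapshot the container into an image by hand.
    docker commit build-box my/image
    docker push my/image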

I'm aware of the issues with 'wget shell-script | bash', but it would be more familiar to me.

Instinctively I feel that using a Dockerfile is the best way to go, but I'm not sure why. I think it would be useful for Docker beginners to understand why a Dockerfile is (or isn't) best practice. If I were deploying linked containers, would I realise the awesome power of the Dockerfile? Does a Dockerfile affect the "quality" (size, etc.) of the final image?

Best Answer

One great reason for doing as much work as you can in the Dockerfile is Docker's build cache. A practical example of this is skipping bundle install on Rails apps every time you test a new build.
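
As a sketch of how you'd lean on that cache for a Rails app (assuming the usual Gemfile/Gemfile.lock layout), copy only the gem manifests before running bundle install, so that layer is rebuilt only when the gem list actually changes:

    # Sketch: this layer stays cached until Gemfile or Gemfile.lock changes.
    WORKDIR /app
    COPY Gemfile Gemfile.lock /app/
    RUN bundle install

    # Copying the rest of the app afterwards does not invalidate the
    # cached bundle-install layer above.
    COPY . /app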

You might be able to emulate the caching using your script combined with a Docker volume, but you'd be stuck doing a lot of your own plumbing work.
