My apologies, I am not familiar with OpenShift, but in a vanilla Docker environment I would suggest you create a new image from a modified container that has the changes you want, then run all new containers from that image. You can quite easily create another image, based on the one you are currently using, which has your modified /etc/passwd as well as any other tight controls you want to impose.
For example (I'm using ubuntu:latest, since you didn't mention which image you are actually running from). First, run a new container based on the image of choice and execute just one command to create a new user and add it to the sudo group:
$ docker run --name test ubuntu:latest useradd -u 1234 -G sudo newuser
Now, create a new image based off the modified image in the new container:
$ docker commit test test-image
<image id>
Now you have a new image called "test-image" you can run new containers from. Run a new container based on that image to see if the new user is in /etc/passwd:
$ docker run --name test2 --rm test-image cat /etc/passwd | grep 1234
newuser:x:1234:1234::/home/newuser:
From now on, every container run from "test-image" (and every image based on it) will have 'newuser' as a valid user, already in the /etc/passwd file and part of the sudo group.
If you want to do more detailed customization, use the following instead to run your initial container, and when you are happy exit and commit as shown above:
$ docker run --name test -it ubuntu:latest
Note that simply adding the user to the sudo group may not be sufficient to let it run commands as root. Run 'visudo' and modify the sudoers configuration as appropriate. The stock ubuntu image used above needs the sudo package (and an editor, for visudo) installed before it will allow sudo'ing.
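As an alternative to the commit workflow, the same changes can be baked into a Dockerfile. This is only a minimal sketch under my assumptions (stock ubuntu package names; the passwordless-sudo rule is one option among several, included here because the container has no password set for the new user):

```
FROM ubuntu:latest

# Install sudo, plus an editor in case you want to run visudo later
RUN apt-get update && apt-get install -y sudo vim

# Create the user with a fixed UID and add it to the sudo group
RUN useradd -u 1234 -G sudo -m newuser

# Optionally let the sudo group run commands without a password
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/nopasswd
```

Building this with `docker build -t test-image .` gives you the same "test-image" as above, without the intermediate container, and the recipe is version-controllable.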
I have an answer that is suitable for my current understanding of Docker. I was advised in the comments to try Minikube, and although undoubtedly this can be spun up quickly, I feared that this would be a rabbit-hole of learning that would get me stuck in tar for weeks. One of my engineering principles is to know when one has reached a cognitive limit for stuffing in new information!
Thus, I set out to resolve this problem in a simple fashion. I had two choices:
- Use the container auto-delete feature in Docker, and set up my own restart system
- Use the Docker restart policy, and set up my own container deletion system
I started on the first of these, with the idea that the process supervisor Monit would be nice to use, partly because it is lightweight, and partly because I am familiar with it. However, it started to feel like the wrong solution, since I'd be working around the core problem that Monit cannot cleanly get a list of Docker container processes.
In fact, the second option was much cleaner, and this was amplified by the fact that stopped-container clean-up is not actually a priority - it is just to keep things tidy. Of course, I used Docker for this; here's the Dockerfile:
# Docker build script for Docker Tidy
FROM alpine:3.6
RUN apk update && apk add docker
# See this for BusyBox cron schedules
# https://gist.github.com/andyshinn/3ae01fa13cb64c9d36e7
COPY bin/docker-tidy.sh /etc/periodic/daily/
RUN chmod +x /etc/periodic/daily/docker-tidy.sh
# Start Cron in the foreground
ENTRYPOINT ["crond", "-l", "2", "-f"]
And here's bin/docker-tidy.sh:
#!/bin/sh
#
# With thanks to:
# http://www.doublecloud.org/2015/05/simple-script-to-list-and-remove-all-stopped-docker-containers/
# Guard against an empty list, which would make 'docker rm' error out
EXITED=$(docker ps -a -q -f status=exited)
if [ -n "$EXITED" ]; then
    docker rm -v $EXITED
fi
Finally, one drawback with my solution is that if the host is rebooted prior to a stopped container cleanup, those containers seem to restart as well. I therefore reset the restart policy on those containers prior to starting new ones.
For example, here is how I start the Docker Tidy container itself on the host, after building the image with `docker build -t docker-tidy .`. In practice I've tidied up the policy-change code into its own script, but this will give the general idea:
#!/bin/bash
# Removes the restart policy from previous containers
CONTAINER_LABEL=docker-tidy-instance
docker ps --all --filter label=$CONTAINER_LABEL --quiet | xargs --no-run-if-empty docker update --restart no
docker run \
    --label $CONTAINER_LABEL \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --detach \
    --restart always \
    docker-tidy
Best Answer
I had a similar issue. I was unable to kill process 1, so I had to run my process under another parent process instead. I chose to use a bash process with a restart loop.
I am using Docker Compose, so my container's command ended up looking something like this:
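The original snippet was not preserved here, so what follows is only a sketch of the shape such a Compose file takes, under my assumptions: `my-app-image` and `my-app` are hypothetical names standing in for the real image and main process.

```
# docker-compose.yml fragment; "my-app-image" and "my-app" are hypothetical
services:
  app:
    image: my-app-image
    command: bash -c "while true; do my-app || true; sleep 1; done"
```

The `|| true` keeps the loop alive when the main process exits non-zero, and the `sleep` avoids a tight restart spin. Bash, as PID 1, survives each restart, so the container never stops.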
Something similar is possible when using Docker directly, by passing the same bash restart loop as the container's command.