Docker is an application container, focused on distributing apps as containers. The app/process running inside the container is the only one running; it is the container's init process. If your app needs more services (a MySQL or MongoDB database, etc.), you don't launch them inside the same Docker container under a single init daemon; you launch additional Docker containers, one per service.
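For instance, an app that needs MySQL would be run as two containers rather than one (the image and container names here are made up; this assumes a running Docker daemon):

```shell
# Run the database in its own container...
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret mysql

# ...and the app in a second container that talks to it over the
# network, instead of both sharing one init daemon in one container:
docker run -d --name myapp --link mydb:db myapp-image
```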
You can try http://phusion.github.io/baseimage-docker/ which provides a simple init daemon for running multiple processes inside one Docker image (it is Ubuntu based), or you can try LXD http://www.ubuntu.com/cloud/lxd for container technology used more like a system container.
What's the conventional wisdom regarding LXC and RHEL-like systems today?
Personally, I find the current setup somewhat lacking. LXC seems more at the forefront -- certainly more maintained.
How are you implementing them?
In terms of offering it as a virtualization option, I am not. I find the current technological setup lacking:
- No user namespaces.
- Certain mountpoints are not namespace aware (cgroups, selinux).
- Values in /proc are misleading system globals that don't account for the resource partitioning in namespaces.
- Breaks audit.
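The /proc point is easy to demonstrate: even if you drop a process into a memory-limited cgroup, reading /proc/meminfo from inside it still shows the host's totals, which is what confuses tools like free and top in containers:

```shell
# /proc/meminfo is a system-wide view; it does not reflect any cgroup
# memory limit placed on the reading process:
grep MemTotal /proc/meminfo
```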
I do find it a really nice tool for application-level containment, however. We use namespaces and cgroups directly to contain network and IPC resources for certain user-run web applications, and we provide our own interface to control it. In RHEL7 I'm considering moving this functionality to libvirt-lxc, as the newer revisions of libvirt support the concept of user ACLs.
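A rough sketch of what that kind of direct containment looks like with stock tools (the webapp path and appuser are placeholders; this needs root, iproute2, and util-linux):

```shell
# Give the app its own network namespace (empty except loopback) and
# its own IPC namespace, so it can't reach host interfaces or SysV IPC:
ip netns add webapp
ip netns exec webapp ip link set lo up
ip netns exec webapp unshare --ipc -- \
    runuser -u appuser /usr/local/bin/webapp
```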
For virtualization in terms of a fully initialized system, I'm waiting to see what is offered in RHEL7, but in all honesty I feel we might only see a good-enough solution in a later minor release of RHEL7, and then perhaps only in a technology-preview state.
Keep your eye on systemd-nspawn; something tells me that in the next 18 months or so it might take its place as the best tool for fully contained Linux virtualization, even though the systemd authors make it clear it's not secure right now! I wouldn't be surprised if libvirt eventually drops libvirt-lxc and just offers a wrapper around systemd-nspawn with systemd slices defined.
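For what it's worth, playing with nspawn today is as simple as pointing it at a directory tree (the path and release version below are just examples; needs root):

```shell
# Bootstrap a minimal Fedora tree, then boot it as a container;
# -b boots the tree's own init (systemd), -D names the root directory:
yum -y --installroot=/srv/fedora --releasever=20 groupinstall minimal
systemd-nspawn -bD /srv/fedora
```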
Also, be wary: there's been a lot of talk over the last 6 months about re-implementing cgroups as a kernel programming interface rather than a filesystem interface (perhaps using netlink or something; I haven't checked), so systemd should be very hot on the tail of getting that right quickly.
Are there any advantages to one approach versus the other?
I think the LXC option (not libvirt-lxc) is better maintained. Having read the libvirt-lxc source code, it feels rushed. Traditional LXC certainly has newer features which have been better tested. Both require a degree of compatibility from the init system being run in them, but I suspect you'll find LXC slightly more "turn-key" than the libvirt-lxc option, particularly in regard to getting distros to work in them.
Can these coexist?
Sure; remember that for all intents and purposes both are doing the same thing: organizing namespaces, cgroups, and mount points. All the primitives are dealt with by the kernel itself. Both LXC implementations just offer a mechanism for interfacing with the kernel options available.
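You can see those primitives directly: a process's namespace memberships are just kernel objects exposed as symlinks under /proc, and either tool is ultimately creating and joining them:

```shell
# Each entry under /proc/<pid>/ns/ identifies one kernel namespace
# object; two processes in the same namespace see the same inode:
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
```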
Best Answer
Kernel integration isn't just about addressing a desirable feature, but more about making minimally intrusive changes with little downside in the way of performance, code quality, complexity, and future compatibility. Politics are also involved on both sides, and a good relationship with established developers helps get long term commitment and constructive reviews.
It looks like the LXC project figured it out. That said, I don't know the specifics of why previous projects like OpenVZ and linux-vserver didn't get in. Those projects at least provided some experience, justification, and maybe code that proved useful to the goal of mainline lightweight containers.