With the hype around containers, and Docker in particular, many new Linux distributions have been created specifically for running containers. Compared to traditional operating systems, these distributions offer some real benefits. In this article we compare five popular and promising distributions.
Overview: Docker Operating Systems
There are several benefits to using a Linux distribution that is designed for running containers:
- Small footprint → you only want a minimal OS
- Minimal tooling overhead
- Atomic updates
- Most of them are clustered by default
- The Docker daemon runs automatically
- Read-only root file system
- Rollback of atomic updates (dual-partition update scheme)
- Improved stability and security
For this comparison I chose five different Linux distributions: CoreOS, Project Atomic, Snappy Ubuntu Core, RancherOS and VMware Photon.
Each of these distributions uses cloud-init, a multi-distribution package that handles the early initialization of cloud instances. It is important to have a close look at each distribution's cloud-init, as the supported options differ.
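As a rough illustration, a minimal cloud-config might look like the sketch below. The hostname and SSH key are placeholders, and the exact directives vary per distribution (CoreOS, for example, adds its own `coreos:` section), so check the respective documentation before relying on a particular key:

```yaml
#cloud-config
hostname: container-host-01
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@example.com
```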
CoreOS

CoreOS is a new Linux distribution that has been re-architected to provide features needed to run modern infrastructure stacks. The strategies and architectures that influence CoreOS allow companies like Google, Facebook and Twitter to run their services at scale with high resilience.
By default CoreOS is pre-configured with tools to run Linux containers and to operate in a distributed setup. A key feature is that the container runtime (e.g. Docker) is automatically configured on each CoreOS machine. CoreOS also provides automatic OS updates, which means you get all updates by default and don't have to worry about running old versions.
On CoreOS you can use Docker with fleet. Fleet is a distributed init system that presents your entire cluster as a single init system. You start fleet units using extended systemd unit files, which lets you run distributed containerized apps.
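To sketch what that looks like: a fleet unit is essentially a systemd service file with an optional `[X-Fleet]` section for scheduling hints. The unit below is illustrative (name and image are made up); it runs a container and asks fleet not to schedule two instances on the same machine:

```ini
# hello@.service – a fleet unit template (illustrative example)
[Unit]
Description=Hello World container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello-%i
ExecStart=/usr/bin/docker run --name hello-%i busybox \
  /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello-%i

[X-Fleet]
Conflicts=hello@*.service
```

You would then start an instance on the cluster with `fleetctl start hello@1.service`.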
A major benefit of running CoreOS is etcd, a distributed key-value store used by many projects such as Kubernetes and Cloud Foundry. You can use etcd for simple service discovery and much more. Major cloud providers also offer CoreOS support.
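For example, simple service discovery with etcd boils down to writing and watching keys. The commands below assume a running cluster with the etcdctl client; the key names and values are made up for illustration:

```
# register a service instance under a well-known prefix
etcdctl set /services/web/instance1 '{"host": "10.0.0.12", "port": 8080}'

# any node in the cluster can look it up
etcdctl get /services/web/instance1

# react to instances coming and going
etcdctl watch --recursive /services/web
```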
Project Atomic

Project Atomic integrates the tools and patterns of container-based application and service deployment with trusted operating system platforms to deliver an end-to-end hosting architecture that’s modern, reliable, and secure.
There are independent Atomic releases for the Red Hat family members Fedora, RHEL and CentOS, so if you are familiar with one of these you can use it as an Atomic host as well. On an Atomic host, Red Hat replaces yum with rpm-ostree, which manages the OS packages with atomic updates, meaning you can roll back to a previous tree. You can also create your own custom images for Atomic with the rpm-ostree-toolbox.
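In practice, the atomic update workflow looks roughly like this (a sketch, to be run on an Atomic host):

```
# fetch and deploy the latest tree; it becomes active after a reboot
sudo rpm-ostree upgrade

# show the available deployments and which one is booted
sudo rpm-ostree status

# switch back to the previous tree if the update misbehaves
sudo rpm-ostree rollback
```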
One major benefit of Project Atomic is the mature environment of Red Hat. You are able to use tools like SELinux, Kickstart, Anaconda etc. on your Atomic Host.
Snappy Ubuntu Core

Snappy Ubuntu Core is a new rendition of Ubuntu with transactional updates – a minimal server image with the same libraries as today’s Ubuntu, but applications are provided through a simpler mechanism. The snappy approach is faster, more reliable, and lets us provide stronger security guarantees for apps and users — that’s why we call them “snappy” applications.
Ubuntu Snappy supports Canonical’s AppArmor kernel security system for delivering human-friendly security. This means Snappy lets you isolate applications from one another completely. Snappy is easily extensible with frameworks like Docker – after adding a framework to your Snappy system you can run apps on it.
All OS and application files in Snappy are kept separate, as read-only images. This makes updates on Snappy easy and predictable, and with delta management Snappy also keeps the size of downloads minimal. Of course Snappy provides rollbacks for both system and application updates.
By default Snappy doesn’t ship with Docker, but you can easily install Docker as a framework to run apps – in this case Docker containers. The installation is pretty simple:
sudo snappy install docker
When I was trying out Docker on Snappy I couldn’t start any containers because of a permission denied error. The bug has already been reported on Launchpad.
RancherOS

When we started the RancherOS project, we set out to build a minimalist Linux distribution that was perfect for running Docker containers. We wanted to run Docker directly on top of the Linux Kernel, and have all user-space Linux services be distributed as Docker containers. By doing this, there would be no need to use a separate software package distribution mechanism for RancherOS itself.
Basically, RancherOS is an OS made of containers. It runs the Docker daemon as PID 1, which means the Docker daemon is the first process started by the kernel. Another interesting fact is that RancherOS runs two Docker daemons: one for the system (System Docker) and one for the user (User Docker), which itself runs within the System Docker. The System Docker is responsible for initiating all system services such as udev, DHCP and the console. So instead of systemd, SysVinit or Upstart, RancherOS uses Docker as an init system and manages all system services as Docker containers.
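On a RancherOS machine you can talk to both daemons directly – `system-docker` for the system services and the regular `docker` command for your own containers (a sketch, to be run on RancherOS):

```
# list the system services running as containers (udev, dhcp, console, ...)
sudo system-docker ps

# system services can be managed like any other container
sudo system-docker restart console

# the plain docker command talks to the User Docker
docker ps
```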
One of the reasons RancherOS comes with two Docker daemons is that even if you accidentally stop or delete all containers in the User Docker, your system keeps working.
As everything runs in Docker containers, you always get the latest Docker version with RancherOS. You can easily extend RancherOS by running additional system containers, e.g. your own console container to access your favorite console. Thanks to the small footprint there is less to monitor for security vulnerabilities, which means fewer patches and increased stability.
Since all system services are delivered as Docker containers, you don’t need any package management tools such as apt-get or yum. Although the kernel and initrd are not Docker containers, RancherOS uses Docker packaging and distribution to deliver kernel and initrd updates as well.
RancherOS can also be considered a solution for embedded systems and IoT devices.
VMware – Photon
Photon is a technology preview of a minimal Linux container host. It is designed to have a small footprint and boot extremely quickly on VMware platforms. Photon is intended to invite collaboration around running containerized applications in a virtualized environment.
VMware jumped on the bandwagon of creating a new OS for containerized apps. VMware Photon is a minimal Linux container host optimized for vSphere. Photon supports all common container formats, including Docker, rkt (Rocket) and the Pivotal Garden container specification, which is based on Cloud Foundry’s Warden. VMware’s Photon also comes with efficient lifecycle management, including a yum-compatible package manager.
A major benefit of using Photon is VMware’s Lightwave. Lightwave offers centralized identity management for authentication and authorization and supports many open standards such as LDAP, Kerberos, SAML and OAuth 2.0. Basically, Lightwave brings an additional layer of container security into your environment. The figure above shows how Lightwave supports centralized identity and access management.
| | CoreOS (647.0.0) | RancherOS (0.23.0) | Atomic (F 22) | Photon | Snappy (edge – 145) |
|---|---|---|---|---|---|
| Package manager | None (Docker/Rocket) | None (Docker) | Atomic | tdnf (tyum) | Snappy |
| Tools | Fleet, etcd | – | Cockpit (Anaconda, Kickstart), atomic | – | – |
As so often in computer science, there is no silver bullet. Your best choice highly depends on your project: with a greenfield project you may go with new technologies, while a brownfield project may require proven ones. This does not mean that both cannot co-exist. You also have to know what you need and what your team is familiar with. Don’t forget that some of these technologies are still in their infancy and have issues you might not want in production. Another point to consider is that these projects are under constant development, so updates might break some of your processes.
Time will tell how these new operating systems will affect the Server/Cloud/Datacenter world. I think for now proven OSes and these new OSes will exist side by side depending on the workload and your environment.
In conclusion, there are many great choices for running your Docker infrastructure. And don’t forget: you can also run your containers on a good old-school OS – there’s no need to switch everything right now.
Looking for another perspective? Codeship recently published a similar comparison, additionally taking into account Mesosphere’s DCOS. Head over to their blog to find out what they think about it.
You might also be interested in our step-by-step tutorial about Docker plugins. If you’re new to Docker we recommend having a look at our introduction to easy containerization and reading our three compelling reasons to use Docker with Google Container Engine.