This post is meant to catch up on the new projects appearing in the container space, like containerd, runc, Kata Containers, CRI-O, and rkt. How are they different from Docker, and where does Docker stand today?
To make things interesting, here's a timeline of the container space that I have put together. Let's go through the milestones to trace the evolution of containers up to the point where Docker appeared on the scene. After that, we'll come back to the original discussion about container runtimes apart from Docker. Here we go:
-> 1979 AD: The 'chroot' system call was introduced in Version 7 Unix. That marks the beginning of process isolation: segregating file access for each process.
-> 2000 AD: FreeBSD Jails were created. These were used by small shared-hosting providers to achieve isolation, with the option to assign an IP address and configuration to each jail.
-> 2001 AD: Linux VServer arrived, with capabilities similar to Jails, delivered as a patch to the Linux kernel.
-> 2004 AD: The first public beta of Solaris Containers was released, combining system resource controls with the boundary separation provided by zones.
-> 2005 AD: OpenVZ, an operating-system-level virtualization technology for Linux, which uses a patched Linux kernel for virtualization, isolation, resource management and checkpointing.
-> 2006-2007 AD: Google launched 'Process Containers' for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network) of a collection of processes. It was later renamed "Control Groups" (cgroups) and merged into Linux kernel 2.6.24.
-> 2008 AD: LXC, the first and most complete implementation of a Linux container manager, was released. Built on cgroups and Linux namespaces, it works on a vanilla Linux kernel without requiring any patches.
-> 2011 AD: Cloud Foundry created 'Warden', which could isolate environments on any operating system, running as a daemon and providing an API for container management.
-> 2013 AD: Let Me Contain That For You (LMCTFY) started as an open-source version of Google's container stack, providing Linux application containers. Active development later stopped in favor of Docker's libcontainer, which now lives on as part of runc under the Open Container Initiative.
-> 2014 AD: Enter "Docker". With Docker, containers exploded in popularity. It's no coincidence that the growth of Docker and the growth of container use go hand-in-hand.
Similar to Warden, Docker also used LXC in its initial stages and later replaced that container manager with its own library, libcontainer. But there's no doubt that Docker separated itself from the pack by offering an entire ecosystem for container management.
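The kernel building blocks behind that whole timeline, cgroups and namespaces, are still what every runtime below sits on, and you can inspect them directly. A minimal sketch, assuming a Linux machine with the standard /proc layout (the cgroup v2 paths in the comments are illustrative and need root):

```shell
# Every process belongs to a set of cgroups; the kernel exposes the
# membership of the current process here (Linux only):
cat /proc/self/cgroup

# The process's namespaces are visible as symlinks under /proc/self/ns:
ls -l /proc/self/ns

# Creating a cgroup and setting a limit is plain filesystem manipulation
# (cgroup v2 layout shown; usually requires root), e.g.:
#   mkdir /sys/fs/cgroup/demo
#   echo 50M > /sys/fs/cgroup/demo/memory.max
```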
So our story for today begins after 2014, when "container" effectively meant Docker and its adoption grew in leaps and bounds. The advent of container orchestrators/frameworks like Apache Mesos, Marathon and Kubernetes made deployments robust. By 2016 the container space was skyrocketing, and Docker decided to split its monolith into separate parts, some of which other projects could even build on. That's how containerd happened, in Docker v1.11.
Containerd is a daemon that acts as an API facade over container runtimes and OS facilities. When using containerd, you no longer work with syscalls; instead you work with higher-level entities like snapshots and containers, and the rest is abstracted away. If you want to understand containerd in more depth, there's design documentation in its GitHub repo: https://github.com/containerd/containerd
Under the hood, containerd uses runc to do all the actual Linux work.
What doesn't seem to be discussed as much is that Docker 1.11 introduced another separate component: containerd-shim. The shim is the parent process of every container started, and it also enables daemonless containers, so you can, for example, upgrade the Docker daemon without restarting all your containers (which used to be a big pain; yay!).
So what is Docker, actually, nowadays? Docker still provides a nice end-to-end experience when it comes to containers, especially for developers. Docker consists of several components; the one we are most familiar with, the user-facing daemon, is dockerd.
Unfortunately, that's not the end of the story. There are still many projects that we haven't even touched. Let's explore some more container runtimes:
CRI (the Container Runtime Interface) is getting a lot of publicity: it is the interface that decouples Kubernetes from the underlying runtime, and CRI-O is an OCI-compliant implementation of it. Into this you can plug, for example, containerd (through cri-containerd) or rkt; where CRI-O stands is nicely described in this blogpost. By default, though, it uses only runc.
rkt is a project originally created by CoreOS (which now belongs to Red Hat). It is probably the closest thing to an actual Docker competitor: rkt implements a modern, open, standard container format, the App Container (appc) spec, but it can also execute other container images, like those created with Docker.
Singularity is the de-facto standard container runtime in the academic world for running HPC (High Performance Computing) workloads. In HPC, resource scheduling is an essential feature that can considerably determine the performance of the system. These applications run a wide range of computationally intensive tasks in fields such as quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling, biological macromolecules, and physical simulations. Centers and companies running such workloads cannot take the risk of running Docker in their environments, because it simply does not fit their use case: they cannot afford a root-owned daemon process and the security flaws that arise from it.
Kata Containers is another interesting project: it provides a secure container runtime with lightweight virtual machines that feel and perform like containers, but provide stronger workload isolation using hardware virtualization technology as a second layer of defense. The Kata Containers community is stewarded by the OpenStack Foundation (OSF), which supports the development and adoption of open infrastructure globally.
Podman is an open-source project that is available on most Linux platforms. Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on your Linux system. Podman provides a Docker-compatible command-line front end that can simply alias the Docker CLI: `alias docker=podman`.
Containers under the control of Podman can be run either by root or by a non-privileged user. Podman manages the entire container ecosystem, including pods, containers, container images, and container volumes, using the libpod library. Podman specializes in all of the commands and functions that help you maintain and modify OCI container images, such as pulling and tagging, and it lets you create, run, and maintain containers built from those images in a production environment.
So that's the container scene now. Comment here with updates as you see it evolving and I will keep updating the post. Agreed, it's difficult to keep up, but evolution is good; let's see how far this goes!
What Is GitFlow?
GitFlow is a branching model for Git, created by Vincent Driessen. It has attracted a lot of attention because it is very well suited to collaboration and scaling the development team.
One of the great things about GitFlow is that it makes parallel development very easy, by isolating new development from finished work. New development (such as features and non-emergency bug fixes) is done in feature branches, and is only merged back into the main body of code when the developers are happy that the code is ready for release.
Although interruptions are a BadThing(tm), if you are asked to switch from one task to another, all you need to do is commit your changes and then create a new feature branch for your new task. When that task is done, just checkout your original feature branch and you can continue where you left off.
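That interruption workflow can be sketched with plain git commands. A self-contained sketch (run in a scratch directory; the branch and file names are made up for illustration):

```shell
set -e
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop

# Task 1: work happens on its own feature branch.
git checkout -q -b feature/search
echo "search stub" > search.txt
git add search.txt
git commit -q -m "WIP: partial search work"   # save your state before switching

# Interruption: start the new task on a fresh branch off develop.
git checkout -q develop
git checkout -q -b feature/urgent-task

# Later: check out the original branch and continue where you left off.
git checkout -q feature/search
```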
Feature branches also make it easier for two or more developers to collaborate on the same feature, because each feature branch is a sandbox where the only changes are the changes necessary to get the new feature working. That makes it very easy to see and follow what each collaborator is doing.
Release Staging Area
As new development is completed, it gets merged back into the develop branch, which is a staging area for all completed features that haven't yet been released. So when the next release is branched off of develop, it will automatically contain all of the new stuff that has been finished.
Support For Emergency Fixes
GitFlow supports hotfix branches - branches made from a tagged release. You can use these to make an emergency change, safe in the knowledge that the hotfix will only contain your emergency fix. There's no risk that you'll accidentally merge in new development at the same time.
How It Works
New development (new features, non-emergency bug fixes) is built in feature branches.
Feature branches are branched off of the develop branch, and finished features and fixes are merged back into the develop branch when they're ready for release.
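A minimal sketch of that feature cycle (scratch repository; branch and file names are illustrative):

```shell
set -e
git init -q gitflow-demo && cd gitflow-demo
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial commit"
git branch -M master                 # long-lived branch holding released code
git checkout -q -b develop           # long-lived integration branch

# Feature work happens on a branch off develop...
git checkout -q -b feature/login
echo "login" > login.txt
git add login.txt && git commit -q -m "add login feature"

# ...and is merged back into develop once it is ready for release.
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -q -d feature/login       # the finished feature branch can go away
```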
When it is time to make a release, a release branch is created off of develop.
The code in the release branch is deployed onto a suitable test environment, tested, and any problems are fixed directly in the release branch. This deploy -> test -> fix -> redeploy -> retest cycle continues until you're happy that the release is good enough to release to customers.
When the release is finished, the release branch is merged into master and into develop too, to make sure that any changes made in the release branch aren't accidentally lost by new development.
The master branch tracks released code only. The only commits to master are merges from release branches and hotfix branches.
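That release cycle looks roughly like this in git (scratch repository; names illustrative):

```shell
set -e
git init -q release-demo && cd release-demo
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial commit"
git branch -M master
git checkout -q -b develop
echo "feature" > feature.txt && git add feature.txt
git commit -q -m "finished feature"

# Branch the release off develop and stabilize it there.
git checkout -q -b release/1.0
echo "fix" > fix.txt && git add fix.txt
git commit -q -m "fix found during release testing"

# Ship: merge into master and tag; master only ever tracks released code.
git checkout -q master
git merge -q --no-ff -m "release 1.0" release/1.0
git tag -a v1.0 -m "version 1.0"

# Merge back into develop so the release fixes aren't lost.
git checkout -q develop
git merge -q --no-ff -m "merge release/1.0 back into develop" release/1.0
git branch -q -d release/1.0
```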
Hotfix branches are used to create emergency fixes.
They are branched directly from a tagged release in the master branch, and when finished are merged back into both master and develop to make sure that the hotfix isn't accidentally lost when the next regular release occurs.
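And the hotfix path, sketched the same way (scratch repository; names illustrative):

```shell
set -e
git init -q hotfix-demo && cd hotfix-demo
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "release 1.0"
git branch -M master
git tag v1.0                          # the tagged release that shipped

# Meanwhile, unreleased work piles up on develop...
git checkout -q -b develop
echo "wip" > wip.txt && git add wip.txt && git commit -q -m "unreleased work"

# ...but the hotfix branches straight from the tagged release,
# so it contains nothing except the emergency fix.
git checkout -q -b hotfix/1.0.1 v1.0
echo "patched" > bug.txt && git add bug.txt && git commit -q -m "emergency fix"

# Merge into master (and tag), then into develop so the fix isn't lost.
git checkout -q master
git merge -q --no-ff -m "hotfix 1.0.1" hotfix/1.0.1
git tag v1.0.1
git checkout -q develop
git merge -q --no-ff -m "merge hotfix/1.0.1 into develop" hotfix/1.0.1
git branch -q -d hotfix/1.0.1
```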