How Docker can transform your development teams

31.08.2015
Waiting for the right build has been a perennial problem with test environments, and differences between development, test and production have allowed defects to escape into production. Virtual machines address these problems by giving each environment its own copy of the system, but they can be slow and consume gigabytes of disk space.

Enter Docker, a lightweight, fast virtualization tool for Linux. 

First, anyone on the technical staff can create a test environment on the local machine in a few seconds. The new process hooks into the existing operating system, so it does not need to “boot.” With a previous build stored locally, Docker is smart enough to pull only the difference between the two builds.
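As a rough sketch of what that looks like (the registry address, image name and tag here are hypothetical), a tester could pull the latest build and have it running locally with two commands:

   # Pull the new build; only the layers that changed since the last pull are downloaded
   docker pull registry.example.com/myapp:1.4.2

   # Start a disposable container from that image; there is no operating system to boot
   docker run -d -p 8080:80 --name myapp-test registry.example.com/myapp:1.4.2

A few seconds later the build is answering on port 8080 of the local machine.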

This kind of simplicity is common for teams that adopt Docker; if the architecture extends to staging and production, then staging and production pushes can also be this simple. 

Another slick feature is the capability to create an entirely new virtual infrastructure for your server farm, consisting of a dozen virtual machines, called the “green” build. Any final regression testing can occur in green, which is a complete copy of production. When the testing is done, the deploy script flips the servers, so green is now serving production traffic. The previous build, the “blue” build, can stick around – in case you need to roll back. That's called blue/green deployment, and it’s possible with several different technologies.

Docker just makes it easy. 
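Here is a rough sketch of the flip using plain Docker commands; the container names, image tag and ports are hypothetical, and the actual traffic switch happens at whatever load balancer or proxy sits in front:

   # Bring the "green" build up alongside the running "blue" build
   docker run -d -p 8082:80 --name web-green registry.example.com/myapp:1.5.0

   # Run the final regression suite against green on port 8082, then repoint
   # the load balancer (or proxy entry) so production traffic hits green.

   # Blue keeps running untouched; rolling back means pointing traffic at it again.
   docker stop web-blue    # only once you are sure you will not roll back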

Where Windows-based software compiles to a single installer, Web-based software has a different deliverable: the build running on a server. Classic release management for websites involves creating three or four layers: development, test, production and sometimes a staging environment. The strategy calls for at least one server per layer, along with a set of promotion rules. When the software is ready for promotion, the build is deployed to the server at the next level.

Virtual machines changed all that, allowing a single physical server to host as many virtual servers as the team has members. That allowed each branch to be tested separately, then merged into the mainline for final testing, without spending tens of thousands of dollars on new hardware. Having a virtual machine each also makes it possible for a developer to debug a production problem on a local machine while a tester re-tests a patch to production on a second machine. One tester checks for regressions in the release about to go out, while another five testers test features for the next release and five developers work on new features in new branches.


The problem with virtual machines is size and speed. Each VM contains an entire guest operating system, and creating a virtual machine means allocating gigabytes of space, installing an entire new operating system, then installing the "build" onto that operating system. Even worse, the guest operating system runs in application space on your computer; it is, in effect, an operating system running inside the host operating system. The boot/install process for a virtual machine can take anywhere from several minutes to an hour, which is just enough to interrupt flow. Technical staff will likely only be able to host one or two virtual machines on a desktop without a serious loss of speed, and trying to get virtual machines created on the network on demand is an entire "private cloud computing" project.

Instead of running in application space, Docker containers share the host's kernel. In other words, Docker makes itself a part of the operating system. Running in the operating system does limit Docker to modern Linux kernels, for both the host machine and the containers, but it also massively simplifies the operating system's task switching. Sharing the kernel eliminates many of the redundancies a typical VM carries (the host needs one kernel, not one per container) and means that Docker containers do not “boot up,” because they are already up.

All this combines to make Docker an incredibly fast way to create machines – machines that are exact copies of what will go into production, based on disk images … not a patch to an existing server. 
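You can see that speed for yourself; the ubuntu image used here is a public base image that will be pulled from Docker Hub on the first run, after which a fresh container starts in well under a second:

   # Start a container, run one command, remove the container on exit
   time docker run --rm ubuntu echo "hello from a container"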

The capability to stop and save a container in a broken state, then debug it later, makes debugging much easier under Docker. If the debugging destroys the environmental condition, or “dirties” the environment in some way, restoring the broken state is trivial. Docker is also capable of running any application on any Linux server; the quick startup and disposable nature of containers makes it fantastic for things like batch processing.
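A rough sketch of that save-and-debug workflow, assuming a failing container named myapp-test (the names and tag are hypothetical):

   # Freeze the broken container as an image you can come back to later
   docker commit myapp-test myapp:broken-2015-08-31

   # Later, start a fresh copy of the broken state and poke around inside it
   docker run -it myapp:broken-2015-08-31 /bin/bash

   # If the debugging session dirties the environment, just exit and run the same
   # command again; the saved image is untouched, so every new container starts
   # from the original broken state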


There are tools that help you configure, and even simulate, entire infrastructures made of Docker containers, making life easier for the team. The most popular is Docker Compose, which can reduce what used to be an ultra-complex setup process to a single command.
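As a minimal sketch (the image names, ports and credentials are hypothetical), a web application plus its database might be described in a single docker-compose.yml like this:

   # docker-compose.yml
   web:
     image: registry.example.com/myapp:1.4.2
     ports:
       - "8080:80"
     links:
       - db
     environment:
       - DATABASE_URL=postgres://app:secret@db:5432/app
   db:
     image: postgres:9.4
     environment:
       - POSTGRES_PASSWORD=secret

With that file in place, "docker-compose up -d" brings up the whole stack, and "docker-compose stop" shuts it down again.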

Docker on your local machine and a couple of cloud servers is one thing; making it production-ready is a different matter entirely. The early days of Docker were like the Wild West when it came to production. The commonly thrown-around phrase is "container orchestration," the practice of taking Dockerized apps and services and scheduling them onto clusters of compute resources. That means organizations don't care where the containers are running, just that they’re running and serving the right requests, whether that be web traffic, internal services and databases, or messaging queues.

Today’s big players in orchestration are AWS EC2 Container Service, Docker Swarm and Mesos. Typically, orchestration services can manage containers well, but they may also come with other bells and whistles like blue/green deploys, container healing, load balancing, service discovery and inter-container networking.

When evaluating Docker for production, there are certainly other challenges, like logging and environment-variable configuration. One great place to start when deciding whether you are ready to move toward Docker is to see how close your application is to the 12-Factor App methodology.
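Environment-variable configuration is the 12-factor approach to keeping one image per build: the image never changes between environments, only the variables passed into it do. A sketch, with hypothetical image and variable names:

   # Same image, different configuration per environment
   docker run -d -e DATABASE_URL=postgres://db-test/app -e LOG_LEVEL=debug registry.example.com/myapp:1.4.2
   docker run -d -e DATABASE_URL=postgres://db-prod/app -e LOG_LEVEL=warn  registry.example.com/myapp:1.4.2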

Don Taylor's tutorial on Docker at CodeMash walked the audience through installing Docker on a Linux machine, creating a container and executing commands on that container. Best of all, the labs are on GitHub for you to follow along.

So install a Linux virtual machine, put Docker inside it, explore how to create containers, and decide for yourself if this is a technology worth using in your organization. 
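If you want a concrete starting point, here is one rough path; the package name below is the one Ubuntu ships in its own repositories, and other distributions will differ:

   # Install Docker on an Ubuntu virtual machine
   sudo apt-get update && sudo apt-get install -y docker.io

   # Verify the install, then create your first interactive container
   sudo docker run hello-world
   sudo docker run -it ubuntu /bin/bash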

Jared Short contributed to this article.

(www.cio.com)

Matthew Heusser
