You built a cloud and now they want containers, too

27.08.2015
You built a private cloud at great expense and, despite the initial cost, real savings are being made. And even though you thought the cloud was just what your development teams wanted, they are now clamouring for containers. Why?

In common with most enterprises, you probably justified the investment in your cloud from an Infrastructure perspective, with an emphasis on increasing utilization of physical hardware. Average utilization before virtualization was often below 10%, and virtualization, as an enabler of workload consolidation, has been a critical tool in ensuring that money spent on hardware is not wasted.

But – and it is a big but – typical enterprise private clouds offer little beyond cost savings and accelerated (virtual) machine delivery to the development teams who consume them. These are certainly valuable, but they fall rather short of the full promise of cloud.

What development teams are really looking for is a platform with APIs they can use to directly attach their automated software lifecycle tools. To quote one development lead: “I just don’t want to talk to Infrastructure.”

However, what Infrastructure typically gave them instead was just as constrained as the old physical world – with burdensome processes, limited lifecycle automation, the same old problems with patches and absolutely no tooling integration.

In short, servers were now virtual, but most of the old pain points for developers still existed – and Infrastructure functions continued to be characterised as a blocker rather than an enabler of change in the development world.

Enter containers.

To quote Wikipedia, a container is “any device … that can be used to contain, store, and transport objects or materials.” While this applies just as well to wicker baskets as software containers, the key thing for IT is how containers differ from virtual servers.

Containers enable development teams to package their software, along with its resolved dependencies, into a single artifact. Containers require a host operating system in order to run – but multiple containers can run on a single OS instance while maintaining logical isolation from each other (you no longer need an OS license per application component instance to get that isolation).

As containers don’t each need their own OS, they start as fast as the wrapped application software itself (no waiting for an OS instance to boot) and, because they include resolved software dependencies, instantiating one on a server is merely a matter of copying the container over and starting it up. The repeatability and abstraction that containers provide let developers concentrate on delivering easily deployable, functioning software while someone else provides a managed platform for those containers to run on.
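To make this concrete, here is a minimal Dockerfile sketch for packaging a simple Python web service; the base image, file names and port are illustrative assumptions rather than a prescription:

    # Package the application and its resolved dependencies into one artifact.
    # Assumes a hypothetical app.py and requirements.txt next to this file.
    FROM python:3
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt    # dependencies resolved at build time
    COPY . .
    EXPOSE 8000
    CMD ["python", "app.py"]               # the container starts when the app starts

The resulting image is the single deployable artifact described above: everything the application needs, resolved once at build time rather than at deployment time.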

The concept of containers isn’t new. Google has been using their own variety for years (they say that everything at Google runs in containers), Sun introduced a form of containers in Solaris in 2004/2005, and containers have even been available on Windows through products such as Parallels Virtuozzo.

What is new, however, is the shift to thinking of containers as being a developer (rather than Infrastructure) technology and, critically, the emergence of software such as Docker, which provides a single container format that can operate across multiple hardware and OS types.
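In practice, that portability looks something like the following; the image name and registry are hypothetical, but the same commands work on any machine running a Docker engine:

    # Build the image once from a Dockerfile such as the sketch above.
    docker build -t example/myapp:1.0 .

    # Push it to a registry so other hosts can pull the identical artifact.
    docker push example/myapp:1.0

    # Run it on any host with a Docker engine; the container behaves the
    # same wherever it lands, with no per-host installation steps.
    docker run -d -p 8000:8000 example/myapp:1.0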

Enthusiasm in the developer community is high, and both Docker and standardization efforts such as the Open Container Initiative continue to evolve at pace. However, management tooling for large-scale container deployments (such as Kubernetes) is only just emerging for general use, and it certainly has not yet reached the maturity of the tooling available for server virtualization.

Does this mean that containers should be avoided for now?

No. Containers offer benefits both to Infrastructure (further workload consolidation, potentially with a reduction in OS license count) and to development (a single deployable artifact that runs wherever it is put and starts instantly – especially important for those building dynamic scale-out applications). Containers are complementary to server virtualization and will not (and should not) displace it.

What enterprises should be doing, however, is building partnerships across Infrastructure and development teams to pilot the use of containers on top of robust virtualization platforms. Start small, and evolve the hosting platform, the management tooling and, critically, the overall process together. Waiting just means that more proactive competitors will get the productivity, time-to-market and cost-reduction advantages first.

As Principal Consultant for Virtual Clarity, Chris Buckley helps large and complex organizations modernise their approach to IT Infrastructure. Working with IT organizations to identify the right problems to solve from a business perspective, Chris leads clients through development and implementation of infrastructure strategy.

(www.networkworld.com)

By Chris Buckley, Principal Consultant, Virtual Clarity
