What cloud computing means for your applications

05.04.2016
I’ve written a number of times about the sea change in technology that is occurring and how it’s affecting enterprise IT organizations. Most recently, I wrote about how open source is eating the technology industry and how that will affect these organizations. 

It’s no exaggeration to say that IT is witnessing more change now than it has ever seen before. I expect more innovation – and turmoil – in the industry over the next five years than in the past 20. And all of that innovation has a common underpinning: cloud computing. Cloud computing is enabling – and driving – all of this innovation and disruption. From the perspective of IT, it’s important to understand what this implies for the most important activity IT undertakes: applications. Applications, after all, are where all the value of IT lies. Everything else is just an enabler.

So what does cloud computing mean for your applications?

Let’s start by looking at the canonical enterprise stack, circa 2010, as represented in Figure 1. 

The foundation of 2010 enterprise IT is legacy infrastructure. The key feature of legacy infrastructure is how slow and expensive it is. Everything takes forever – weeks or months to procure and install equipment, and that’s after the capital is obtained to purchase it. And because all of the processes associated with racking and stacking are manual, the infrastructure, once installed, is incredibly difficult to change – so it’s static.


The application tooling running on that legacy infrastructure is primarily proprietary software packages – think Java application servers and relational databases from IBM and Oracle. The processes used by application groups in this environment are slow and deliberate. ITIL is a common governing process, featuring change control boards and infrequent application modifications. Which is OK – slow application processes are masked because the underlying infrastructure takes so long to change. If you’re racing a turtle, you don’t have to be very fast to look good.

And the primary application interface is the browser. Used by people. For the most part, driven by stable workload processes. Like invoice processing. The user base doesn’t change much, the number of users doesn’t vary much, and the applications change infrequently.

So overall, a tightly aligned marriage of infrastructure, tooling and workloads. Everything slow-moving and stable. 

Of course, this overview sounds idyllic. In reality, there are always applications that don’t fit this environment very well: the externally facing website with huge jumps in traffic and user numbers during the holiday shopping season, for example. There are always business units that want to try an experiment but can’t, because by the time the experiment is built, the opportunity will have passed. And development and test – well, they’re always bellyaching because there’s no equipment available for them. But because of the primacy of the traditional applications, these unusual use cases are always treated as exceptions that don’t justify upsetting the current state of affairs.

What’s happening today is that these “exceptions” have become the rule. The relationship between companies and their customers has gone digital. Mobile applications are rapidly becoming the de facto way those relationships take place, with the Web taking a secondary interface role. Companies want to mine the vast amounts of data their digital interactions generate. And looming on the near horizon is the shift to machine learning and the Internet of Things.

Figure 2 depicts the new enterprise stack. The common foundation for all of these interactions and interfaces is cloud computing. The public cloud providers have changed everything about infrastructure expectations. The new assumption is that infrastructure will be immediately available, low-cost and scalable to whatever extent you need. Static is out the window, discarded in favor of agile. 

Many people assume the key challenge for enterprise IT groups is at the infrastructure level. Nothing could be further from the truth. The working assumption of all infrastructure consumers – i.e., developers, application groups, IT executives and business unit customers – is that infrastructure capability will meet the new normal: fast, cheap, and scalable. If the on-premises environment meets those requirements, fine. If it doesn’t, nothing in the world will persuade those consumers to stick with an inferior offering.

Instead, the key challenge for enterprise IT is to reconfigure the layer above infrastructure – the application tooling. We are going to see enormous change in the kind of applications that are built, the software components used to build them, and the processes by which they are delivered. Put bluntly, the infrastructure change affects certain portions of the IT operations groups; this change will affect everyone. 

I wrote about open source in “4 principles that will shape the future of IT,” but suffice it to say that everything interesting going on in software is based on open source. Proprietary software can’t innovate quickly enough, and it is unaffordable at the scale these applications require.

Beyond this, the core architecture of enterprise applications will have to change. Monolithic code bases running in proprietary application servers can’t change fast enough to keep up with “run the business” update requirements. This pace of change demands breaking applications up into collections of small, independently deployable services, aka microservices.
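To make that concrete, here is a minimal sketch of what one such independently deployable service might look like, using only the Go standard library. The invoice domain, the /invoices path and the port are illustrative assumptions on my part, not anything prescribed by a particular product:

```go
// A minimal, self-contained microservice sketch: one narrow business
// capability (invoices) exposed over HTTP, deployable and updatable
// independently of any other service. All names here are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Invoice is a hypothetical domain type owned entirely by this service.
type Invoice struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	http.HandleFunc("/invoices", func(w http.ResponseWriter, r *http.Request) {
		// In a real service this data would come from the service's own
		// datastore; a static response keeps the sketch self-contained.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode([]Invoice{{ID: "inv-1", Amount: 99.50}})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The point is less the code than the boundary: because the service owns a single capability behind a single interface, it can be rewritten, scaled or redeployed without convening a change control board over the whole monolith.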

The execution environments for those services will change as well. Virtual machines, notwithstanding their many virtues, are too large for distributed code components. In addition, their lengthy instantiation timeframes mean it’s hard to respond quickly enough to erratic application loads. The solution to these issues is to move to a different execution environment: containers. There is huge interest in containers within enterprise IT organizations, but until their usage moves from developer workstations to production environments, those organizations will not be able to meet the code deployment and execution speeds needed for microservice-based applications.
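The speed difference is not magic. A container is essentially an ordinary process launched into its own kernel namespaces, with no guest operating system to boot – which is why it can start in milliseconds rather than minutes. The following toy sketch (Linux-only, requires root, and deliberately omits the filesystem and resource isolation a real runtime adds) shows the core mechanism:

```go
// A toy illustration of the container primitive on Linux: start a normal
// process in new UTS and PID namespaces. Real container runtimes add
// mount, network and user namespaces, cgroups, and an image filesystem.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh") // an arbitrary process to "containerize"
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New hostname and PID namespaces isolate the child from the host.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```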


For all but the largest and most sophisticated IT organizations, trying to write their own orchestration (or scheduling) software for container-based microservice applications is much too challenging. Mainstream IT organizations will leverage a PaaS or container scheduling framework to manage their distributed applications. Again, these will be open source-based, because that will drive the fastest innovation and largest ecosystem for this critical application enabler.
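What these frameworks do, at their core, is run a reconciliation loop: compare the desired state of the application (how many replicas of each service should exist) against the actual state, and converge the two. The toy sketch below, with invented service names and replica counts, captures the idea:

```go
// A toy reconciliation loop, the core idea behind container schedulers:
// repeatedly compare desired replica counts with actual ones and take a
// step toward the target. All service names and numbers are invented.
package main

import (
	"fmt"
	"time"
)

func main() {
	desired := map[string]int{"invoice-svc": 3, "web-frontend": 2}
	actual := map[string]int{"invoice-svc": 1, "web-frontend": 4}

	for i := 0; i < 5; i++ { // a real scheduler loops forever
		for svc, want := range desired {
			have := actual[svc]
			switch {
			case have < want:
				actual[svc]++ // stands in for starting a container
				fmt.Printf("%s: scaling up %d -> %d\n", svc, have, have+1)
			case have > want:
				actual[svc]-- // stands in for stopping a container
				fmt.Printf("%s: scaling down %d -> %d\n", svc, have, have-1)
			}
		}
		time.Sleep(10 * time.Millisecond)
	}
}
```

The hard parts a real scheduler must handle – failure detection, bin-packing, rolling updates, networking – are exactly why writing this yourself is unwise, and why the open source ecosystems around these frameworks matter so much.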

The framework portion of this new application stack is simultaneously the most important and most difficult decision IT groups will make over the next two years. Important because the capabilities of this portion dictate whether these groups will be able to meet company and market requirements for application richness and update frequency. Difficult because all of the contenders in this space are of low to moderate maturity. Essentially, one has to bet on the outcome of a horse race while many of the entrants are still entering the starting gate.

And, of course, the tools can’t solve the process problem. Absent a restructuring of process, adopting containers or a framework is like dropping a bigger engine into a car with flat tires. One can expect enormous disruption in IT organizations as they seek to blend roles and groups in an effort to streamline application lifecycles. Some employees will resist this trend, while others will embrace it. Transformation is one of the most difficult tasks for leaders, far harder than improving the performance of an existing but suboptimal organization. Again, the opinion of participants is unimportant; the expectation is that application lifecycles must accelerate, and any roadblocks will be removed.

Unlike previous changes in IT, which tended to change one part of the people/process/technology triad while leaving the other aspects undisturbed, this shift is occurring in all three domains at once, which means an awful lot of balls in the air. However, the ongoing digital shift in business practices means that change cannot be deferred; there is a palpable sense in the air that business-as-usual is no longer sufficient. The bottom line is that you can expect enormous attention to be focused on the application tooling and process layer as IT organizations seek to handicap the field, place their bets, and prepare their staff to deal with the outcome.


Bernard Golden
