The ironic history of the hybrid cloud

06.11.2015
I was asked to do a talk on the history of the hybrid cloud, so I figured I’d better do my homework, and I was surprised at what I found. I think how all this came about, and how early it started, is fascinating. What follows is mostly my script from that talk; I hope you find it as interesting to read as I did to research and write.

The underlying concept of cloud computing goes back to the 1960s and the idea of an “Intergalactic Computer Network” proposed by Joseph Carl Robnett Licklider, known as Lick. Lick had an unusual background as both a psychologist and a computer scientist and was referred to as computing’s Johnny Appleseed. His concept of an “Intergalactic Computer Network,” developed at the Advanced Research Projects Agency (ARPA) created under Eisenhower, became the basis for ARPANET, which in turn became the Internet.

It is interesting to note that the frame of reference for this was the mainframe, which was the prevalent form of enterprise computing at the time. Ironically, in many ways cloud computing actually evolved from core concepts that are very mainframe centric. By the way, it may come as a surprise that the mainframe, which was declared dead back in the 1980s, is growing at 20 percent year over year according to IBM’s latest financials [Disclosure: IBM is a client of the writer].

However, Licklider’s vision went well beyond the initial Internet, which was more about communication. His vision was for everyone on the globe to be interconnected and able to access programs and data at any site from anywhere. This is basically the heart of cloud computing, an old visionary concept made real this century. Since those early days the Internet has evolved toward this vision, with the most recent evolution being Web 2.0.

Likely the biggest step toward the current concept we call cloud computing was the birth of Salesforce.com in 1999. It showcased the concept of delivering enterprise applications over the Internet then, the cloud now, from a simple website, and it was a huge success.


While most of the traditional firms watched Salesforce.com with interest, it wasn’t until the launch of Amazon Web Services (AWS) in 2002 that it became clear we were in the midst of major change, and things got scary. Up until then a large number of firms had attempted to scale out their IT services ‘as-a-service’ and failed spectacularly; the last, and perhaps largest, before Amazon was Intel. But Amazon didn’t fail; it was massively successful. Amazon’s services included storage, computation and even human intelligence through an interesting service called Amazon Mechanical Turk (basically a tightly targeted temp service for remote workers on the Web).

AWS continued to evolve until 2006, when it launched its Elastic Compute Cloud (EC2), which allowed small companies and individuals to rent compute capacity on which to run their own applications. Amazon set the bar as the first widely accessible cloud computing service.

But Amazon wasn’t alone under the Web 2.0 concept. In 2009, Google, along with others, launched a variety of browser-based apps. Google Apps, which targeted Microsoft Office, was perhaps the most memorable. Suddenly individuals were doing things “in the cloud” that had previously been done only on local computers. This step is considered one of the most important, and it drove Microsoft to respond with services like Office 365 and OneDrive, massively spreading this change.

Assisting behind the scenes was the concept of virtualization, provided by firms like VMware and Microsoft [Disclosure: Microsoft is a client of the writer], which allowed a single server not only to host multiple application loads but also to shift those loads dynamically and in real time, letting these cloud implementations scale while becoming increasingly less expensive. This rapid rise in capability and reduction in cost is largely what continues to drive cloud computing today.
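
To make that load-shifting idea concrete, here is a minimal, vendor-neutral sketch (not VMware’s or Microsoft’s actual API; the host names, capacity units and 80 percent threshold are all illustrative assumptions) of rebalancing virtual machine loads across a pool of hosts:

    # Hypothetical sketch: rebalance VM loads across a pool of virtualized hosts
    # so no single host runs above a saturation threshold. Names and numbers are
    # illustrative, not any vendor's product.
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        capacity: int                              # abstract capacity units
        vms: list = field(default_factory=list)    # list of (vm_name, load) tuples

        @property
        def load(self) -> int:
            return sum(load for _, load in self.vms)

    def rebalance(hosts: list, threshold: float = 0.8) -> None:
        """Move VMs off hosts running above `threshold` of capacity onto the least-loaded host."""
        for src in hosts:
            while src.load > threshold * src.capacity and src.vms:
                vm = max(src.vms, key=lambda v: v[1])                 # heaviest VM first
                dst = min(hosts, key=lambda h: h.load / h.capacity)   # host with most headroom
                if dst is src or dst.load + vm[1] > dst.capacity:
                    break                                             # nowhere better to put it
                src.vms.remove(vm)
                dst.vms.append(vm)

    hosts = [Host("host-a", 100, [("erp", 60), ("mail", 35)]),
             Host("host-b", 100, [("web", 20)])]
    rebalance(hosts)
    print([(h.name, h.load) for h in hosts])    # [('host-a', 35), ('host-b', 80)]

Real hypervisor schedulers weigh far more than raw load (memory, affinity, licensing and so on), but the core idea, moving the heaviest loads off saturated hosts to wherever there is headroom, is the same.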

However, the cloud isn’t perfect. A service driven largely by value and rapid scale-out has to give something up, and that something is often security and reliability, both of which generally remain stronger when hardware is under IT’s direct control. Outages and breaches at a variety of cloud service providers, coupled with laws and compliance rules that limit where data can flow (certain types of data can’t cross borders, or need extra protection as a result of regulatory requirements), made those trade-offs hard to ignore.

One of the most interesting stories about this came to me at a security conference several years ago. It was about a couple of engineers working for a large pharma company who needed to analyze a new drug. They went to IT and were quoted a cost of over $100,000 and a time frame of six to nine months. They went to a cloud provider instead, paid with credit cards, and completed the project in three weeks for a cost of $3,500. At a companywide event they shared an award for saving the firm money, only to be fired the following day for breaching company security because, apparently, the work had been done on machines in Eastern Europe that weren’t adequately protected, putting the entire project at risk.


And this kind of exposure was common: it wasn’t unusual for an internal audit review flagging excessive credit card use to turn up massive unauthorized Amazon or Salesforce implementations as employees went around IT to use unapproved cloud services. We’d had a big shift of financial control to line organizations, and these folks were using that power to build little secret, and often illegal, roads around IT.

These employee acts, coupled with the security, data privacy, reliability, performance and economic concerns, which often can be fluid, created a need for some kind of IT-managed bridge between “the cloud” and traditional IT resources, giving birth to the concept of the hybrid cloud. This is basically a virtual service under IT control that shifts loads between a variety of on-premises resources and a variety of cloud resources based on an increasingly complex rule set that ideally balances compliance, security, privacy and reliability against cost.
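
To illustrate what such a rule set might look like, here is a minimal sketch (the workload fields, regions and cost figures are hypothetical assumptions, not any vendor’s product) of a placement function that checks compliance and security first and lets cost decide last:

    # Hypothetical sketch of a hybrid-cloud placement rule set: compliance first,
    # then security and reliability, with cost as the tiebreaker. All field names,
    # regions and figures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        data_residency: str        # e.g. "EU-only" or "any"
        sensitivity: str           # "high", "medium" or "low"
        needs_high_uptime: bool
        monthly_cloud_cost: float
        monthly_onprem_cost: float

    def place(w: Workload, approved_cloud_regions: set) -> str:
        """Return 'on-prem' or 'cloud' for a given workload."""
        # Compliance: data that may not leave approved regions stays under IT's direct control.
        if w.data_residency != "any" and w.data_residency not in approved_cloud_regions:
            return "on-prem"
        # Security: highly sensitive workloads stay on hardware IT controls.
        if w.sensitivity == "high":
            return "on-prem"
        # Reliability: mission-critical loads stay on-prem unless the cloud is far cheaper.
        if w.needs_high_uptime and w.monthly_cloud_cost >= 0.5 * w.monthly_onprem_cost:
            return "on-prem"
        # Otherwise cost decides.
        return "cloud" if w.monthly_cloud_cost < w.monthly_onprem_cost else "on-prem"

    job = Workload("drug-analysis", "any", "medium", False, 3500.0, 100000.0)
    print(place(job, {"EU-only"}))    # -> cloud

In practice a hybrid-cloud broker evaluates many such rules continuously and can re-place a load when regulations, prices or threat levels change, which is exactly why the rule set keeps getting more complex.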


Applications and data that trend toward pervasive access are moved toward cloud models, while applications that trend toward security and mission-critical needs are more likely to stay on-premises, but the capability to move loads rapidly while assuring uptime is core to the overall effort.

It is amazing to me that the concept for “the cloud” goes back almost to my birth and has at its core the very elements that first gave us enterprise computing and IT (though we called it MIS back then).

This concept is particularly attractive to multinational companies, which have the greatest need for a variety of flexible services. As a result, they will likely prefer an equally multinational provider that can not only embrace both traditional and cloud resources, and provide them in close proximity to the enterprise locations that need them, but that also has at its heart the concept that forms the core of the cloud: the mainframe.

This is not just because the mainframe is where the cloud concept behind Licklider’s Intergalactic Computer Network came from, but because it was also created with a huge focus on the kinds of I/O users need and the reliability and security IT requires. Granted, it had to evolve to include flexibility, but that speaks to the last four or so decades of its evolution.

In short, ironically, the most capable hybrid cloud provider of the future could owe its success to the strongest mainframe provider of the past. Funny how we tend to go full circle.  

(www.cio.com)

Rob Enderle
