Hyper-converged infrastructure: Will the convergence trend transform the data centre?

15.03.2016
Data centre infrastructure is often complex and costly for businesses to run. But in recent years, hyper-converged systems have been touted as a way to offer greater flexibility, scalability and ease of management for on-premises systems.

Hyper-convergence builds on the widespread adoption of virtualisation and, to some degree, can be seen as part of the move towards much greater automation of data centre operations.

"The broader vision of it is this software-defined data centre layer where you have got a lot more choices underneath of what your infrastructure is going to be - because most of the intelligence is built into the software," says 451 Research analyst, John Abbott. "It then diverts the workloads to the most appropriate infrastructure"

So what does hyper-convergence mean for your business, and can it be the next step in automating control of the data centre?

In simple terms, hyper-convergence is an approach to infrastructure that combines server, storage and network functions, all managed via a software layer.

This means breaking down some of the traditional silos between parts of the data centre and - unlike converged infrastructure systems - relying on modular, commodity hardware. Software-defined storage is a key part of this, with hyper-converged appliances using locally attached storage rather than dedicated storage area networks (SANs).
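To make the pooling idea concrete, here is a minimal, purely illustrative Python sketch - not any vendor's actual implementation - of a software layer that aggregates each commodity node's locally attached disks into one logical pool and carves virtual machine storage out of it. All names here (Node, Cluster, provision_vm) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hypothetical commodity node: CPU cores plus locally attached disks (GB)."""
    name: str
    cpu_cores: int
    local_disks_gb: list[int]

    @property
    def storage_gb(self) -> int:
        return sum(self.local_disks_gb)

@dataclass
class Cluster:
    """Toy software layer that pools every node's local storage - no dedicated SAN."""
    nodes: list[Node] = field(default_factory=list)
    used_storage_gb: int = 0

    def add_node(self, node: Node) -> None:
        # Scaling out is just adding another module to the pool.
        self.nodes.append(node)

    @property
    def pooled_storage_gb(self) -> int:
        return sum(n.storage_gb for n in self.nodes)

    def provision_vm(self, name: str, disk_gb: int) -> str:
        # VM disks are carved from the shared pool, not a separate array.
        if self.used_storage_gb + disk_gb > self.pooled_storage_gb:
            raise RuntimeError("pool exhausted - add another node")
        self.used_storage_gb += disk_gb
        return f"{name}: {disk_gb}GB carved from the shared pool"

cluster = Cluster()
cluster.add_node(Node("node-1", cpu_cores=16, local_disks_gb=[960, 960]))
cluster.add_node(Node("node-2", cpu_cores=16, local_disks_gb=[960, 960]))
print(cluster.provision_vm("vm-01", disk_gb=500))
print(f"Pooled capacity: {cluster.pooled_storage_gb}GB")
```

A real product's software layer also handles replication, deduplication and failure domains; the point of the toy is only that storage capacity scales with the nodes rather than living in a separate array.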

Importantly, it can also mean that customers have one vendor to contact if anything goes wrong.

"The aim is to simplify IT," says Jesse St Laurent, VP of product strategy at Simplivity. "The amount of complexity is extremely high in IT infrascture and our goal is both from a capital and operating perspective to take those costs down by simplifying the infrastructure."

He says that a traditional environment has "ten to twelve different things to manage - appliances and software packages and stuff like that", whereas hyper-converged infrastructure relies on a "single unified interface embedded into something you already use - your hypervisor management toolkit" to manage all of the hardware.

Vendors take two main approaches: offering either a pre-integrated hardware appliance or software that can be downloaded.

For the most part, this means individual hyper-converged modules can easily be added to expand deployments, with virtual machines up and running within 15 minutes, according to some vendors.

And the market for hyper-converged systems is increasingly crowded - not surprising, given the growth being noted by industry-watchers.

According to IDC, sales of hyper-converged systems are set to reach $2 billion worldwide this year, doubling to nearly $4 billion by 2019.

Nutanix was arguably the first to offer a hyper-converged system, in 2011, and is now reportedly valued at around $2 billion. It was quickly joined by SimpliVity and other startups such as Scale Computing.

Established vendors have also scrambled to keep up with the market, often through partnerships with the startups.

For example, VMware has partnered with numerous hardware vendors, including Fujitsu and Dell, for its VSAN and EVO:RAIL products, while HPE has opted to go it alone with its HC 250 StoreVirtual system. Hitachi Data Systems also has a Hadoop-focused appliance, and EMC offers its VCE VxRack, VxRail and ScaleIO products.

In the last few weeks alone, VCE, Cisco and HPE have revealed more about their intentions in the market, while Juniper and Lenovo have also struck a deal.

So what is driving demand for hyper-converged infrastructure?

Early uptake has mostly been among smaller and mid-sized businesses with tighter budgets and smaller IT teams, where appliances might be their core infrastructure. But vendors claim enterprises are now making investments too, where operational costs and the need for agility are more of an issue.

Adoption has frequently centred around virtual desktop infrastructure (VDI) in particular, though disaster recovery and remote office or branch infrastructure deployments are common too.

One of the reasons is the use of locally attached storage rather than storage area networks.

"VDI is a good application because it is difficult to work out what resources you need to run it," says 451 Research's Abbott. "It has been hard to do that especially with backend SANs."

But, increasingly, a wider range of virtualised workloads including mission-critical applications are being run on hyper-converged infrastructure, as well as distributed NoSQL and Hadoop-based applications.

The ease with which new modules can be added means systems can be scaled to meet the demands of big-data analytics.

"There are an emerging set of applications - typically cloud native in nature or generated using highly defined software structures to provide resiliency - you start to see those as applications in the hyper-converged world," says VCE's EMEA CTO, Nigel Moulton.

This includes "apps that would run in a Hadoop distribution or that would use containerised Linux or Cassandra-style object-oriented databases as a data store - those sorts of application environments lend themselves to a hyper-converged infrastructure system."

The growing interest in hyper-converged infrastructure highlights a wider point: not all customers are keen to push workloads out to the public cloud, with security and performance concerns persisting.

However, many enterprises with on-premises infrastructure are keen to realise some of the efficiency benefits of cloud technologies and new application models.

"[IT teams] are starting to see if they can modernise their in-house architectures so that they are at least getting somewhere towards a private cloud," he says.

"The hyper-converged vendors are saying that this is an easier way of doing it - it is modular, supports new application models, is cheaper than traditional big systems and easier to scale."

SimpliVity's St Laurent claims the total cost of ownership of its systems can actually be lower than going to public cloud providers such as Amazon Web Services. However, he adds that cost should not be the main consideration when deciding where to place workloads.

"The message is 'don't assume that to get the agility you are seeking in your business, you need to rush out and move to the public cloud'," says St Laurent. "Often IT organisations would prefer to be internal - but it is this need for agility."

However, hyper-converged infrastructure is also seen as a way of bridging the gap to the public cloud.

The University of Wolverhampton - which has 21,000 students and 2,400 staff - is using HPE's HC 250 hyper-converged appliance, which supports the Helion OpenStack platform. This will make it easier to move workloads to public cloud providers where necessary.

"This gives us the ability to move services from our own private cloud to public cloud and between public cloud vendors," says Dean Harris, assistant director for ICT Infrastructure at the university.

Customers that have invested in hyper-converged infrastructure say the scalability and ease of deployment are some of the main advantages.

Harris says that investing in hyper-converged appliances allows the university's IT department to be far more agile in reacting to business demands, acting as a "broker" to internal resources or external cloud services.

"That puts us in a very strong position as an internal service provider," he says.

Automation is a key aspect of this. Harris adds that easier management means the university's first-line support and service desk can maintain its infrastructure. This frees up systems engineers to "deliver projects of value, and that makes a difference to the business".

"It is not just easier to manage, it is the skill level required," he says.

"Currently I have my systems engineer managing my storage environment because they need to know about firewalls, networking and LUN structures.

"With hyper-converged systems it is just lumps of storage, it is logical, and it is not all about the technical detail, so your level of skill can be reduced and allow your higher level of expertise staff to really focus on the most value-added things."

There are a number of areas where hyper-converged systems fall short, however.

Some say that running large, mission-critical applications is less suited to the hyper-converged model - one of the reasons why adoption has been lower among larger enterprises so far.

Applications such as finance or ERP software would need to be rewritten to run well on it, and are currently more suited to converged infrastructure.

"When you get to things like Oracle or SAP or the big enterprise apps they don't really run very well on hyper-converged yet," says 451 Research's Abbott.

"It is not such an obvious fit - you need to do a lot more optimisation and administration work with all of those non-distributed traditional enterprise apps with high transaction processing layers and things like that."

According to VCE's Moulton: "If you think of an application like an ERP system or something like Oracle financials, if either of those applications can't see the attached database, they stop working. And in most enterprise organisations that is a heart-stopping moment if it happens."

Another challenge for businesses is a change to the procurement model.

"Some people can't get their head around buying their storage and servers in one," says 451 Research's Abbott.

"Storage and compute does not necessarily evolve at the same pace, so if some new disk drives come out, you might want to buy those. It depends what your lifecycle of purchases is as well."

He adds that while one of the benefits of the hyper-converged model is the "modularity and simplicity", by offering more models and configurations to customers there is a "risk of introducing complexity".

And lock-in remains an issue.

"It depends how you view it," says Abbott. "Nutanix and others say that you are just buying a standard storage and server and that in some ways that's open because it's basically standard - it is not proprietary as such.

"But you have to buy it all from them and you have to rely on whether all vendors are going to be around over the next few years - so that is a problem.

"The ideal would be a separate software layer that you could run any hardware underneath and turn it into a hyper-converged system."

Nevertheless, it is clear that hyper-converged vendors are targeting all manner of applications.

SimpliVity's St Laurent says that while VDI has been a good entry point for early adoption, the company has always been focused on the "core of the data centre".

He says SimpliVity customers are already running a wide variety of software, including SQL Server, Exchange, SharePoint and Oracle, as well as VDI.

It is not quite there yet, though. "Hyper-convergence needs to continue to evolve in terms of the area of IT it can cover," he says.

"Can you hold your seven-year retention deep archive data on these environments Can you run your absolute mission-critical, sub-millisecond response time on these environments

"In terms of the bell curve of the market, hyper-convergence covers some portion of that today, but what is happening is we will see it continue to push out in both directions and we will see it a natural part of the infrastructure."

451 Research's John Abbott believes that hyper-convergence is another step towards software controlling the entire data centre.

"I think hyper-converged will be part of that," he says, "but I don't think it will be all of it."

This is because some converged infrastructure architectures also include "a lot of networking stuff which the hyper-converged people don't really address".

"The virtual networking side is where Cisco and VCE and Oracle puts some efforts in that the hyper-converged people haven't done yet."

"The hyper-converged people like to say that they are the next generation after converged infrastructure but I am not sure that is the case. They have done a very good of merging storage and compute, but that is only part of the story."

(www.computerworlduk.com)

By Matthew Finnegan
