Storage considerations when you’re virtualizing the last 20% of your systems

22.12.2015
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Companies today are highly virtualized, realizing many of the benefits of an agile infrastructure. But enterprises often get stuck after virtualizing about 80% of their environment. The ‘last leg’ is difficult primarily because it increasingly taxes the underlying infrastructure, and storage in particular. Let’s look at the key storage considerations on the journey to 100% virtualization in three areas: storage platform architecture, data protection and management.

Virtualization drives infrastructure toward shared or pooled resource models, and the increased use of shared storage is one of the biggest impacts virtualization has on the datacenter. As more of the environment is virtualized, more applications and data are consolidated onto a single storage platform, making resiliency and high availability (five-nines) with no single point of failure a must-have storage architecture requirement.

Getting to a fully virtualized datacenter also means that the shared storage platform must be able to handle a variety of applications with different I/O and capacity characteristics. For example, virtualizing the last tier of critical databases can be far more demanding in I/O terms than, say, a document management system.

While each application’s I/O path is optimized for its own disk layout, consolidating these workloads creates what is known as the ‘I/O blender’ effect: many individually sequential streams are mixed by the hypervisor into one highly random stream, amplifying the burden on the storage system. Desktop virtualization greatly compounds the issue, as every desktop’s I/O is blended in with the virtual server farm’s I/O – especially during login storms at the start of a workday and during overnight batch and maintenance windows.
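
To make the effect concrete, here is a toy Python sketch (the stream sizes and VM count are made up) that interleaves several per-VM sequential block streams the way a hypervisor does, showing that the merged stream the array actually receives looks random:

```python
import random

def sequential_stream(start_lba, length, count):
    """One VM's workload: 'count' sequential reads starting at start_lba."""
    return [start_lba + i * length for i in range(count)]

# Illustrative example: four VMs, each doing perfectly sequential I/O
# in its own region of the shared datastore.
vm_streams = [sequential_stream(start, 8, 6) for start in (0, 10_000, 20_000, 30_000)]

# The hypervisor services all VMs concurrently, so their requests arrive
# at the array interleaved -- the "I/O blender".
blended = []
queues = [list(s) for s in vm_streams]
while any(queues):
    q = random.choice([q for q in queues if q])
    blended.append(q.pop(0))

print("one VM's stream (sequential):", vm_streams[0])
print("what the array sees (random):", blended[:12])
```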

A key consideration when deploying flash in a fully virtualized datacenter is to effectively serve the random I/O from virtualized servers and the bursty I/O from virtualized desktops.  That said, it is impractical to rely solely on flash because the cost of capacity compared to disk is still too high, and typically only a minority of virtualized applications need flash performance. 

That’s why a storage platform that is architected to effectively deploy both flash and disk to fit the needs of an enterprise is critical. The platform should also be able to flexibly and independently scale the amount of flash and disk in the datacenter as the mix of virtualized applications changes and grows.

When a range of applications is consolidated onto a storage platform with pooled resources, it’s also important that the platform can direct the right resources to the right applications – so that workloads that need high IOPS and low latency actually get flash-level performance.

Data protection is another reason disk is still an important medium in the virtualized data center, providing a cost-effective way to store rarely accessed data. Traditional data protection methods, such as copying data from each VM through a backup server, can still be used in a virtualized environment, but they become impractical as enterprises approach 100% virtualization.

Virtualization does open up new possibilities to architect data protection and disaster recovery efficiently. A consideration here is that the storage platform must offer snapshot technology that can store snapshots cost-effectively, and must also integrate with the virtualization platform to coordinate data protection workflows and ensure data is in a safe, consistent state to back up.

With the right storage platform and integration points, data does not need to move through the virtualization hosts and the network to be protected – snapshots that store only changed blocks can be taken directly on the storage platform, relieving the rest of the infrastructure of the data protection load. Replicating those snapshots to another site provides a very effective disaster recovery mechanism as well.

A modern, snapshot-based data protection strategy in fully virtualized environments can dramatically reduce recovery point objectives to one hour or less, shrink recovery time objectives from hours to minutes, and automate disaster recovery orchestration to push-button simplicity in the event of a disaster.
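
As a rough sketch of how such a workflow might be automated, the Python below assumes a purely hypothetical storage array REST API – the endpoints, host names and credentials are illustrative, not any specific vendor’s interface – and takes a VM-consistent snapshot every hour, then replicates only the changed blocks to a DR site:

```python
import time
import requests

ARRAY = "https://array.example.com/api/v1"   # hypothetical array REST API
AUTH = ("admin", "secret")                    # placeholder credentials

def protect(volume, dr_target, interval_s=3600):
    """Hourly snapshot + replication loop (illustrative only)."""
    while True:
        name = f"{volume}-snap-{int(time.time())}"
        # 1. Ask the array for a VM-consistent snapshot. On a real platform,
        #    this is where the vSphere integration would quiesce the VMs
        #    before the snapshot is cut.
        requests.post(f"{ARRAY}/volumes/{volume}/snapshots",
                      json={"name": name, "consistency": "vm"}, auth=AUTH)
        # 2. Replicate only the changed blocks to the DR array.
        requests.post(f"{ARRAY}/snapshots/{name}/replicate",
                      json={"target": dr_target}, auth=AUTH)
        time.sleep(interval_s)   # a recovery point objective of one hour or better

protect("prod-sql-datastore", "dr-array.example.com")   # runs indefinitely
```

Because only changed blocks move, an hourly cycle like this imposes far less load than streaming full VM images through a backup server.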

With the right storage architecture, snapshots can also be leveraged to create clones for test and development without extra space or performance overhead. This accelerates enterprise agility by providing a DevOps capability for every application, including the most business-critical ones, so that test and development environments are not forced to lag far behind production as they often do with traditional copy-based methods.
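
Continuing with the same hypothetical API, creating a space-efficient dev/test copy then becomes a single metadata operation rather than a full data copy:

```python
import requests

ARRAY = "https://array.example.com/api/v1"   # same hypothetical API as above
AUTH = ("admin", "secret")

# Clone a production snapshot into a writable, zero-copy dev/test volume;
# only new writes consume space, so dev can track production closely.
requests.post(f"{ARRAY}/snapshots/prod-sql-datastore-snap-001/clone",
              json={"name": "dev-sql-datastore", "writable": True}, auth=AUTH)
```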

Tight integration is key

Getting to a fully virtualized environment means the storage system must integrate tightly with the virtualization platform, so that the latter not only understands what the storage system is capable of, but also enables storage management directly from the virtualization console. This allows virtualization administrators to provision and manage storage without requiring them to become storage administrators.

With VMware vSphere, for example, this essentially means that VM admins should be able to manage their storage systems through a vCenter plugin. For VM administrators to manage and scale their environment effectively, the storage platform must be simple to manage from an application or workload perspective. Policy-based management is needed so administrators can provision storage for VMs from application and service-level profiles without having to be experts in storage constructs such as RAID groups and block sizes.
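
As a sketch of what policy-based provisioning could look like behind such a plugin – the profile names, their settings and the provision_datastore helper are all hypothetical – the administrator picks a service level and the plugin translates it into storage settings:

```python
# Hypothetical service-level profiles an administrator would choose from
# in the vCenter plugin instead of specifying RAID groups or block sizes.
PROFILES = {
    "business-critical": {"media": "flash",  "iops_limit": None,  "snapshot_every_min": 15},
    "general-purpose":   {"media": "hybrid", "iops_limit": 20000, "snapshot_every_min": 60},
    "dev-test":          {"media": "disk",   "iops_limit": 5000,  "snapshot_every_min": 240},
}

def provision_datastore(name, size_gb, profile_name):
    """Translate an application-level profile into storage settings (illustrative)."""
    profile = PROFILES[profile_name]
    # In a real integration this would call the array and vSphere APIs;
    # here we only show the mapping the plugin performs on the admin's behalf.
    return {"datastore": name, "size_gb": size_gb, **profile}

print(provision_datastore("erp-prod", 2048, "business-critical"))
```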

Today, intelligent cloud services are at the leading edge of technologies that help VM administrators manage and monitor their storage systems at scale, because they remove the need to deploy any monitoring infrastructure on premises. These services crunch through the machine data generated by storage systems across the enterprise to help manage the entire storage environment, whether it is centralized at one site or geographically distributed.

Cloud monitoring systems should not only provide monitoring capabilities through a web portal, but also proactively notify administrators of issues and predict future needs based on trends in the virtualized environment.
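
For example, a monitoring service might forecast when a pool will fill up from its recent growth trend; this minimal sketch (with made-up daily usage samples) simply fits a straight line to the data:

```python
# Minimal capacity-trend forecast: fit a line to daily used-TB samples
# (made-up numbers) and estimate when a 100 TB pool fills up.
used_tb = [61.2, 61.9, 62.5, 63.4, 64.0, 64.8, 65.5]   # last 7 days
days = range(len(used_tb))

n = len(used_tb)
mean_x = sum(days) / n
mean_y = sum(used_tb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, used_tb)) \
        / sum((x - mean_x) ** 2 for x in days)          # TB per day

pool_tb = 100.0
days_left = (pool_tb - used_tb[-1]) / slope
print(f"Growing ~{slope:.2f} TB/day; pool full in ~{days_left:.0f} days")
```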

A key consideration here is that all of this must operate at VM-level granularity, and ideally provide insight into the entire stack. When consolidating applications onto a storage platform, for example, “noisy neighbor” issues arise in which, unknown to the administrator, one VM consumes an unfair share of resources. Noisy neighbor problems are a prime example of how a cloud service with intelligent VM-level analytics can help find, troubleshoot and alleviate the problem at scale with just a few clicks.
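
As a rough illustration of the kind of VM-level analytics involved, the sketch below flags any VM whose share of a datastore’s IOPS exceeds a threshold; the VM names, the numbers and the 30% threshold are all invented:

```python
# Flag "noisy neighbor" VMs: any VM consuming more than a set share of
# the datastore's total IOPS (names, stats and threshold are made up).
vm_iops = {
    "erp-db-01":   1800,
    "exchange-02":  950,
    "vdi-pool-17": 7400,   # runaway desktop pool
    "file-srv-03":  600,
}

def noisy_neighbors(per_vm_iops, share_threshold=0.30):
    total = sum(per_vm_iops.values())
    return [(vm, iops / total) for vm, iops in per_vm_iops.items()
            if iops / total > share_threshold]

for vm, share in noisy_neighbors(vm_iops):
    print(f"{vm} is using {share:.0%} of datastore IOPS -- investigate or throttle")
```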

Server virtualization makes enterprises flexible, scalable and always-on, lays the foundation for private clouds and provides a bridge to the public cloud. On the journey to 100% virtualization, carefully weighing these storage considerations while leveraging the latest technologies can remove the thorniest ‘last leg’ obstacles and enable enterprises to realize the full potential of a fully virtualized environment.

(www.networkworld.com)

By Sheldon D’Paiva, Sr. Product Marketing Manager, Nimble Storage
