In the Software Defined Data Center, application response time trumps infrastructure capacity management

April 22, 2016
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

The adoption of software-defined data center (SDDC) technologies is driven by tremendous potential for dynamic scalability and business agility, but the transition is fraught with complexities that need to be considered.

This ecosystem relies on the abstraction or pooling of physical resources (primarily compute, network and storage) by means of virtualization. With software orchestrating new or updated services, the promise is that these resources can be provisioned in real time, without human intervention. In essence, this is the technology response to the agility demands of the modern digital business.

The term SDDC can be applied to today’s public clouds (the Amazon, Google and Microsoft clouds certainly qualify) and to tomorrow’s private and hybrid clouds as organizations accelerate their transition towards providing data center infrastructure as a service (IaaS). And as in today’s enterprise data centers, tomorrow’s SDDCs will likely support a mix of packaged applications and applications you develop and maintain.

One of the tenets of the SDDC is that capacity is dynamically scalable; not unlimited, of course, but that’s not necessarily a bad way to think of it. This means capacity is treated differently than in the past. The ability to spin up new servers to meet spikes in demand, automatically connect these based on pre-defined policies, and then destroy them as demand wanes, will become the new SDDC paradigm. Instead of being used for alert-generating thresholds, resource utilization becomes an input to the scale-out algorithm.
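To make this concrete, here is a minimal sketch of what such a utilization-driven scale-out decision might look like. The thresholds, instance limits and the source of the utilization metric are illustrative assumptions, not any particular platform’s API:

# Minimal sketch of a utilization-driven scale-out policy.
# Thresholds, limits and the utilization metric are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ScalePolicy:
    scale_out_at: float = 0.75   # add capacity above 75% average utilization
    scale_in_at: float = 0.30    # remove capacity below 30% average utilization
    min_instances: int = 2
    max_instances: int = 20

def desired_instance_count(current: int, avg_utilization: float, policy: ScalePolicy) -> int:
    # Utilization feeds the scaling decision rather than an alerting threshold.
    if avg_utilization > policy.scale_out_at:
        current += 1
    elif avg_utilization < policy.scale_in_at:
        current -= 1
    return max(policy.min_instances, min(policy.max_instances, current))

# Example: eight app servers averaging 82% utilization -> the policy asks for nine.
print(desired_instance_count(8, 0.82, ScalePolicy()))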

The infrastructure may be self-regulating, elastic and automated, but that doesn’t absolve us of the requirement for performance monitoring. Instead, it shifts the emphasis from infrastructure capacity management to application (or service) response time. The applications served can be made up of a medley of components and services relying on different stacks and platforms, requiring at least a few different approaches to performance monitoring. In fact, the adoption of multiple monitoring solutions – while a practical necessity – brings operational challenges of its own.

And with dozens, hundreds or even thousands of services required to deliver an application, service quality can no longer be defined by the performance or health of individual components. Virtualization obscures critical performance visibility at the same time that complex service dependencies challenge even the best performance analysts and the most effective war rooms. Attempting to define quality component by component invites avalanches of false positives – “internal” alerts that are more informational than actionable.

Consider for a minute a popular SaaS application – Salesforce. It’s delivered as a service from a cloud, just as your own internally built applications might someday be delivered as services from your private SDDC cloud. How do you, as a member of an IT team responsible for your organization’s application services, evaluate Salesforce service quality?

It’s unlikely you care about the number of VMs used, the utilization of these VMs, or the network congestion in the tunnel between the app and DB server tiers. You don’t want to hear about throughput on Salesforce’s REST API to KiteDesk. Instead, you care about the experience of your end users. How long does it take to log in to the application? How responsive is site navigation? Are users encountering errors or availability problems?
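Those questions can be sampled directly. Below is a minimal sketch of a synthetic probe that times two representative page requests and records any errors; the base URL and paths are placeholders rather than a real Salesforce endpoint, and a production check would typically drive a real browser to capture client-side rendering time as well:

# Minimal synthetic end-user-experience probe. The URL and paths are
# placeholders; a production check would also script a real browser.
import time
import requests

BASE = "https://app.example.com"   # hypothetical application, not a real service

def timed_get(session: requests.Session, path: str) -> tuple[float, int]:
    start = time.perf_counter()
    resp = session.get(f"{BASE}{path}", timeout=10)
    return time.perf_counter() - start, resp.status_code

def run_probe() -> dict:
    session = requests.Session()
    login_seconds, login_status = timed_get(session, "/login")
    nav_seconds, nav_status = timed_get(session, "/dashboard")
    return {
        "login_seconds": round(login_seconds, 3),
        "navigation_seconds": round(nav_seconds, 3),
        "errors": [code for code in (login_status, nav_status) if code >= 400],
    }

if __name__ == "__main__":
    print(run_probe())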

This example illustrates how the service quality of a SaaS application can be abstracted to end-user experience (EUE). And as your private SDDC increasingly facilitates the delivery of applications as services, it holds a valuable lesson – service quality is in the eye of the consumer.

This doesn’t mean that data center monitoring is passé. You will still need to monitor infrastructure and application services, at least as a means of refining automation behavior and policy definitions. But insight into EUE does provide a lens through which to qualify these internal metrics.
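One way to picture that lens – purely as a sketch, with made-up thresholds and labels – is a triage rule that only escalates an internal signal when end users are actually affected:

# Sketch: using EUE to qualify an internal infrastructure signal.
# The tolerable-response threshold and decision labels are illustrative assumptions.
def triage(internal_threshold_breached: bool, eue_response_seconds: float,
           eue_tolerable_seconds: float = 2.0) -> str:
    if not internal_threshold_breached:
        return "no action"
    if eue_response_seconds <= eue_tolerable_seconds:
        return "log for capacity and policy review"   # users unaffected
    return "escalate and adjust scaling policy"       # users feel it

print(triage(True, 0.8))   # internal threshold crossed, users fine
print(triage(True, 3.5))   # internal threshold crossed, users suffering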

As another example, let’s say a new app server is dynamically added to help support increasing demand, connected to the DB tier via a new network tunnel. The new location of the app server adds a few hundred microseconds to the tunnel’s latency. Could this be a problem?

That depends. For some applications there would be no noticeable difference in EUE. But for others, even a few hundred microseconds of network delay at this tier might degrade the user’s experience to something on the wrong side of tolerable. Clearly, one scenario can be ignored, while the other demands immediate attention – to fix, of course, but also to ensure that policies are adjusted to prevent recurrence.
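A back-of-the-envelope calculation shows why the same added latency cuts both ways; the per-transaction round-trip counts below are assumptions chosen for illustration, not measurements:

# The same 300 microseconds of added app-to-DB latency, two application profiles.
# Round-trip counts per user transaction are illustrative assumptions.
added_latency_us = 300

profiles = {
    "batched (5 DB round trips per transaction)": 5,
    "chatty (2000 DB round trips per transaction)": 2000,
}

for name, round_trips in profiles.items():
    added_ms = round_trips * added_latency_us / 1000
    print(f"{name}: +{added_ms:.1f} ms per user transaction")

# The batched profile gains about 1.5 ms, invisible to users; the chatty
# profile gains about 600 ms, which users will certainly notice.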

Only EUE can put service quality into meaningful perspective.

While EUE has become popular over the last few years as a shared business and IT metric – a “nice-to-have” view – the abstractions and complexities of the SDDC will make it a critical metric for applications delivered as services.

Just as the applications rely on multiple platforms and stacks, so do your end users. Browsers and mobile apps may dominate the news, but many applications use other client types, ranging from Oracle Forms and SAPGUI to RMI and virtual desktop infrastructures.

As such, you’ll need to choose appropriate EUE monitoring approaches to ensure that (at least) all of your important applications are represented; several approaches are in common use, ranging from synthetic transaction testing to real-user monitoring.
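As one illustration of the real-user side – a rough proxy only, since true EUE capture happens in the browser or on the wire – here is a sketch that records per-request response times at the application tier:

# Sketch of a rough real-user proxy: per-request response times recorded at the
# application tier. True EUE capture happens in the browser or on the wire;
# this WSGI middleware is only an illustration.
import time
from collections import defaultdict

class ResponseTimeMiddleware:
    def __init__(self, app):
        self.app = app
        self.samples = defaultdict(list)   # URL path -> list of seconds

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        try:
            return self.app(environ, start_response)
        finally:
            path = environ.get("PATH_INFO", "/")
            self.samples[path].append(time.perf_counter() - start)

# Usage with any WSGI application:
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

monitored_app = ResponseTimeMiddleware(hello_app)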

As IT continues down the path toward service orientation, its role must shift to that of a competitive provider of application services – and EUE will become the overriding quality metric used by successful organizations.

(www.networkworld.com)

By Gary Kaiser, Dynatrace
