6 key network considerations before migrating to the cloud

02.06.2016
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

As organizations shift more workloads to the cloud, they increasingly rely on networks and infrastructure they don't own or directly manage. Yet this infrastructure is just as critical as when applications and services were hosted in the data center. Exploring these networks in advance to identify choke points and routing issues informs better network investment and configuration decisions, and it is essential both for a successful cloud deployment and for keeping the environment optimized over time.

There are six key network considerations you should take into account before shifting to the cloud:

1. Baseline performance before deployment - Getting a baseline measure of network performance when moving to the cloud requires a different set of data points than organizations have typically used. The move to IaaS, SaaS or any other cloud service means organizations are beholden to those providers, as well as to the third-party service provider networks through which application and service traffic travels.

Before cloud computing, the network was essentially under an organization's control. It's not that those networks were less complex; it's that network teams had access to all the network data, such as packets, flows and device-level information, to monitor performance and security. And when issues arose, teams had the ability to troubleshoot and triage problems.

Traditional monitoring approaches worked in a world where the network was managed and contained, and its boundaries were well defined. These data sources are no longer available for monitoring the performance of the cloud network, however, where devices can't be directly instrumented or polled. Visibility into cloud and third-party networks can only come from data sources such as synthetic and end-user monitoring. These techniques provide quality-of-experience metrics that network teams can rely on to test configurations and baseline performance before moving to the cloud.
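To make this concrete, here is a minimal Python sketch of the kind of synthetic probe that can establish a latency baseline; the target URL and sample count are hypothetical stand-ins, and a real deployment would probe from many vantage points with purpose-built tooling.

    # Minimal synthetic HTTP probe for baselining; the endpoint is a
    # hypothetical stand-in for your application's health-check URL.
    import statistics
    import time
    import urllib.request

    TARGET = "https://app.example.com/health"  # hypothetical endpoint
    SAMPLES = 10

    timings = []
    for _ in range(SAMPLES):
        start = time.monotonic()
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()  # fetch the full body, as a real user would
        timings.append((time.monotonic() - start) * 1000)  # milliseconds
        time.sleep(1)  # space probes out so they don't skew each other

    print(f"median: {statistics.median(timings):.1f} ms")
    print(f"worst of {SAMPLES}: {max(timings):.1f} ms")

Run before migration from each office or site, numbers like these become the baseline against which post-deployment performance is judged.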

2. See and understand bottlenecks in your infrastructure - It's possible that, based on existing configurations and routing policies, certain offices and sites are not optimized to consume applications or services over the Internet. For example, a branch office in India may have trouble accessing Salesforce because of transcontinental latency and a bandwidth-constrained MPLS circuit. Similarly, a branch in Austin may have better and more reliable access to applications served from El Paso than from San Antonio.

But how do you know what the ideal configurations are before deploying? Time to take a look at your new data sources. The first in your arsenal is derived from a technique you may already be using to monitor application experience in terms of page load and user transaction timings. Synthetic monitoring can be used to gain insight into infrastructure and network performance as well as into the application itself. What's great about synthetic monitoring is that it works in cloud environments and across the Internet, in addition to your data center, providing in-depth information about how each portion of the network is performing.
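As a rough illustration of what synthetic monitoring surfaces beyond a single page-load number, the following Python sketch times the individual network phases of one fetch; the hostname is a hypothetical stand-in, and commercial tools capture far richer transaction timings.

    # Break one synthetic fetch into its network phases: DNS lookup,
    # TCP connect, TLS handshake and time to first byte.
    import socket
    import ssl
    import time

    HOST = "www.example.com"  # hypothetical SaaS front end

    t0 = time.monotonic()
    addr = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)[0][4][0]
    t_dns = time.monotonic()

    sock = socket.create_connection((addr, 443), timeout=10)
    t_tcp = time.monotonic()

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=HOST)
    t_tls = time.monotonic()

    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    tls.sendall(request.encode())
    tls.recv(1)  # block until the first response byte arrives
    t_ttfb = time.monotonic()
    tls.close()

    for label, a, b in [("dns", t0, t_dns), ("tcp connect", t_dns, t_tcp),
                        ("tls handshake", t_tcp, t_tls),
                        ("first byte", t_tls, t_ttfb)]:
        print(f"{label}: {(b - a) * 1000:.1f} ms")

A branch whose TCP connect time dwarfs its DNS time, for example, points at path latency rather than name resolution.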

3. Map out real traffic paths - Having visibility into the specific areas of a distributed network that are problematic, along with the details needed to determine a root cause, yields tangible, actionable information about both the network you own and those of your providers. To build a more complete picture of how traffic traverses the cloud network, combine synthetic tests with ping and traceroute. Network teams can then not only test the reachability of an endpoint or host through the cloud, but also use that data to determine the route a packet takes through a distributed network while measuring transit delays.
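A simple way to experiment with this, assuming a Unix-like machine with the system traceroute installed, is to wrap it from Python; flags vary by operating system, so treat this as a sketch rather than a portable tool.

    # Map the hop-by-hop path to a cloud endpoint by wrapping the system
    # traceroute; -n skips reverse DNS, -m caps the hop count (Linux flags).
    import subprocess

    def trace_path(host: str, max_hops: int = 30) -> str:
        result = subprocess.run(
            ["traceroute", "-n", "-m", str(max_hops), host],
            capture_output=True, text=True, timeout=120,
        )
        return result.stdout

    print(trace_path("app.example.com"))  # hypothetical SaaS hostname

Comparing these paths across offices, and over time, shows where traffic actually enters a provider's network and where delay accumulates.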

Another useful data point is Border Gateway Protocol (BGP) routing table information. BGP knits the Internet together by exchanging routing information between networks, and these routes determine how traffic flows to and from your apps and services. Layering this information into your diagnostic approach helps you understand how routing configurations and changes affect reachability, latency, loss, jitter and other performance metrics.
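For a taste of BGP data without running a router, a public data source can be queried. The sketch below assumes the public RIPEstat Data API (stat.ripe.net); the endpoint and field names belong to that service, not to any general standard, and may change.

    # Look up which autonomous systems originate a prefix in global BGP,
    # assuming the RIPEstat Data API; fields are service-specific.
    import json
    import urllib.request

    def bgp_origins(prefix: str) -> list:
        url = ("https://stat.ripe.net/data/prefix-overview/data.json"
               f"?resource={prefix}")
        with urllib.request.urlopen(url, timeout=15) as resp:
            payload = json.load(resp)
        return [(a.get("asn"), a.get("holder"))
                for a in payload.get("data", {}).get("asns", [])]

    print(bgp_origins("8.8.8.0/24"))  # expect Google's AS for this prefix

Watching origins and paths for the prefixes that host your applications flags routing changes that would otherwise surface only as mysterious latency shifts.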

4. Collaborate with SaaS and cloud providers - When it comes time to work with your SaaS, ISP and cloud providers, it helps to share your measurements, related visualizations and diagnostic information in an easily consumable way. That way, when a trouble ticket is generated, the vendor or provider can deal with it more quickly. Knowing exactly where a problem exists, along with the related cause analysis, empowers your teams to take immediate action and enables the responsible party to do the same.

5. Get things right before deployment - With detailed visualizations and hop-by-hop metrics, it becomes possible to try out a routing change, plan for a new data center or test the rollout of a new application or SaaS service. Synthetic testing, layered with contextual data and a detailed visual network topology, lets teams test performance, gain insight into initial infrastructure configurations, plan changes and understand their impact on cloud applications before anything goes live.
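One way to sketch such a pre-deployment test in Python is to run the same probe against each candidate serving location and compare medians; the hostnames below are hypothetical stand-ins for the alternatives being evaluated.

    # Compare candidate serving locations with an identical synthetic
    # test; hostnames are hypothetical examples, not real endpoints.
    import socket
    import statistics
    import time

    CANDIDATES = {
        "el-paso": "elp.app.example.com",
        "san-antonio": "sat.app.example.com",
    }

    def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
        timings = []
        for _ in range(samples):
            start = time.monotonic()
            socket.create_connection((host, port), timeout=10).close()
            timings.append((time.monotonic() - start) * 1000)
        return statistics.median(timings)

    for name, host in CANDIDATES.items():
        print(f"{name}: {tcp_connect_ms(host):.1f} ms median TCP connect")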

6. Continually monitor the performance of your network and its impact on applications - The same techniques and data sources that teams use to baseline performance and achieve optimal cloud configurations can be used on an ongoing basis to ensure continued performance. Active monitoring, layered with additional context, lets organizations watch continuously for performance degradation and optimize end-user experiences across their network.
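As a minimal illustration, the baselining probe from earlier can be turned into an ongoing watchdog; the threshold, interval and URL here are hypothetical knobs that a real monitoring platform would manage for you.

    # Ongoing check that alerts when latency drifts well above the
    # pre-deployment baseline; all constants are hypothetical examples.
    import time
    import urllib.request

    TARGET = "https://app.example.com/health"  # hypothetical endpoint
    BASELINE_MS = 180.0    # median measured before deployment
    ALERT_FACTOR = 2.0     # flag anything slower than 2x baseline
    INTERVAL_S = 60

    while True:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(TARGET, timeout=10) as resp:
                resp.read()
            elapsed_ms = (time.monotonic() - start) * 1000
            if elapsed_ms > BASELINE_MS * ALERT_FACTOR:
                print(f"ALERT: {elapsed_ms:.0f} ms vs {BASELINE_MS:.0f} ms baseline")
        except OSError as exc:
            print(f"ALERT: probe failed: {exc}")
        time.sleep(INTERVAL_S)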

The inclusion of visual analysis reduces the mean time to troubleshoot and repair issues. As the Internet and cloud computing reshape the enterprise network, incorporating these new data sources and correlating the pertinent data points in a unified visualization are critical to the successful deployment and management of cloud applications and services.

This virtuous cycle of better performance will only grow more important as organizations increasingly rely on the Internet for critical operations. Growing levels of software-based automation will multiply the network paths traffic traverses, and with them the variability of the resulting end-user experience. Accurate insight into the entire cloud network enables organizations to troubleshoot and solve cloud performance problems, alleviating potential cloud migraines and ensuring the business runs smoothly… and painlessly.

(www.networkworld.com)

By Nick Kephart, senior director of product marketing, ThousandEyes
