How Google looked to the past to develop a network for the future

August 18, 2015
When Google needed to expand its network beyond what could be supported by the commercial switches of the day, it looked back in time for a solution—deploying a decades-old architecture novel to the computer industry but widely used by telephone companies.

The company found that the Clos architecture, originally designed for telephones, also provided an effective way to connect large numbers of chatty servers, and paved the way for Google to build a single control plane to route all its traffic.

Going with Clos was one of the design choices that Google engineers will discuss Wednesday at the Association for Computing Machinery’s SIGCOMM (Special Interest Group on Data Communications) conference being held in London.

“Because we’ve been willing to go outside the box to design our network infrastructure, we’ve learned a lot of lessons,” said Amin Vahdat, a Google distinguished engineer who co-authored the paper accompanying the talk.

To date, Google has largely kept quiet about the design of its network infrastructure, which now underpins both its internal and public-facing operations.

“We don’t have individual compute infrastructures for individual applications. It’s not like we have a Gmail cluster or a Photos cluster. It all runs on shared infrastructure,” Vahdat said.

A unified platform can save money because it allows the company to use its compute resources more efficiently.

It has also enabled the development of new big-data technologies such as MapReduce, which wouldn’t work if network connections had to be configured manually for each new job.

Vahdat sketched out, at a high level, how the Google network is designed at another conference in June; the new paper offers the gritty details of all the work Google did to arrive at its current architecture.

The paper describes the five generations of network topologies that Google iterated through in the past decade to get to its current design.

The problem Google faced was rapid traffic growth: its need for network bandwidth roughly doubles every 12 to 15 months.
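To get a feel for how quickly such a doubling rate compounds, here is a small back-of-the-envelope calculation. The doubling interval is the only figure taken from the article; the ten-year horizon is an arbitrary choice for illustration.

```python
# Back-of-the-envelope: how a 12-to-15-month doubling time compounds.
# Only the doubling interval comes from the article; the ten-year
# horizon is an arbitrary choice for this illustration.
for months_to_double in (12, 15):
    doublings = (10 * 12) / months_to_double  # doublings in ten years
    print(f"doubling every {months_to_double} months -> "
          f"~{2 ** doublings:,.0f}x more bandwidth in a decade")
```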

“We couldn’t buy, at any price, a network that would meet our performance requirements, our size requirements or meet the manageability requirements we had,” Vahdat said.

Even the largest commercially available data center switches, the ones built for telecommunications companies, weren’t suited to such rapidly growing traffic, for a variety of reasons.

For one, commercial switches of the day were designed to be managed independently. Google wanted switches that could be managed as a group, just like its servers and storage arrays. This would allow the company to treat an entire data center as if it were a single humongous compute node.

Also, commercial switches were expensive, given that they offered a lot of features to ensure the highest reliability.

Google engineers came to realize that if they designed their network differently than the industry norms, they wouldn’t have to rely on such expensive switches. This is where Clos came in.

Developed in 1953 by Bell Labs engineer Charles Clos for the telephone network, the Clos topology describes how to connect a large number of endpoints, any of which may need to talk to one another, through multiple stages of small switches.

“It’s a really beautiful idea,” Vahdat said, speaking of Clos. “It basically allows you to take very small commodity switching elements and, by arranging things appropriately, be able to scale out to an arbitrary size.”
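The arithmetic behind that scaling is simple. The sketch below is the textbook capacity calculation for a three-stage folded-Clos (fat-tree) fabric built from identical k-port switches, not code from Google’s paper: host capacity grows with the cube of the port count.

```python
# Textbook scaling of a 3-stage folded-Clos (fat-tree) fabric built
# from identical k-port commodity switches; illustrative only, not
# code from Google's paper.
def fat_tree_hosts(k: int) -> int:
    """Hosts supported by a 3-stage fat-tree of k-port switches.

    There are k pods, each with k/2 edge switches, and each edge
    switch dedicates k/2 ports to hosts: k * (k/2) * (k/2) hosts.
    """
    assert k % 2 == 0, "port count must be even"
    return k * (k // 2) * (k // 2)  # == k**3 // 4

for k in (24, 48, 64):
    print(f"{k}-port switches -> {fat_tree_hosts(k):,} hosts")
```

With 48-port commodity switches, for example, a single fabric can already reach 27,648 hosts, which is the sense in which small switching elements scale out to arbitrary size.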

“We certainly didn’t invent this, but we were able to rediscover it, leverage it and apply it to our setting,” Vahdat said.

The Clos design also allowed Google to manage the entire network as a single entity.

Typically, Internet-grade switches are built to make many of the decisions themselves about where best to send a data packet next so that it reaches its destination. For an open-ended network like the Internet, that approach makes sense.

Google didn’t need all that intelligence in its switches, because it already knew the topology of its data center network.

So instead, Google controlled the routes through a centralized operation, which issued instructions to each switch about where to send its packets.
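In spirit, a centralized controller that already knows the whole topology can compute every switch’s next hop itself and simply push the results down. The toy sketch below illustrates that idea; the switch names and topology are invented for the example, not Google’s actual control plane.

```python
# Toy sketch of centralized route control: a controller that knows
# the full topology computes each switch's next hop toward every
# destination and pushes it down, so switches just follow the table.
# The topology and names are illustrative, not Google's design.
from collections import deque

# Illustrative topology: switch -> directly connected switches.
topology = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def next_hops_to(dest: str) -> dict[str, str]:
    """BFS outward from the destination, recording each switch's next hop."""
    hops, frontier = {}, deque([dest])
    while frontier:
        node = frontier.popleft()
        for neighbor in topology[node]:
            if neighbor != dest and neighbor not in hops:
                hops[neighbor] = node  # forward via the node we came from
                frontier.append(neighbor)
    return hops

# "Push" a forwarding entry to every switch for every destination.
for dest in topology:
    for switch, nexthop in sorted(next_hops_to(dest).items()):
        print(f"install on {switch}: traffic for {dest} -> {nexthop}")
```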

In effect, Google was practicing network virtualization before the term became a buzzword, Vahdat said.

The lessons Google learned could be useful to other enterprises and Internet services companies. Many enterprises now have the infrastructure that Google itself had when it started investigating this problem 10 years ago, and may be facing similar limitations, Vahdat said.

Joab Jackson
