Utility Computing

Plug and Pay

April 21, 2003, by Fred Hapgood
The resources a new project will need are hard to predict. Ideally, processing power and storage capacity would be available exactly when they are needed, without lengthy procedures. That is the promise of utility computing.

Source: CIO, USA

Utility computing: Who would have thought that a technology with such a pedestrian label would become a top IT story?

During the past two years, most of the leading IT services companies have announced initiatives with that unprepossessing name. All the products and services sold under that banner appeal to a common vision: computing tasks buying what they need and only what they need, automatically, from a huge pool of interoperable resources (potentially as large as the whole Internet). Each task or transaction would have an account and a budget and would run up payables; every resource would record and collect receivables. Computing power would be as easy to access as water or electricity. While the products and services currently being introduced under utility computing do not go this entire distance, they move a long way in that direction.
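To make that vision concrete, here is a minimal sketch, in Python, of the accounting model just described: each transaction carries an account and a budget, each resource in the pool records receivables, and every purchase debits one side and credits the other. All names, prices and units are hypothetical, invented only for illustration.

    # A minimal sketch of the accounting model described above. Every name,
    # price and unit here is invented for illustration.

    class Resource:
        def __init__(self, name, unit_price):
            self.name = name
            self.unit_price = unit_price   # price per unit (e.g. per CPU-second)
            self.receivables = 0.0         # what this resource is owed

        def sell(self, units):
            charge = units * self.unit_price
            self.receivables += charge     # the resource records what it will collect
            return charge

    class Transaction:
        def __init__(self, account, budget):
            self.account = account
            self.budget = budget
            self.payables = 0.0            # what this transaction has run up

        def buy(self, resource, units):
            charge = units * resource.unit_price
            if charge > self.budget - self.payables:
                raise RuntimeError("transaction %s is over budget" % self.account)
            resource.sell(units)
            self.payables += charge        # the transaction runs up payables

    # Example: one online transaction buys processing and storage as it needs them.
    cpu = Resource("cpu-seconds", unit_price=0.002)
    disk = Resource("gigabyte-hours", unit_price=0.001)
    order = Transaction(account="web-order-4711", budget=0.05)
    order.buy(cpu, units=12)
    order.buy(disk, units=3)
    print(order.payables, cpu.receivables, disk.receivables)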

Consider American Express Executive Vice President and CIO Glen Salow's situation. As at many companies, the introduction of a new product at AmEx typically triggers traffic surges on the enterprise network. Some of that traffic will support marketing efforts, some technical support and some the service itself, such as executing an online transaction. It is critical that adequate resources be in place to support that service, particularly during the early days of an introduction. Yet calculating ahead of time what this demand surge will be is almost impossible.

Until now, all a CIO could do was overprovision, but as Salow points out, that imposes a double penalty: paying for more capacity than is technically necessary and waiting for the new equipment to be installed and tested. "I don't want to tell marketing that I need six months to have the infrastructure in place," he says. So Salow took a different approach and structured a deal with IBM Global Services to buy storage and processing for delivery over a network, per increment of traffic demand. That is not utility computing in the purest sense, since resource procurement is not calculated automatically or per transaction. But the term still applies because of the much tighter fit it allows between the provisioning and demand curves. The advantages of utility computing are self-evident: Resource use becomes more efficient, and because resource changes are automatic or at least highly automated, it also conserves management time. By contrast, the current system - in which IT hooks up and exhausts large blocks of resources in a general free-for-all, at which point another large block is trucked in and wired in place - looks antediluvian. On paper, at least, the case for the transition to utility computing seems compelling.
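The difference between the two approaches can be shown in a few lines. The sketch below contrasts buying capacity per increment of demand with a fixed worst-case provision; the increment size, headroom factor and demand figures are invented for illustration and are not drawn from the AmEx/IBM deal.

    # A sketch of "buying per increment of traffic demand" versus overprovisioning.
    # All numbers are illustrative only.

    INCREMENT = 100          # units of capacity bought per step
    HEADROOM = 1.2           # keep 20% of capacity above observed demand

    def provision(demand_curve):
        capacity = INCREMENT
        history = []
        for demand in demand_curve:
            while capacity < demand * HEADROOM:
                capacity += INCREMENT          # buy one more increment as demand rises
            history.append((demand, capacity))
        return history

    # A product launch: demand surges unpredictably, provisioning follows it.
    launch_week = [80, 150, 400, 950, 700, 620, 580]
    overprovisioned = max(launch_week) * 2     # a worst-case guess, bought up front
    for demand, capacity in provision(launch_week):
        print("demand %4d -> provisioned %4d (vs. %d bought up front)"
              % (demand, capacity, overprovisioned))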

A.K.A. Outsourcing?

Unfortunately it is very hard to get from here to there. Companies assemble current systems out of silos of resources, which they then fine-tune to local operating requirements. Some of those resources sit inside the firewall and some outside; some run under Unix and some under Windows; and some are PCs and some are Macs. "Suppose an application is qualified on Solaris 8," says Peter Jeffcock, group marketing manager for Sun Microsystems. "Finding a processor running Solaris 7 will not be helpful." He compares imposing utility computing on the average network to trying to build an electrical power market if every state generated a different brand of electricity.

As a result, many vendors are selling what might be thought of as "outsourced utility computing," in which they provide resources over the Internet, matching delivery to demand at least semiautomatically, perhaps through a webpage. One appeal of such services is their level of automation. Mobil Travel Guide is developing a complicated new mapping service, Mobil Companion, that will support a high level of interactivity between travelers and facilities such as hotels, parks and museums. (For instance, tourists planning a journey will be able to buy tickets and make reservations along their intended route with a few clicks.) But the service will be intensely transactional and prone to unpredictable peaks. "I needed a whole new architecture," says CIO Paul Mercurio, "but I also needed to focus my development team and spend my money on the product, not on building the network." So in October 2002, Mercurio started buying networking resources from another virtual utility computing service, Virtual Linux Server, also from IBM.

Unlike AmEx's Salow, Mercurio is willing to accept a higher degree of dependence on his vendor. He says his level of comfort springs in part from his background in travel reservation services, which have been using utility computing-like services for years - travel companies generally pay for resources not by reserving blocks of capacity ahead of time (and still less by wiring in hardware) but by the transaction. Mercurio expects utility computing to move to that same model. "In 10 years we won't be needing database administrators," he speculates. "Each transaction will just buy the resources it needs."

Some companies even plan to move almost entirely to the outsourced utility computing model. Recently, Inpharmatica, a British pharmaceutical company, finished participating in a utility computing pilot program just launched by Gateway. "Two to three years ago, we built a 2,300-plus processor compute farm with 25 terabytes of storage," says Inpharmatica CIO Pat Leach. "Building it was very interesting stuff, but we are a drug discovery company, not an IT shop. We would much rather employ people to do innovative analysis than spend time building computers. As demand exceeds capacity, I hope to use compute-on-demand to top up and eventually replace our computer farm."

Profitable Utility

Utility computing by its nature is antagonistic to the idea of drawing a sharp line between local and external resources; if all resources are interoperable, a transaction should never need to know whether the processing it buys comes from inside or outside. Keith Morrow, CIO of 7-Eleven, buys processing cycles and storage capacity from EDS. However, he plans to extend the concept internally by offering the same relationship to 7-Eleven's divisions, departments and franchisees. He would like to buy processing cycles and storage capacity from EDS, use those to support application processes, and then sell access to those processes internally on a per-transaction basis. The end user would not know - and would have no reason to know - that he was buying a composite product. (Morrow is moving a step closer in another respect: He lets his system buy its own storage; all he asks is that his network send him a monthly report detailing its purchasing decisions. He still orders processing manually, since that resource comes in pricier units.)
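What "letting the system buy its own storage" might look like in practice is sketched below: utilization crossing a threshold triggers an automatic purchase, every purchase is logged, and the log becomes the monthly report Morrow asks for. The thresholds, prices and increments are assumptions made for the sake of the example, not details of 7-Eleven's arrangement with EDS.

    # A sketch of automated storage purchasing with a monthly report.
    # Thresholds, prices and increments are hypothetical.

    import datetime

    class StoragePool:
        def __init__(self, capacity_gb, threshold=0.85, increment_gb=500,
                     price_per_gb=2.50):
            self.capacity_gb = capacity_gb
            self.used_gb = 0.0
            self.threshold = threshold
            self.increment_gb = increment_gb
            self.price_per_gb = price_per_gb
            self.purchase_log = []              # the basis for the monthly report

        def write(self, gigabytes):
            self.used_gb += gigabytes
            # The system buys more storage on its own when utilization crosses
            # the threshold; no human approval is in the loop.
            while self.used_gb > self.capacity_gb * self.threshold:
                self.capacity_gb += self.increment_gb
                self.purchase_log.append({
                    "when": datetime.date.today(),
                    "bought_gb": self.increment_gb,
                    "cost": self.increment_gb * self.price_per_gb,
                })

        def monthly_report(self):
            total = sum(p["cost"] for p in self.purchase_log)
            lines = ["%s  +%d GB  $%.2f" % (p["when"], p["bought_gb"], p["cost"])
                     for p in self.purchase_log]
            return "\n".join(lines + ["total: $%.2f" % total])

    pool = StoragePool(capacity_gb=1000)
    pool.write(900)      # a traffic surge pushes utilization past the threshold
    print(pool.monthly_report())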

The idea of IT becoming a profit center may seem strange, but it seems like an inevitable consequence of the transition to utility computing. Gateway, for instance, has a tremendous number of demo machines and training workstations doing nothing in hundreds of stores, most of which have T1 data lines already hooked up. Recently the company connected about 8,000 of those machines (using United Devices' MetaProcessor platform) into the previously mentioned on-demand service, which can deliver an astonishing 14 teraflops, making it one of the fastest machines in the world. (One of the advantages of buying processing from computer vendors is that as they upgrade their stock, that performance number will rise automatically.) According to Bob Burnett, executive vice president and CTO of Gateway, its big concern was keeping the retail side of operations completely unaffected. "We were striving for an obtrusiveness of zero," he says - and he got it.

Local Utility

One way of bringing utility computing inside - given the huge incompatibilities that exist in most established networks - is to dedicate a special computer to the task. Several companies, such as Hewlett-Packard, Inkra and Opsware, sell software that will partition a computer (often a mainframe) into several perfectly interoperable environments, keep track of resource use on a per-transaction basis and bill accordingly. If a transaction requires, say, an unusual operating system, that OS can boot in its partition to support just that transaction. Cognigen, a data analysis consultancy for the biotech and health-care industries, recently bought some utility computing software from Sun, the Sun Grid Engine, that performs this seeming magic.
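A simplified sketch of the partitioning idea follows: one machine is divided into isolated environments, a partition can boot whatever operating system a transaction requires, and resource use is metered and billed per transaction. It illustrates the concept only; it is not the interface of any of the products named above, and all identifiers are invented.

    # A conceptual sketch of per-transaction partitioning and metering.

    class Partition:
        def __init__(self, partition_id):
            self.partition_id = partition_id
            self.os = None
            self.meter = {}                 # transaction id -> CPU seconds used

        def boot(self, os_name):
            # In the products described, a partition can boot an unusual OS
            # just long enough to serve one transaction.
            self.os = os_name

        def run(self, transaction_id, cpu_seconds):
            self.meter[transaction_id] = self.meter.get(transaction_id, 0) + cpu_seconds

        def bill(self, rate_per_cpu_second):
            # Per-transaction billing straight from the meter.
            return {txn: secs * rate_per_cpu_second
                    for txn, secs in self.meter.items()}

    p = Partition("part-01")
    p.boot("Solaris 8")                    # the OS this transaction is qualified on
    p.run("txn-1001", cpu_seconds=4.2)
    p.run("txn-1002", cpu_seconds=1.1)
    print(p.bill(rate_per_cpu_second=0.003))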

According to Darcy Foit, director of IS at Cognigen, the problem that inspired the purchase was the need to optimize execution of a critical program that did not share processor time well. The Grid Engine gave Cognigen's scientists a running view of and access to all processors on their LAN, letting them monitor and schedule their tasks more efficiently. "Since implementation," Foit says, "each scientist has had an average of an extra hour of work time." (Previously that much time was wasted waiting for processors to free up.)
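The gain Foit describes comes from placing each job on whichever processor frees up soonest rather than waiting for a particular machine. The toy scheduler below illustrates that idea; it is not Sun Grid Engine's actual algorithm, and the job names, runtimes and host names are invented.

    # A toy least-loaded scheduler, illustrating the idea only.

    import heapq

    def schedule(jobs, hosts):
        """jobs: list of (name, runtime in hours); hosts: list of host names.
        Returns (host, start_time, job) assignments."""
        # A min-heap keyed on the time each host next becomes free.
        free_at = [(0.0, host) for host in hosts]
        heapq.heapify(free_at)
        plan = []
        for name, runtime in jobs:
            start, host = heapq.heappop(free_at)   # the host that is free soonest
            plan.append((host, start, name))
            heapq.heappush(free_at, (start + runtime, host))
        return plan

    jobs = [("pk-model-a", 3.0), ("pk-model-b", 5.0), ("validation-run", 2.0)]
    for host, start, job in schedule(jobs, ["lan-node-1", "lan-node-2"]):
        print("%s starts at t=%.1f h on %s" % (job, start, host))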

Foit says he is now thinking of taking the natural next step: using the Grid Engine to offer a specialized virtual computing service to external clients. Unlike Gateway, which will talk with anyone, Foit plans to stay within bioinformatics. "Bio companies often need to do validation runs on their computing work," he says, "and perhaps validation by its nature is done best by an independent company."

Daunting Challenges

Those cases might seem like baby steps set against the utility computing utopia - in which any operation has access to any resource - but even they are not without problems. The primary issue for most CIOs will be how much control they lose when renting or borrowing resources instead of owning them, says AmEx's Salow. He notes that he has some concern that either the utility computing vendor or the relationship itself will end up influencing the development of a company's network, perhaps by biasing procurement decisions toward the supplying vendor's products. He says, however, that so far the service, which started in March 2001, has not raised any of those flags.

In some companies, moving procurement out of the capital and into the operating line item might not be simple either. Many CIOs will worry about the security risk of moving critical data onto external machines, though Inpharmatica's Leach thinks the problem is manageable. "I think the security issue is overstated," he says. "Outsourcing is common practice. The United Devices/Gateway facility is just a step along the same road." He says the issue for many companies will be whether to buy an expensive kit that is completely under their control or use a trusted third party and save money. "I think that many small and midsize companies will choose the latter and be more competitive than their larger, more conservative competitors," he says.

One of the tougher transition issues is not technical at all, but cultural and political. A general introduction of charge-by-use will almost inevitably disrupt established chargeback practices. In most organizations, IT has tried to stay out of the highly volatile business of taking money away from people.

"You go into an organization, and IT will tell you, 'We don't do chargebacks - accounting takes care of that.' They're thinking of themselves as technical people, not businesspeople," says Kevin Vitale, CEO of Ejasent, which makes utility computing tools. Vital expects utility computing to change that, not just because it produces a specific, very detailed invoice, but because IT generates the bill. "CIOs are going to have to wrestle with issues of chargeback fairness," he says. "They are going to have to start thinking of information technology as a business resource, not just as something to be kept in running order." Nick van der Zweep, director of utility computing for HP, puts the point this way: "IT people will be people who manage services as opposed to people who work with wires and boxes."

Moving to Utopia

It is worth noting that all this energy is being devoted to merely a halfway version of utility computing. Down the road the utility computing relationship will not be with a certain vendor or a specific mainframe but with the whole Internet. Every piece of the network will participate in a huge free market, buying and selling what it needs as it needs it. CIOs will come to work to find hard drives and RAM that their systems bought through eBay. IT will become a revenue stream, selling itself overnight to buyers in other time zones.

Perhaps that vision is in part a fantasy, though a recent experiment by HP, in which the company implemented its utility computing software on the Grid (a worldwide research network specifically designed to explore ideas in large-scale distributed computing), seems like a big step in that direction. And if it is only a fantasy, it is at least a long-standing one. For years, information scientists have been suggesting that without such a flexible and bottom-up system of provisioning, the growing complexity of networks will eventually consume exponentially larger amounts of resources and management time. Since there seems to be no end to increased complexity, it follows that we will eventually find our way, like it or not, to pure utility computing.