Software Security

The Big Fix

Oct. 14, 2002, by Scott Berinato
Software is insecure, buggy and unreliable. For years, customers have been fobbed off with miserable quality. Now change is in sight.

Source: CSO, USA

LET'S START WHERE conversations about software usually end: Basically, software sucks.

In fact, if software were an office building, it would be built by a thousand carpenters, electricians and plumbers. Without architects. Or blueprints. It would look spectacular, but inside, the elevators would fail regularly. Thieves would have unfettered access through open vents at street level. Tenants would need consultants to move in. They would discover that the doors unlock whenever someone brews a pot of coffee. The builders would provide a repair kit and promise that such idiosyncrasies would not exist in the next skyscraper they build (which, by the way, tenants will be forced to move into).

Strangely, the tenants would be OK with all this. They'd tolerate the costs and the oddly comforting rhythm of failure and repair that came to dominate their lives. If someone asked, "Why do we put up with this building?" shoulders would be shrugged, hands tossed and sighs heaved. "That's just how it is. Basically, buildings suck."

The absurdity of this is the point, and it's universal, because the software industry is strangely irrational and antithetical to common sense. It is perhaps the first industry ever in which shoddiness is not anathema - it's simply expected. In many ways, shoddiness is the goal. "Don't worry, be crappy," Guy Kawasaki wrote in 2000 in his book, Rules for Revolutionaries: The Capitalist Manifesto for Creating and Marketing New Products and Services. "Revolutionary means you ship and then test," he writes. "Lots of things made the first Mac in 1984 a piece of crap - but it was a revolutionary piece of crap."

The only thing more shocking than the fact that Kawasaki's iconoclasm passes as wisdom is that executives have spent billions of dollars endorsing it. They've invested - and reinvested - in software built to be revolutionary and not necessarily good. And when those products fail, or break, or allow bad guys in, the blame finds its way everywhere except to where it should go: on flawed products and the vendors that create them.

"We've developed a culture in which we don't expect software to workwell, where it's OK for the marketplace to pay to serve as betatesters for software," says Steve Cross, director and CEO of theSoftware Engineering Institute (SEI) at Carnege Mellon University. "Wejust don't apply the same demands that we do from other engineeredartifacts. We pay for Windows the same as we would a toaster, and weexpect the toaster to work every time. But if Windows crashes, well,that's just how it is."

Application security - until now an oxymoron of the highest order, like "jumbo shrimp" - is why we're starting here, where we usually end. Because it's finally changing.

A complex set of factors is conspiring to create a cultural shift away from the defeatist tolerance of "that's just how it is" toward a new era of empowerment. Not only can software get better, it must get better, say executives. They wonder, Why is software so insecure? and then, What are we doing about it?

In fact, there's good news when it comes to application security, but it's not the good news you might expect. Rather, application security is changing for the better in a far more fundamental and profound way. Observers invoke the automotive industry's quality wake-up call in the '70s. One security expert summed up the quiet revolution with a giddy, "It's happening. It's finally happening."

Even Kawasaki seems to be changing his rules. He says security is a migraine headache that has to be solved. "Don't tell me how to make my website cooler," he says. "Tell me how I can make it secure."

"Don't worry, be crappy" has evolved into "Don't be crappy." Softwarethat doesn't suck. What a revolutionary concept.

Why Is Software So Insecure?

Software applications lack viable security because, at first, they didn't need it. "I graduated in computer science and learned nothing about security," says Chris Wysopal, technical director at security consultancy @Stake. "Program isolation was your security."

The code-writing trade grew up during an era when only two things mattered: features and deadlines. Get the software to do something, and do it as fast as possible. Cyra Richardson, a developer at Microsoft for 12 years, has written code for most of the company's major pieces of software, including Windows 3.1. "The measure of a great app then was that you did the most with the fewest resources" - memory, lines of code, development hours, she says. So no one built secure applications, but no one asked for them either. Windows 3.1 was "a program made up almost entirely of customers' grassroots demands for features to be delivered as soon as possible," Richardson recalls.

Networking changed all that. It allowed someone to hack away at your software from somewhere else, mostly undetected. But it also meant that more people were using computers, so there was more demand for software. That led to more competition. Software vendors coded frantically - under the insecure pedagogy - to outwit competitors with more features sooner. That led to what one software developer called "featureitis." Inflammation of the features.

Now, features make software do something, but they don't stop it from unwittingly doing something else at the same time. E-mail attachments, for example, are a feature. But e-mail attachments help spread viruses. That is an unintended consequence - and the more features, the more unintended consequences.

As networking spread and featureitis took hold, some systems were compromised. The worst case was in 1988 when a graduate student at Cornell University set off a worm on the ARPAnet that replicated itself to 6,000 hosts and brought down the network. At the time, events like that were the exception.

By 1996, the Internet supported 16 million hosts. Application security - or, more specifically, the lack of it - turned exponentially worse. The Internet was a joke in terms of security, easily compromised by dedicated attackers. Teenagers were cracking anything they wanted to: NASA, the Pentagon, the Mexican finance ministry. The odd part is, while the world changed, software development did not. It stuck to its features/deadlines culture despite the security problem.

Even today, the software development methodologies most commonly used still cater to deadlines and features, and not security. "We have a really smart senior business manager here who controls a large chunk of this corporation but hasn't a clue what's necessary for security," says an information security officer at one of the largest financial institutions in the world. "She looks at security as, Will it cost me customers if I do it? She concludes that requiring complicated, alphanumeric passwords means losing 12 percent of our customers. So she says no way."

Software development has been able to maintain its old-school, insecure approach because the technology industry adopted a less-than-ideal fix for the problem: security applications, a multibillion-dollar industry's worth of new code to layer on top of programs that remain foundationally insecure. But there's an important subtlety. Security features don't improve application security. They simply guard insecure code and, once bypassed, can allow access to the entire enterprise.

That's triage, not surgery. In other words, the industry has put locks on the doors but not on the loading dock out back. Instead of securing networking protocols, firewalls are thrown up. Instead of building e-mail programs that defeat viruses, antivirus software is slapped on.

When the first major wave of Internet attacks hit in early 2000, security software was the savior, brought in at any expense to mitigate the problem. But attacks kept coming, and more recently, security software has lost much of its original appeal. That - combined with a bad economy, a new focus on national security, pending regulation that focuses on securing information and sheer fatigue from the constant barrage of attacks - spurred CSOs to think differently about how to fix the security problem.

In addition, a bevy of new research was published that proves there is an ROI for vendors and users in building more secure code. Plus, a new class of software tools was developed to automatically ferret out the most gratuitous software flaws.

Put it all together, and you get - ta da! - change. And not just change, but profound change. In technology, change usually means more features, more innovation, more services and more enhancements. In any event, it's the vendor defining the change. This time, the buyers are foisting on vendors a better kind of change. They're forcing vendors to go back and fix the software that was built poorly in the first place. The suddenly efficacious corporate software consumer is holding vendors accountable. He is creating contractual liability and pushing legislation. He is threatening to take his budget elsewhere if the code doesn't tighten up. And it's not just empty rhetoric.

Mary Ann Davidson, CSO at Oracle, claims that now "no one is asking for features; they want information assurance. They're asking us how we secure our code." Adds Scott Charney, chief security strategist at Microsoft, "Suddenly, executives are saying, We're no longer just generically concerned about security."

So What Are We Doing About It?

Specifically, all this concern has led to the empowerment of everyone who uses software, and now they're pushing for some real application security. Here are the reasons why.

Vendors have no excuse for not fixing their software because it's not technically difficult to do. For anyone who bothers to look, the numbers are overwhelming: 90 percent of hackers tend to target known flaws in software. And 95 percent of those attacks, according to SEI's Cross, among other experts, exploit one of only seven types of flaws. So if you can take care of the most common types of flaws in a piece of software, you can stop the lion's share of those attacks. In fact, if you eliminate the most common security hole of all - the dreaded buffer overflow - Cross says you'll scotch nearly 60 percent of the problem right there.

"It frustrates me," says Cross. "It was kind of chilling when werealized half-a-dozen vulnerabilities were causing most of theproblems. And it's not complex stuff either. You can teach anyfreshman compsci student to do it. If the public understood that,there would be an outcry."

SEI and others such as @Stake are shining a light on these startling facts (and making money in doing so). It has started to have an effect. Wysopal at @Stake says he's seeing more empowered and proactive customers, and in turn, vendors are desperately seeking ways to keep those empowered customers.

"It's been a big change," he says. "We still get a lot of [customerssaying], We're shipping in a week. Could you look at the app and makesure it's secure? But we're seeing more clients sooner in thedevelopment process. Security always was the thing that delayedshipment, but they've started to see the benefits - bettercommunication between developers, creating more robust applicationsthat have fewer failures. The truth is, it doesn't take that muchlonger to write a line of code that doesn't have a buffer overflowthan one that does. It's just building awareness into the process sothat, eventually, your developers simply don't write buffers withunbounded strings."

In fact, it's a little more complicated than that. Even if, starting tomorrow, no new programs contained buffer overflows (and, of course, it will take years of training and development to minimize buffer overflows), there are billions of lines of legacy code out there containing 300 variations on the buffer-overflow theme. What's more, in a program with millions of lines of code, there are thousands of instances of buffer overflows. They are needles in a binary haystack.

Fortunately, some enterprising companies have built tools that automate the process of finding the buffers and fixing the software. The class of tool is called secure scanning or application scanning, and the effect of such tools could be profound. They will allow CSOs to, basically, audit software. They've already become part of the security auditing process, and there's nothing to stop them from becoming part of the application sales process too. Wysopal tells the story of a CSO who brought him a firewall for vulnerability testing and scanning. When a host of serious flaws were found, the customer literally sent the product back to the vendor and, in so many words, said, If you want us to buy this, fix these vulnerabilities. To preserve the sale, the vendor fixed the firewall.
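As a rough illustration of the idea behind such scanning tools (only a toy, and not how commercial scanners actually work), the sketch below walks a C source file line by line and flags calls to library functions that perform no bounds checking. Real scanners parse the code and track how data flows into those calls; the principle, though, is the same: make the known-dangerous patterns impossible to overlook.

/* Toy illustration of a source scanner: flag lines that call
   functions with no bounds check. Real tools do far more. */
#include <stdio.h>
#include <string.h>

static const char *risky[] = { "gets(", "strcpy(", "strcat(", "sprintf(" };

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.c\n", argv[0]); return 1; }

    FILE *src = fopen(argv[1], "r");
    if (!src) { perror(argv[1]); return 1; }

    char line[1024];
    for (int lineno = 1; fgets(line, sizeof(line), src); lineno++)
        for (size_t i = 0; i < sizeof(risky) / sizeof(risky[0]); i++)
            if (strstr(line, risky[i]))
                printf("%s:%d: possible unbounded call: %s\n",
                       argv[1], lineno, risky[i]);

    fclose(src);
    return 0;
}

Run across a source tree, even something this crude turns a needles-in-a-haystack problem into a finite worklist, which is why such tools slot naturally into an audit.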

Strong contracts are making software better for everyone. According to @Stake research, vendors should realize that there's an ROI in designing security into software earlier rather than later. But Wysopal believes that's not necessarily the only motivation for companies to improve their code's safety. "I think they also see the liability coming," he says. "I think they see the big companies building it into contracts."

A contract GE signed with software vendor General Magic Inc. earlier this year has security officers and experts giddy and encouraged by its language. In essence it holds General Magic fully accountable for security flaws and dictates that the vendor pay for fixing the flaws.

General Magic officials say they weren't surprised by the language in the contract, but many experts say the company has to be pretty confident in its products to sign off. The effect of the contract, though, is to improve software in general. The vendor must make secure applications - or fix them so they're secure - to conform to its contract with a customer, but that makes the software better for everyone.

Clout is not limited to the Fortune 500. Sure, it's easy for GE to write such a contract, given that GE is part of the Fortune 2. And there's nothing wrong with CSOs benefiting from GE's clout - the corporate equivalent of drafting in auto racing.

But there are other ways for CSOs at companies smaller than GE (which is everyone but Wal-Mart) to force the issue with vendors. One can join the Sustainable Computing Consortium at Carnegie Mellon University, and the Internet Security Alliance, formed under the Electronic Industry Alliance. The interest groups help companies of all sizes band together on standardizing contract language and best practices for software development.

Some are taking satisfaction in a good old-fashioned boycott, even if they are so small as to escape the vendor's notice. Newnham College at the University of Cambridge in England, with 700 users, recently banned Microsoft's Outlook from use on campus because of the virus problem.

Much of the clout CSOs gain will come from the market evolving. In a sense, the software makers create clout for the CSO by asking her to deploy the product for ever more critical business tasks. At some point, the potential damage an insecure product could inflict will dictate whether it will be purchased.

"Two years ago, the marketing strategy was to just get it out there.And some of the stuff that went out was really insecure," says theanonymous ISO at the large financial institution. "But now, we justsay, applications don't go live without security. It's asledgehammer."

And it's not a randomly wielded one either. His company has created a formal process to assess vendors' applications and his own company's software development as well. It includes auditing and penetration testing, and the vendors' conforming to overarching security criteria, such as eliminating buffer overflows and so forth. It's not unusual, the security officer says, for his group to spend $40,000 per quarter testing and breaking a single application.

"Customers are vetting us," says Davidson. "Not just kicking thetires, but they're asking how we handle vulnerabilities. Where is ourcode stored? Do we do regression testing? What are our secure codingstandards? It's impressive, but it's also just plain necessary.

"They have to be demanding. If customers don't make security a basiccriteria, they lose their right to complain in a lot of ways whenthings go bad," she says.

At the bank, the security officer says, is a running list of vendors that are "certified" - that is, they've successfully met the application security criteria by going through the formal process. The list is incentive for vendors to clean up their code, because if they're certified, they have an advantage over those that aren't the next time they want to sell software. Vendors, he says, "have either gone broke trying to satisfy our criteria, or they run through the operation pretty well. A few see what we demand and just run away. But there doesn't seem to be any middle ground."

The government is taking an active role. The image of the government in security is that of a clumsy organization tripping over its own red tape. But right now, at least in terms of application security, the government is a driving force, and the government's efforts to improve software are making a joke of the private sector.

In fact, no industry has been more effective in the past year at pushing vendors into security or using its clout (often, that comes in the form of regulation) to effect change.

At the state level, legislatures have collectively ignored the Uniform Computer Information Transactions Act (UCITA), a complex law that would in part reduce liability for software vendors (most major vendors have backed UCITA).

Federally, money has poured into the complex skein of agencies dealing with critical infrastructure protection, which has taken on a life of its own since 9/11. Equally important but not as well publicized, the feds fully implemented in July the National Security Telecommunications Information Systems Security Policy no. 11, called NSTISSP (pronounced nissTISSip), after a two-year phase-in. The policy dictates that all software that's in some way used in a national security setting must pass independent security audits before the government will purchase it.

The government has for more than a decade tried to implement such a policy, but it has been put off. Vendors have routinely been able to receive waivers through loopholes in order to avoid the process. The July move is considered a line in the sand. With national security on everyone's mind, experts believe waivers will be harder to come by. The Navy is telling kvetching vendors to use NSTISSP no. 11 as a way to gain a competitive advantage. At any rate, products will have to be secured, or the government won't buy them. Like GE's contract, this makes software better for everyone.

The ability of the public sector to whip vendors into shape on application security is best represented, though, by John Gilligan, CIO of the Air Force, who in March told Microsoft to make better products or he'll take his $6 billion budget elsewhere. It was a challenge by proxy to all software vendors. At the time, Gilligan said he was "approaching the point where we're spending more money to find patches and fix vulnerabilities than we paid for the software." And he wasn't shy about labeling software security a "national security issue."

Microsoft Chief Security Strategist Charney called himself a "nudge and a pest by nature," and he may have found his counterpart in Gilligan, who in addition to mobilizing the Air Force is encouraging other federal agencies to use similar tactics. Gilligan says he was encouraged by Bill Gates's notorious "Trustworthy Computing" memo - his mea culpa proclamation in January that Microsoft software must get more secure - but that "the key will be, what's the follow-through?"

Nudging Vendors

Gilligan is right, and clever, to invoke patches as a major part of his problem. If a vendor is not convinced that securing applications is a good idea after getting proof of an ROI from securing applications early, or after gaining the favor of large customers by submitting to a certification process or to a contract with strong language, then patches might do the trick.

Patches are like ridiculously complex tourniquets. They are the terrible price everyone - vendors and CSOs alike - pays for 30 years of insecure application development. And they are expensive. Davidson at Oracle estimates that one patch the company released cost Oracle $1 million. Charney won't estimate. But what's clear is that the economics of patching is quickly getting out of hand, and the vendors appear to be motivated to ameliorate the problem.

At Microsoft, it starts with security training, required for all Microsoft programmers as a result of Gates's memo. Michael Howard, coauthor of Writing Secure Code, and Steve Lipner, manager of Microsoft's security center (Patch Central), are running the effort to make Microsoft software more secure.

The training establishes new processes (coding through defense in depth, that is, writing your piece of code as if everything around your code will fail). It sets new rules (security goals now go in requirements documents at Microsoft; insecure drivers are summarily removed from programs, a practice that Richardson says would have been heresy not long ago). And it creates a framework for introducing Microsoft teams to the concept of managed code (essentially, reusable code that comes with guarantees about its integrity).
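What "writing your piece of code as if everything around your code will fail" means in practice can be shown with a small, hypothetical C function. The sketch is illustrative only and is not drawn from Microsoft's training materials: every input, every system call and every read is treated as something that might go wrong, and each failure is reported rather than ignored.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Defense in depth at the single-function level: assume the caller,
   the file system and the data can all fail, and check each one.
   Reads at most 'cap' bytes of a config file into a caller-owned buffer. */
int read_config(const char *path, char *out, size_t cap)
{
    if (path == NULL || out == NULL || cap == 0)   /* distrust the caller */
        return -EINVAL;

    FILE *f = fopen(path, "r");
    if (f == NULL)                                 /* distrust the environment */
        return -errno;

    size_t n = fread(out, 1, cap - 1, f);
    out[n] = '\0';                                 /* always terminate */

    if (ferror(f)) {                               /* distrust the read itself */
        fclose(f);
        return -EIO;
    }
    fclose(f);
    return (int)n;
}

int main(void)
{
    char buf[256];
    int n = read_config("/etc/hostname", buf, sizeof(buf));
    if (n < 0)
        fprintf(stderr, "read_config failed: %s\n", strerror(-n));
    else
        printf("read %d bytes\n", n);
    return 0;
}

Each check is cheap on its own; taken together they mean a failure anywhere around the function degrades into an error code instead of undefined behavior.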

A year and several hundred million dollars later, it's still not clear if the two-day security training for Microsoft's developers is giving them a fish, or teaching them to fish. Richardson seems to believe the latter. She says the training starts with "religion, apple pie and how-we-have-to-save-America speeches." And, she says, it includes at least one tough lesson: "You can't design secure code by accident. You can't just start designing and think, Oh, I'll make this secure now. You have to change the ethos of your design and development process. To me, the change has been dramatic and instant."

Among Microsoft customers, the reaction is more muted. Since Gates's proclamation, gaping security holes have been found in Internet Information Server 5.0, reminding the world that legacy code will live on. Even the company's gaming console, Xbox, was cracked - indicating the pervasiveness of the insecure development ethos and how hard it will be to change.

Microsoft also faces an extremely skeptical community of CSOs and other security watchdogs. Don O'Neill, executive vice president for the Center for National Software Studies, says, "When it comes to trustworthy software products, Microsoft has forfeited the right to look us in the face."

So let's end where conversations about application security usually begin: Microsoft.

Richardson's reaction to Gates's memo was not much different than anyone else's. "I wondered how much of this was a marketing issue compared with a real consumer issue," she says.

The memo has become a reference point in the evolution of application security - the event cited as the start of the current sea change. In truth, the tides were turning for a year or more, and if a date must be given, it would be Sept. 18, 2001, one week after 9/11 and the day that the Nimda virus hit. Microsoft's entering the fray - as it did with the Internet in 1995, also via a memo - is more an indication that the latecomers have arrived, a sort of cultural quorum call.

It was, "We're all here so let's get started," the beginning of theera of application security as a real discipline, and not an oxymoron.