Extortion or fair trade? The value of bug bounties

09.09.2015
A security researcher, sitting on what he claims are 30 flaws in various FireEye products, is demanding the security company pay researchers for vulnerability reports.

The confrontation highlights the challenges organizations face when working with the security research community. 

Kristian Erik Hermansen initially said he tried to work with FireEye to fix the vulnerabilities -- and FireEye ignored him. “I tried for 18 months to work with FireEye through responsible channels, and they balked every time,” he said, according to a recent post on CSO.

Digging into the timeline, it appears Hermansen notified FireEye that he found serious issues, but demanded compensation. Since FireEye didn't have a formal bug bounty program in place, Hermansen refused to provide further details of the issues and insisted the company first implement a program for paying researchers. That was a little more than a year ago. FireEye learned of the details of one of the vulnerabilities along with everyone else when Hermansen posted information on Exploit-DB and Pastebin over the weekend.

FireEye said it has repeatedly reached out to Hermansen over the past year to learn what sort of information he has, but he kept asking about compensation. Hermansen told CSO he won’t talk to FireEye unless the company pays him. The current price tag is set at $10,000 per vulnerability.

Many software vendors find themselves in similar, precarious situations. They want to secure their software, but "do not want to be held at ransom, or have vulnerabilities in their products sold to zero-day brokers,” said Ken Westin, a security analyst with Tripwire.

FireEye was at a distinct disadvantage because it lacked a program for paying researchers for their vulnerability reports. Over the past few years, many companies have started offering such programs, with Google and Facebook as notable examples. After years of publicly stating it wouldn't pay for vulnerabilities, Microsoft finally launched a variety of incentive programs to work with security researchers. The Zero Day Initiative from HP TippingPoint and companies such as Bugcrowd and HackerOne have made it easier to connect security researchers to companies for rewards.

But setting up a bug bounty program isn’t a simple process, and it's even more challenging for large companies with legacy codebases, multiple product lines, and complex ecosystems, said Katie Moussouris, chief policy officer of HackerOne. They need to have processes for developing more secure software, such as doing their own static analysis, threat modeling, fuzzing, and penetration testing their code. In short, the organization needs to adopt a secure development lifecycle or application security program. Developing a vulnerability response program comes later.

“You have to be testing your own code before you can start [a bug bounty program]," said Moussouris. Otherwise, the company winds up paying out for “low-hanging fruit” or issues its own developers could have likely uncovered.

It all boils down to the organization’s security maturity. Regular testing would uncover basic issues and show organizations what to fix so that the same mistakes aren’t introduced over and over again.

Teams need to know how to perform root cause analysis so that they can understand the scope and extent of a vulnerability, assess the risk it poses, and assign a priority so the issue can be fixed. They need to have the time and expertise in-house to process reports coming in from researchers, or they will wind up paying for basic vulnerabilities and duplicate issues.

It’s also important to have transparency in the processing of bug reports. Perhaps the company's investigation is taking longer than expected, or the bug was previously reported by someone else (or discovered internally). Such transparency would clarify why a company decides a report is not really a bug, or why a researcher won’t get a payout because the report duplicated an earlier one.

Smaller companies with a simpler product lineup and infrastructure can skip this first step and use bug bounties to jump-start their security programs, but having an application security plan beforehand is a must for large companies. “You can’t just jump in,” Moussouris said.

A growing consensus is emerging that companies must offer bug bounties if they want to have secure products.

Oracle’s CSO was rebuked by researchers for calling bug bounties “the new boy band” in early August. “Many companies are screaming, fainting, and throwing underwear at security researchers to find problems in their code and insisting that This Is the Way, Walk In It: if you are not doing bug bounties, your code isn’t secure,” Mary Ann Davidson wrote in the blog post, which has since been removed.

Moussouris noted there are many ways to work with security researchers, and the public bug bounty program, where researchers submit reports and get paid for their efforts, is only one model. The important detail is to time the program so that researchers submit reports when they are most useful for developers. Fixing code after it is released is expensive, so asking researchers to test the code during the beta period would be far cheaper and more effective. This model may appeal to some companies more than a traditional bug bounty program.

For example, Microsoft invited researchers (under Moussouris’ watch) to submit reports for the last version of Internet Explorer before it was production ready. Another approach is to not offer bounties for existing vulnerabilities, but to ask for defensive techniques the company can use in future products. Microsoft, again with Moussouris, launched the BlueHat Prize to work with researchers on mitigation bypasses for memory-based attacks.

“Ask [the researchers], ‘Here’s a tough problem, help me solve it,’” Moussouris said.

FireEye sells appliances, so it faces a challenge in establishing a bounty program most software companies don’t have to deal with: getting the right version of the product to the researchers. It would be too expensive to provide researchers with the latest appliances to find bugs.

FireEye said the vulnerability Hermansen disclosed was present only in the legacy HX product and has already been fixed in the current version. The company knows of only a handful of customers who are still on the older product, and all other customers are unaffected. Making sure the researchers have the right product is critical, but at the moment, this is sustainable only for cloud platforms and software makers.

That doesn’t mean companies with hardware products can’t work with the security researcher community. Moussouris discussed other models, such as invite-only programs, where the company specifically asks top-tier researchers to provide their expertise instead of running a public program anyone can participate in. This way, the company gets high-quality reports about issues it considers high-risk. An invite-only program is one way to direct resources toward uncovering “actual issues you are worried about,” Moussouris said.

There are lots of problems with this FireEye/Hermansen stand-off, but the whiff of extortion surrounding the controversy is perhaps the most distasteful. Hermansen’s desire to get FireEye to compensate security researchers for their time and skills makes him a hero to researchers who want companies to do more than merely credit them for a discovery at the bottom of vulnerability notices. But his current tactics, and comments on Twitter offering to sell the vulnerabilities to others for a price, may make companies even more reluctant to engage with security researchers.

Information security relies on companies, researchers, and customers all working together. To some degree, FireEye put itself in a vulnerable position by neglecting to establish an incentive program. But high-profile bullying tactics help no one.

(www.infoworld.com)

Fahmida Y. Rashid