Hacked Opinions: The legalities of hacking – Jeff Williams

October 13, 2015
Contrast Security's Founder and CTO, Jeff Williams, talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. Now, this second set of discussions will examine security research, security legislation, and the difficult decision of taking researchers to court.

CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.

What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?

Jeff Williams (JW): Lawmakers tend to think politically, which can lead to surprisingly unsophisticated thinking about systemic problems like cybersecurity, crumbling infrastructure, and global warming. The knee-jerk reaction to something unpopular is to create a legal regime that targets the bad actor. This might take the form of tort law, hacking back, financial penalties, and so on.

The misconception — the faulty assumption — is that we can accurately identify these bad actors. Even in the highly publicized cases, like Sony and OPM, this so-called “attribution” problem is extremely labor intensive and doesn’t produce compelling results. Given this, legislation targeting attackers is almost certainly ineffective political theater that won’t actually protect anyone.

Even focusing on information sharing reeks of this bias. The concept is that if we can simply share information about attacks and respond faster, then we can win. It's a sort of military thinking. But when the enemy is completely anonymous and untraceable, this strategy is doomed to failure.

Legislators would be much better served by focusing on encouraging better defenses. I think this is best achieved by creating visibility rather than creating liability for “less than rigorous” software development.

The idea is to fix the “asymmetric information” problem in the software market, and encourage the market to produce strong code rather than attempt (and certainly fail) to legislate or regulate it.

What advice would you give to lawmakers considering legislation that would impact security research or development?

JW: Security research and development is a critical part of the cybersecurity ecosystem.  Rather than worrying about the small amount of potential harm that this research might cause (which is real), they should focus on the enormous downside of preventing this research from being performed.

Currently, security research drives many of the processes that our companies and agencies use to keep themselves secure. Although there are only a small number of researchers and they are only testing a tiny fraction of the software and devices on the market, their work pushes vendors and organizations alike to do better.

Preventing these researchers from doing their work would have a huge ripple effect, causing widespread insecurity across the country.

Again, the government should do everything it can to make security visible. This enables everyone, from producer to consumer to evaluator to buyer and end user, to make informed decisions about what level of security they want. Enabling the market to work is the only way to improve cybersecurity at a planetary scale.

If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topics, what would it be?

JW: I’m going to focus on the “Breaking Down Barriers to Innovation Act of 2015,” which attempts to fix some of the problems with the broad language in the Digital Millennium Copyright Act (DMCA) Section 1201, which prevents anyone from “circumventing” any “technological measure” that “effectively” controls access to a copyrighted work, and from selling hardware or software tools that can break or bypass DRM.

The problem with this rule is that it’s frequently used to threaten security researchers and prevent innovative (if unexpected) uses of technology.

This proposed legislation helps to fix some of these problems, but doesn’t go nearly far enough.  Fundamentally, I believe the public has an incredibly important interest in the software that they use.  Yet they are prevented from checking that software to see if it is something that they can trust.  The recent Jeep Cherokee and Volkswagen incidents are incredibly compelling reasons to empower everyone to do their own analysis.

So I would love to add this line to the “Breaking Down Barriers to Innovation Act of 2015”:

“Notwithstanding anything in the DMCA, the prohibition against circumventing a technological measure shall only apply to circumvention carried out in furtherance of infringement of a protected work.”

Now, given what you've said, why is this one line so important to you?

JW: This is important because it would allow anyone, researchers and the general public alike, to do their own security verification of the software to which they entrust everything (finances, healthcare, privacy, defense, even happiness).

The DMCA was designed primarily to prevent people from stealing content from DVDs. Making this change wouldn’t prevent the law from being used for its intended purposes.

Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?

JW: Of course not. There’s very little chance that this type of action will either 1) prevent the research from being revealed, or 2) recoup any losses related to it.

However, it’s a lock that the issue will generate a great deal of negative PR, so that even more people eventually find out about it. You also risk a retaliatory strike by groups like Anonymous and LulzSec, such as the one they launched against Sony after it legally pursued George Hotz (Geohot) for hacking the PlayStation.

If a researcher has discovered something, then you should respond as though the “bad guys” already know about it. Take responsibility, fix the issue, and get great PR for your security program. Use this as an opportunity to build trust with your clients.

What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?

JW: The short answer is, “it doesn’t matter.” The idea here is that government-facilitated sharing of information about attacks will enable a sort of “herd immunity.”

Well, that might work if we were actually any good at detecting attacks, but it turns out most attacks go on for months or years before being detected. So the vast majority of these attackers are never going to get identified by the first company, which means that sharing information about the attack won't help.

There's nothing wrong with sharing a little information, but don't think that this is going to make us any more resilient against attack. The information shared consists of IP addresses and domains of suspected attackers and compromised computers. According to the most recent Verizon Data Breach Investigations Report (the 2015 DBIR), a lot of this information comes from honeypots.

These are systems placed on the Internet to trick hackers into attacking them, thereby gathering information about their sources and methods. Real attackers are likely to focus their attacks, not blindly scan the Internet. So their information is unlikely to show up in the honeypots, and therefore won't get shared.
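To make the mechanism concrete, here is a minimal sketch of the honeypot idea in Python. It is an illustration under stated assumptions, not a production honeypot: the port number and log file name are arbitrary choices, and real honeypots (such as those feeding the DBIR data) emulate full services and capture payloads rather than just logging connections.

```python
# Minimal honeypot sketch: a bare TCP listener on an otherwise unused port.
# Nothing legitimate should ever connect here, so every connection is
# treated as suspect and its source address is logged.
import datetime
import socket

LISTEN_PORT = 2222         # illustrative: any port no real service uses
LOG_FILE = "honeypot.log"  # illustrative log destination


def run_listener() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            # Record who knocked; a fuller honeypot would also capture payloads.
            with open(LOG_FILE, "a") as log:
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{src_ip}:{src_port}\n")
            conn.close()


if __name__ == "__main__":
    run_listener()
```

The limitation Williams describes falls straight out of the design: the log fills up with indiscriminate scanners, while a targeted attacker who never touches the decoy leaves no trace in it.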

All the arguing about information sharing legislation is not a total waste, but almost. The crisis really isn't that we're not sharing information; the crisis is that we have huge numbers of systems that are essentially unprotected against cyber attack. We need to create a software market that rewards organizations that put appropriate protections in place. That's a problem we are never going to solve with information sharing.

But there is a role for government in fixing the dreadful state of the software market. Currently, security is an afterthought. We need to change the incentives so that software companies are encouraged to produce secure code. I'm not a fan of liability or taxation regimes. How about some legislation that requires companies to disclose some basic facts about the security of their software?

Things like: was security testing done, were developers trained in security, are basic defenses in place, and are components free of known vulnerabilities? Many other industries have labels and data sheets that disclose this kind of information. Why not software? This is a powerful, non-intrusive way for government to help fix the security of our nation's infrastructure.
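As a thought experiment, such a disclosure could even be machine readable, so buyers and evaluators could compare products automatically. The sketch below is purely hypothetical; the schema and field names are invented here for illustration and are not proposed anywhere in the interview.

```python
# Hypothetical "security facts label": a machine-readable record answering
# the basic questions listed above. The schema is invented for illustration.
import json
from dataclasses import asdict, dataclass


@dataclass
class SecurityFactsLabel:
    product: str
    security_testing_performed: bool   # was security testing done?
    developers_security_trained: bool  # were developers trained in security?
    basic_defenses_in_place: bool      # e.g., input validation, encryption in transit
    components_with_known_vulns: int   # components carrying known vulnerabilities


label = SecurityFactsLabel(
    product="ExampleApp 1.0",          # hypothetical product name
    security_testing_performed=True,
    developers_security_trained=True,
    basic_defenses_in_place=True,
    components_with_known_vulns=0,
)

print(json.dumps(asdict(label), indent=2))
```

Like a nutrition label, the value is not in any single field but in making the same basic facts visible across every product on the market.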

(www.csoonline.com)

Steve Ragan
