Hacked Opinions: The legalities of hacking – Ian Amit

October 1, 2015
Ian Amit, from ZeroFOX, talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. Now, this second set of discussions will examine security research, security legislation, and the difficult decision of taking researchers to court.

CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.

What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?

Ian Amit (IA): To put it gently, the misconception is that cybersecurity can be regulated or legislated at all. The concept of legislation relies on jurisdiction, and trying to enforce jurisdiction on something as global as cybersecurity just creates an incentive to jump through the resulting loopholes.

What advice would you give to lawmakers considering legislation that would impact security research or development?

IA: I believe that legislators, bearing in mind that they are mostly driven, influenced, and guided by big money (from corporations with very specific interests), are far from understanding the broader implications of potentially restricting advancement and development in the field.

Trying to control such research is counterproductive to progress, and in the long run it damages not just innovation on a global scale, but also the very companies that pushed the legislation to protect themselves in the first place.

If you could add one line to existing or pending legislation, with a focus on research, hacking, or another related security topic, what would it be?

IA: No limitations will be set on activities unless they have a direct negative impact on human society or pose a direct threat to life.

Now, given what you've said, why is this one line so important to you?

IA: Research, hacking, and other security activities are needed in order for us to advance as a society. As such, they should not be limited as long as they do not directly harm society or human life.

Bearing in mind that governments often engage in life-threatening research themselves (weapons development), that one-liner is obviously aimed at individual and corporate research. Corporations will obviously try to use it to protect themselves (arguing that showing how to hack credit cards would have a negative impact on society), so at the end of the day, and in line with my initial commentary (legislation is problematic, period), I'd change it to "don't be an a**hole" and direct it at everyone: researchers, corporations, and governments alike.

That said, I don't believe that would be the right language from a legal standpoint.

Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?

IA: Companies can do whatever they deem necessary to protect their assets and shareholders. However, legal action as I see it is the equivalent of armed conflict: a direct continuation of diplomacy and strategy, which you are supposed to resort to only after exhausting all other channels.

There is an inherent imbalance between individual researchers and corporations, in which each party has the upper hand in a different arena. The researcher can be very agile and vocal about something that can have a major impact on a corporation, while the corporation has resources it can throw at a researcher to intimidate or bury them legally and financially.

Not to draw too many parallels here, but when researchers resort to terror-like tactics, they should expect to be met with a similar counter-action.

However, corporations that have an easy finger on the legal trigger just because they can should be fined and exposed for their scare tactics, especially if they lack the ability to span the spectrum of said diplomacy (which leads directly back to the earlier discussion held here on vulnerability disclosure).

What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?

IA: In my personal opinion, I'd like to see as much sharing as possible: attack data, successful breach information, threat intelligence, and so on.

The government, on the other hand, should make all of that data, in addition to the industry-specific threat intelligence and data it sees, available back to the organizations.

However, this kind of data sharing should be executed in a manner in which the government has no access to the identity of the sharing organization and cannot pursue legal action against it (unless there is some form of gross negligence regarding the requirement to disclose material breaches).

This almost by definition means that the sharing platform will be independent, will anonymize any reported data (for the safety and anonymity of the reporter, as well as of the users of the data), and will not be tied to the government. It will also have to enforce the data-sharing requirements (i.e., if the government does not share data, it does not get any data back).

(www.csoonline.com)

Steve Ragan
