Five myths (debunked) about security and privacy for the Internet of Things

26.01.2015
The Internet of Things (IoT) holds great promise for a more intelligent, efficient, safe and even anticipatory means of human adaptation to the environment, whether natural or manmade.

IoT has the potential to improve so many facets of life that the list is endless. Its primary advance is the interconnection of "things" and the insights and synergies that result. Yet that same connectedness raises security and privacy concerns that must be addressed. To advance the evolving discussion on IoT security and privacy, I cite five "myths." Rather than accept or dismiss them, I believe they deserve careful consideration.

Myth #1: More security means less privacy, and vice versa.

I participated in the IEEE Summit on Internet Governance in December 2014 in Brussels where some suggested that we're dealing with security "versus" privacy. We're not. We should address security "and" privacy. I believe IEEE provides a real service to the global community by promoting that approach. These two concepts go hand in hand. Technically, they have commonalities. They enhance each other.

In terms of similarities, both concepts are about confidence in the way things work. Whatever thing or process people are interacting with, they want to have confidence that that's the thing or process they're getting. People want confidence there's not some nefarious agent -- human or machine -- that compromises their expectations about how a thing or process performs.


To contrast the two concepts, privacy is more about providing information into a system and not being personally harmed by doing so. Privacy stems more from an IoT user's perspective. Security is about creating value and protecting that value. It's often from the provider's point of view, but it can also be from the point of view of users, if they're receiving value from a system in return for their participation. A smart meter on a home, which records energy use in granular fashion, can provide value to both user and provider -- as long as the user's privacy remains intact and the data on billing and system health remain secure for the provider.

Technically, security and privacy have commonalities. Both rely on encryption, for instance. Methodical design processes will help ensure their protection. And both suffer the same sorts of failures. Engineers who design software or systems without a sense of how adversaries think can overlook exploitable aspects of the design.
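As one concrete illustration of that commonality, here is a minimal sketch using Python's third-party cryptography package (my choice of library and data, not anything named in this article) in which a single authenticated-encryption primitive does double duty: the ciphertext keeps a meter reading private, and the built-in integrity check rejects tampered or forged data, which is a security property.

```python
# Illustrative only: this assumes the third-party "cryptography" package
# (pip install cryptography); the meter reading is a hypothetical example.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # secret shared by device and service
cipher = Fernet(key)

reading = b'{"meter_id": "A17", "kwh": 3.2}'   # hypothetical smart-meter sample
token = cipher.encrypt(reading)        # privacy: outsiders cannot read the usage data

try:
    plaintext = cipher.decrypt(token)  # security: tampering raises InvalidToken
except InvalidToken:
    plaintext = None                   # reject modified or forged readings
```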

Similarly, because individual components of IoT will be parts of systems of systems, the original authors of a component may not consider the security and privacy implications as their component interacts with other components and systems. For instance, researchers have established -- as has the U.S. Food and Drug Administration (FDA) -- that a number of personal medical devices (PMDs) have encryption flaws, which threaten the security of the devices and the data they record and, in some cases, transmit, as well as compromising the privacy of the individuals using them.

So this myth is a false dichotomy, and that can lead to false choices. Taking a security "versus" privacy view doesn't allow the technical community to accurately describe the choices society has to explore as it determines practical levels of security and privacy. As we know from traditional IT, we can have 100 percent security with no functionality. So a cost-effective, practical tradeoff must be found.

Myth #2: Existing IT security and privacy concepts and practices are sufficient to meet IoT challenges.

I'm originally a theoretician in computer science and, from a theoretical viewpoint, we know how to make things secure and private. But we don't know how to do that efficiently. Priorities always matter. Do we want to spend $1 billion to secure the local bookstore's website? No. Nuclear warheads? That sounds like a good investment and would be a decision for policymakers. If we desire greater security, the price curve can be pretty steep. In the private sector, an enterprise faces practical tradeoffs: do I want to pay for a security feature or a revenue-generating feature? It's not either/or, but there's a tradeoff.

Returning to the efficiency challenge: because IoT is expected to connect unimaginable numbers of devices and systems, we need to become exponentially more efficient in our security and privacy practices.

Vint Cerf, a "father of the Internet" and chief Internet evangelist at Google, spoke at the Brussels meeting, and he reviewed why he and his colleagues settled on 32-bit Internet Protocol (IP) addresses for the Internet. In a back-of-the-envelope calculation, they found that 2 billion to 4 billion IP addresses might eventually be needed, and 32-bit addresses seemed more than adequate. In future decades, we expect every person to potentially have hundreds or thousands of associated IP-addressed objects. So, orders of magnitude more complexity require orders of magnitude more efficiency. If we're going to scale up to trillions of objects, even a penny an object is too expensive.
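To make that back-of-the-envelope arithmetic concrete, here is a short, purely illustrative Python sketch; the device counts below are assumptions chosen for scale, not forecasts from this article.

```python
# Illustrative arithmetic only; the object counts are assumptions, not forecasts.
ipv4_addresses = 2 ** 32                   # ~4.29 billion, the ceiling Cerf's team accepted
ipv6_addresses = 2 ** 128                  # the kind of headroom an IoT-scale Internet needs

people = 8_000_000_000
objects_per_person = 1_000                 # "hundreds or thousands" of IP-addressed objects
iot_objects = people * objects_per_person  # 8 trillion addressable things (hypothetical)

cost_per_object = 0.01                     # "even a penny an object"
total_cost = iot_objects * cost_per_object

print(f"IPv4 address space: {ipv4_addresses:,}")
print(f"IoT objects:        {iot_objects:,}")
print(f"Cost at 1 cent each: ${total_cost:,.0f}")   # $80 billion -- per-object costs must shrink
```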

Myth #3: Cyber security today is a well-established, mature science that addresses most IoT concerns.

In testifying before Congress in 2012, I said: "The science of cyber security is still in its infancy." The emphasis here should be on the term "science," in terms of an evidence-based foundation for our concepts and practices. I also addressed this in my talk, "Realities & Dilemmas in Cyber Security & Privacy," at Oxford University in December.

One area that needs to be explored: we don't have good cyber-domain models of human user behavior. What drives us to make good -- or poor -- security and privacy decisions? That's critical, because humans are involved in every element of the IoT, including its design, implementation, operation, deployment, maintenance, use and decommissioning.

With humans so integral to the Internet and IoT, we'd better understand ourselves in a scientific fashion. We simply haven't developed scientifically valid models. How do we model user behaviors? How do we model engineers' thought processes when they create these systems? How do we model the institutions created by humans that will operate in an IoT world? How do we model an adversary's mindset and behavior to protect such a potentially large attack surface?

The challenge here is that human behavior doesn't have a closed form like math. Encryption, for instance, has a nice, neat, closed form, in terms of how it describes a problem and how it provides a solution. Science is a good way to deal with systems -- like human behavior -- that don't have closed forms. I'm aware that astronaut and pilot behavior has been modeled to streamline spacecraft and jet controls. Digital advertising companies have done online human behavior monitoring for years, with some controversy over privacy issues. Biologists are modeling the behavior of cells. But in the broader, everyday realm of ordinary people, as they interact with IoT, we've only just begun.

I have called for the creation of public-private partnerships that can store and analyze cyber incidents to determine what happened. Human behavior clearly plays a significant role in these incidents and we need to understand that behavior and how to modify it in a scientifically valid manner because the attackers are very good scientists. They build models of how the world works and they incorporate feedback as to whether their techniques are working. We need to have the same sort of savvy.

Myth #4: Software security that works for IT will work for IoT.

On one level, this is not a myth. For instance, the IEEE Cybersecurity Initiative recently published a paper, "Avoiding the Top Ten Software Security Design Flaws," and it's certainly applicable to IoT, though by its stated scope it's not comprehensive. The paper is useful for IT and/or IoT software design in that its chapters discuss fundamental concepts such as "earn or give, but never assume trust," "use an authentication mechanism that cannot be bypassed or tampered with," "authorize after you authenticate," etc.
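To make one of those principles concrete, here is a minimal sketch of "authorize after you authenticate" for a hypothetical IoT endpoint; the device names, token store and permission table are my own illustrative placeholders, not anything taken from the IEEE paper.

```python
# A minimal sketch of "authenticate, then authorize" for a hypothetical IoT endpoint.
import hmac

DEVICE_TOKENS = {"thermostat-42": "s3cr3t-token"}          # hypothetical credential store
ALLOWED_ACTIONS = {"thermostat-42": {"read_temperature"}}  # hypothetical per-device permissions

def authenticate(device_id: str, token: str) -> bool:
    """Verify the caller's identity first; never assume trust."""
    expected = DEVICE_TOKENS.get(device_id, "")
    return hmac.compare_digest(expected, token)            # constant-time comparison

def authorize(device_id: str, action: str) -> bool:
    """Check permissions only for an already-authenticated caller."""
    return action in ALLOWED_ACTIONS.get(device_id, set())

def handle_request(device_id: str, token: str, action: str) -> str:
    if not authenticate(device_id, token):
        raise PermissionError("authentication failed")
    if not authorize(device_id, action):
        raise PermissionError("action not permitted")
    return f"{action} executed for {device_id}"

print(handle_request("thermostat-42", "s3cr3t-token", "read_temperature"))
```

The ordering matters: checking permissions for a caller who was never authenticated is exactly the kind of bypassable mechanism the paper warns against.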

On another level, however, I think one of the challenges for IoT -- to cite just one example -- is that some traditional, desktop security strategies probably aren't going to work well. What does it mean to patch software in IoT? Certainly, in the industrial control system domain, technology is fielded for decades; that gear doesn't get a software patch every month. So practices that are becoming efficient for desktop computing and for traditional IT infrastructure may not be relevant to IoT.

Perhaps the biggest challenge I see with IoT is scale. We're going to deal with an IT infrastructure -- a networked infrastructure -- that connects countless entities, devices and systems. We've never fathomed that before. What are the dynamics driven by scale? We've certainly seen the dynamics of scale evolve in the Internet, where value creation and threats morph over time as the Internet itself undergoes orders of magnitude of change.

With, say, 1,000 people on the Internet, we have one set of dynamics. With a million, it's a different set. And when it's a billion or a trillion? We'll be stepping into a world we haven't experienced before, that we haven't engineered for. The Internet and computing technologies are really the only places where, decade over decade, we continue to see an order of magnitude change. What other domain becomes 10 times more efficient or 10 times more capable than it was the previous decade? IoT appears to be such an animal.

Myth #5: IoT cybersecurity is a challenge the private sector can meet alone.

The private sector will have to make its own decisions about security and privacy. Yet I'd expect the private sector to help facilitate an information exchange that contributes to the public good. Individual companies may not be motivated to care about the public good without guidance from public policy. We've done this in the United States, for instance, by creating the Federal Aviation Administration, the National Transportation Safety Board and other organizations that analyze events and promulgate rules to protect the public.

That said, I'm not in favor of security by decree. We need a more flexible model that allows secure information sharing for scientific security-event analysis, along with access to the validated guidelines that emerge for avoiding future incidents. It's really about creating and routing that feedback signal of what's working and what's not, so that researchers, enterprises and users of IoT can make informed decisions.

Policymakers need to be well-informed about the issues and willing to devote a measure of our collective resources to meet this challenge. But top-down, unvalidated rules in this environment haven't proven effective. Policymakers need to respond to public concerns about Internet and IoT security and privacy. How do we improve that conversation? These security and privacy concerns are affecting people today on a personal level, a business level and a national-security level. There aren't many subjects that run that whole gamut. We're entering new territory.

I'd like to end on an optimistic yet speculative note. I'm sure there was a time 100 years ago when parents assumed that serious childhood illnesses were common and normal, and that there wasn't much you could do about them. Children would often die of polio or smallpox. Well, we beat those afflictions. I think there's evidence in other areas that circumstances can change pretty dramatically. Such changes take time and focused effort. So I'll speculate that that will be the case with IoT security and privacy. We'll figure out how to cope with these challenges.

Greg Shannon, PhD, is chair of the IEEE Cybersecurity Initiative and chief scientist of the CERT Division at the Carnegie Mellon University Software Engineering Institute.

(www.csoonline.com)
