4 big plans to fix internet security

13.05.2016
The Internet is all-encompassing. Between mobile devices and work computers, we live our lives on it -- but our online existence has been tragically compromised by inadequate security. Any determined hacker can eavesdrop on what we say, impersonate us, and perform all manner of malicious activities.

Clearly, Internet security needs to be rethought. Retrofitting security and privacy controls onto a global communications platform is not easy, but few would argue that it's anything less than absolutely necessary.

Why should that be? Was the Internet built badly? No, but it was designed for a utopian world where you can trust people. When the fledgling Internet was populated by academics and researchers communicating with trusted parties, it didn't matter that trust relationships weren't well-implemented or that communications weren't secure by default. Today it matters very much, to the point where data breaches, identity theft, and other compromises have reached crisis levels.

To meet the challenge of an Internet teeming with cyber criminals, we've applied a pastiche of half-measures. It's not working. What we really need are fresh, effective trust and security mechanisms.

Here are several promising proposals that could make a difference in Internet security. None is a holistic solution, but each could make the Internet a safer place if it garners enough support.

The Internet Society, an international nonprofit organization focusing on Internet standards, education, and policy, launched an initiative called MANRS, or Mutually Agreed Norms for Routing Security.

Under MANRS, member network operators -- primarily Internet service providers -- commit to implementing security controls to ensure incorrect router information doesn’t propagate through their networks. The recommendations, based on existing industry best practices, include defining a clear routing policy, enabling source address validation, and deploying antispoofing filters. A "Best Current Operational Practices" document is in the works.
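Source address validation, one of the MANRS recommendations, means an ISP's edge equipment drops packets whose source addresses don't belong to the customer network they arrive from. A minimal sketch of that check, using hypothetical prefixes from the IETF documentation ranges rather than any real deployment:

```python
import ipaddress

# Hypothetical prefix table for one customer port on an ISP edge router.
# The prefixes are illustrative (IETF documentation ranges), not real.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def source_address_valid(src_ip: str) -> bool:
    """Return True if a packet's source address falls inside a prefix
    actually delegated to this customer port (ingress filtering);
    packets claiming any other source would be dropped as spoofed."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(source_address_valid("203.0.113.55"))  # legitimate customer source
print(source_address_valid("8.8.8.8"))       # spoofed source, dropped
```

In practice this logic lives in router ACLs or unicast reverse-path forwarding checks rather than application code, but the principle is the same: a packet may only enter the network with a source address the network can vouch for.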

“Every ISP that signs up [for MANRS] reduces the danger in their corner of the Internet,” says Geoff Webb, a senior director of security strategy at Micro Focus.

It’s Networking 101: The data packets have to reach their intended destination, but it also matters what path the packets take. If someone in Canada is trying to access Facebook, his or her traffic shouldn’t have to pass through China before reaching Facebook’s servers. Recently, traffic to IP addresses belonging to the U.S. Marine Corps was temporarily diverted through an ISP in Venezuela. If website traffic isn’t secured with HTTPS, these detours wind up exposing details of user activity to anyone along the unexpected path.

Attackers also hide their originating IP addresses with simple routing tricks. The widely implemented User Datagram Protocol (UDP) is particularly vulnerable to source address spoofing, letting attackers send data packets that appear to originate from another IP address. Distributed denial-of-service attacks and other malicious attacks are hard to trace because attackers send requests with spoofed addresses, and the responses go to the spoofed address, not the actual originating address.

When the attacks target UDP-based services such as DNS, multicast DNS, the Network Time Protocol, the Simple Service Discovery Protocol, or the Simple Network Management Protocol, the effects are amplified.
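The amplification arithmetic is simple: a small spoofed query elicits a much larger response, which the server sends to the victim. The bandwidth amplification factors below are illustrative ballpark figures commonly cited for these protocols, not measured values:

```python
# Approximate bandwidth amplification factors for common UDP
# reflection vectors (illustrative figures, not measurements).
AMPLIFICATION = {
    "DNS (open resolver)": 28,
    "NTP (monlist)": 556,
    "SSDP": 30,
    "SNMP": 6,
}

def reflected_mbps(request_mbps: float, protocol: str) -> float:
    """Estimate the traffic a victim receives when an attacker sends
    `request_mbps` of queries with the victim's spoofed source address."""
    return request_mbps * AMPLIFICATION[protocol]

# A modest 10 Mbps of spoofed NTP monlist queries becomes
# roughly 5,560 Mbps of response traffic aimed at the victim.
print(reflected_mbps(10, "NTP (monlist)"))
```

This is why source address validation at the ISP edge matters: without a spoofed source address, the responses simply return to the attacker.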

Many ISPs are not aware of the different attacks that take advantage of common routing problems. While some routing issues can be chalked up to human error, others are direct attacks, and ISPs need to learn how to recognize potential issues and take steps to fix them. “ISPs have to be more responsible about how they are routing traffic,” Webb says. “A lot of them are susceptible to attack.”

ISOC had nine network operators participating in the voluntary program when it launched in 2014; now there are more than 40. For MANRS to make a difference, it needs to expand so that it can influence the market. ISPs that decide not to bother with the security recommendations may find they lose deals because customers will sign with MANRS-compliant providers. Or smaller ISPs may face pressure from larger upstream providers who refuse to carry their traffic unless they can show they’ve implemented appropriate security measures.

It would be great if MANRS became a de facto standard for all ISPs and network providers, but scattered safe neighborhoods are still good enough. “If you require everyone to do it, it is never going to happen,” Webb says.

There have been many attempts to address the issues with SSL, which protects the majority of online communications. SSL helps identify if a website is the site it claims to be, but if someone tricks a certificate authority (CA) into fraudulently issuing digital certificates for a site, then the trust system breaks down.

Back in 2011, an Iranian attacker breached Dutch CA DigiNotar and issued certificates, including ones for Google, Microsoft, and Facebook. The attacker was able to set up man-in-the-middle attacks with those certificates and intercept traffic for the sites. This attack succeeded because the browsers treated the certificate from DigiNotar as valid despite the fact that the sites had certificates signed by a different CA.

Google’s Certificate Transparency project, an open and public framework for monitoring and auditing SSL certificates, is the latest attempt to solve the man-in-the-middle problem.

When a CA issues a certificate, it is recorded in a public certificate log, and anyone can query the log for cryptographic proof that a particular certificate was logged. Monitors running on servers periodically examine the logs for suspicious entries, including certificates incorrectly issued for a domain and certificates with unusual extensions.

Monitors are similar to credit reporting services, in that they send alerts regarding malicious certificate usage. Auditors make sure the logs are working correctly and verify a particular certificate appears in the log. A certificate not found in the log is a clear signal to browsers that the site is problematic.

With Certificate Transparency, Google hopes to tackle wrongly issued certificates, maliciously acquired certificates, rogue CAs, and other threats. Google certainly has technology on its side, but it has to convince users that this is the right approach.

DNS-based Authentication of Named Entities (DANE) is another attempt to solve the man-in-the-middle problem with SSL. The DANE protocol reinforces the point that a sound technology solution doesn’t automatically win users. DANE pins SSL sessions to DNSSEC, the domain name system’s security layer.

While DANE successfully blocks man-in-the-middle attacks against SSL and other protocols, it is haunted by the specter of state surveillance. DANE relies on DNSSEC, and since governments typically control the DNS for top-level domains, there is concern about trusting federal authorities to run the security layer. Adopting DANE means governments would have the kind of access certificate authorities currently wield -- and that makes users understandably uneasy.

Despite any misgivings users may have about trusting Google, the company has moved forward with Certificate Transparency. It even recently launched a parallel service, Google Submariner, which lists certificate authorities that are no longer trusted.

Almost a decade ago Harvard University’s Berkman Center for Internet & Society launched StopBadware, a joint effort with tech companies such as Google, Mozilla, and PayPal to experiment with strategies to combat malicious software.

In 2010 Harvard spun off the project as a stand-alone nonprofit. StopBadware analyzed badware -- malware and spyware alike -- to provide removal information and to educate users on how to prevent recurring infections. Users and webmasters can look up URLs, IPs, and ASNs, as well as report malicious URLs. Technology companies, independent security researchers, and academic researchers collaborated with StopBadware to share data about different threats.

The high overhead costs of running a nonprofit took a toll, and the project moved to the University of Tulsa under the auspices of Dr. Tyler Moore, the Tandy Assistant Professor of Cyber Security and Information Assurance. The project still offers independent testing and review of websites infected with malware and runs a Data Sharing Program in which companies contribute and receive real-time data on Web-based malware. Development is underway on a tool to provide more targeted advice to webmasters based upon the type of compromise they have experienced. A beta is expected by the early fall.

But even if a project successfully addresses a security problem, it still has to deal with the practical realities of how to fund its operations.

Then there’s the idea that the Internet should be replaced with a better, more secure alternative.

Doug Crockford, currently a senior JavaScript architect at PayPal and one of the driving forces behind JSON, has proposed Seif: an open source project that reinvents all aspects of the Internet. He wants to redo transport protocols, redesign the user interface, and throw away passwords. In short, Crockford wants to create a security-focused application platform to transform the Internet.

Seif proposes replacing DNS addressing with a cryptographic key plus an IP address, HTTP with secure JSON over TCP, and HTML with a JavaScript-based application delivery system built on Node.js and Qt. CSS and the DOM would also go away under Seif. JavaScript, for its part, would remain the key cog in building simpler, more secure Web applications.

Crockford also has an answer for SSL’s reliance on certificate authorities: a mutual authentication scheme based on a public key cryptographic scheme. Details are scarce, but the idea depends on searching for and trusting the organization’s public key instead of trusting a specific CA to issue the certificates correctly.

Seif would feature cryptographic services based on ECC-521 (Elliptic Curve Cryptography), AES-256 (Advanced Encryption Standard), and SHA3-256 (Secure Hash Algorithm 3). ECC-521 public keys would provide unique identifiers.
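Seif's published details are scarce, but the idea of public keys as identifiers can be sketched: hash the key with SHA3-256 to get a compact, stable name that anyone holding the key can reproduce. Everything below is hypothetical illustration -- the actual Seif encoding is not specified in public materials:

```python
import hashlib

def key_identifier(public_key: bytes) -> str:
    """Hypothetical sketch: derive a stable identifier by hashing an
    encoded ECC public key with SHA3-256, in the spirit of Seif's use
    of public keys as unique identifiers. Not Seif's actual format."""
    return hashlib.sha3_256(public_key).hexdigest()

# Placeholder bytes stand in for an encoded P-521 public key
# (0x04 prefix plus two 66-byte coordinates in uncompressed form).
key = b"\x04" + b"\x01" * 132
ident = key_identifier(key)
print(len(ident))  # 64 hex characters, deterministic for a given key
```

An identifier derived this way names the keyholder directly, rather than relying on a certificate authority to vouch for a domain name, which is the trust shift Crockford's mutual-authentication scheme points toward.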

Seif would be implemented in browsers via a Helper application, akin to fitting older televisions with set-top boxes so that viewers can receive high-definition signals. Once the browser vendors integrate Seif, the Helper app won’t be necessary.

There are a lot of intriguing elements to Seif, but it is still in its early stages. The Node implementation, which would run the Seif session protocol, is currently in development. Even without knowing a lot of the details, it’s clear a proposal this ambitious requires the backing of heavy lifters before it can be presented to users.

For example, a major browser maker -- say, Mozilla -- would need to integrate the helper app, and a major website would have to require that all customers use the browser. Other sites and browsers would follow due to competitive pressures, but the question remains whether anyone with that kind of clout would climb aboard the Seif train.

Trashing everything and starting all over again is not going to happen, so the only option is to make the current Internet harder to attack, Webb says. Instead of trying to fix everything at once, there should be smaller fixes to make it harder to misuse specific portions.

“When your house is on fire and you are waiting for the fire truck to come put water on the house, you save what you can, not walk off to look for a new house,” Webb says.

No one controls the whole Internet, and more important, there's a massive amount of built-in redundancy and resiliency. Fixing it is not a task for only one entity, but a multistakeholder approach involving individuals, corporations, and governments. The ISPs should take charge of fixing the underlying routing issues, but they aren’t the only ones responsible. There are issues with DNS, with how services deploy encryption, and with hardware devices used to connect to services, to name a few.

Governments have been trying, especially with recent attempts to pass security and privacy laws. Most of those bills have died quietly in review because they are too complex or aren’t a high enough priority. But the lack of legislation doesn’t mean governments should stay out of the effort.

“You have to fix all of it, but no single person can fix it,” Webb says. “I will do my best, if you do your best.”

The road to a secure Internet is paved with lots of great ideas that have flopped right out of the gate or petered out due to lack of interest. Grand plans always sound promising, but they won’t go far if they don’t take into account technical limitations, practical realities regarding deployment, and costs of adoption. The hard part is drumming up support, developing momentum, and eliciting sustained commitments.

“If someone does fix the Internet, my great-great-great-grandchildren will thank them for it,” Webb says.

(www.infoworld.com)

Fahmida Y. Rashid