Judge applies common sense to question of what constitutes a data breach

Dec. 1, 2015
Enterprise security is a frustrating game, because IT winning 99.9% of the time isn’t enough. One lucky cyberthief or one careless employee — something completely beyond your control — can cause a data breach, a failure that will stay on your résumé forever. But a small dose of sanity emerged on Nov. 13 when a federal judge ruled that a data breach needs to have actual victims, not merely hypothetical ones.

The ruling, by D. Michael Chappell, the chief administrative law judge for the U.S. Federal Trade Commission (FTC), threw out an FTC complaint against a cancer research lab called LabMD. The matter involved a LabMD employee who violated company policy and downloaded P2P software, inadvertently exposing sensitive patient information on a file-sharing network. The breach, however, was detected and shut down before anyone outside the company ever saw or accessed the sensitive data.

This case gets as close as any to the famed philosophy question, “If a tree falls in the forest and no one is around to hear it, does it make a sound?” Is it really a data breach if no unauthorized person ever sees or accesses the protected data?

It’s not that easy a question. Let’s say you’re in charge of building operations/facilities management, and one of your security guards is supposed to make sure that every door in the building is locked by a certain time at night. You check one night and find the door to your CEO’s office unlocked. You then establish that the security guard simply forgot. Should the guard be disciplined, perhaps even fired? Does it make a difference if no one actually entered the CEO’s office during the breach? For many people, the fact that someone could have strolled into the CEO’s office quite easily is reason enough to come down hard on the security guard.

“The burden was on Complaint Counsel to prove, initially, that Respondent’s alleged failure to employ ‘reasonable and appropriate’ data security ‘caused, or is likely to cause, substantial injury to consumers,’ as alleged in the Complaint,” the judge wrote in his decision. “The evidence presented in this case fails to prove these allegations. There is no evidence that any consumer has suffered any substantial injury as a result of Respondent’s alleged conduct, and both the quality and quantity of Complaint Counsel’s evidence submitted to prove that such injury is, nevertheless, ‘likely’ is unpersuasive.”

The judge also knocked down an FTC argument that the employee’s P2P mishap meant that a future data breach was likely. “The theory that there is a likelihood of substantial injury for all consumers whose information is maintained on Respondent’s computer networks because there is a ‘risk’ of a future data breach is without merit because the evidence presented fails to demonstrate a likelihood that Respondent’s computer network will be breached in the future and cause substantial consumer injury. While there may be proof of possible consumer harm, the evidence fails to demonstrate probable, i.e., likely, substantial consumer injury.”

This ruling seems like common sense, something that is sadly rare, and it offers an important takeaway: if you detect a breach and close the hole before the data is actually seen by anyone, you should avoid FTC penalties. And given the respect federal judges tend to give the opinions of other federal judges, the decision could have consequences far beyond FTC rulings.

It won’t help with civil lawsuits, though, where anything that a party can allege is fair game so long as a judge doesn’t throw it out. But it will help with administrative headaches.

We need to split security holes into three categories, each with its own rules and implications. First are holes deliberately opened by a cyberthief, to be used now or at some point in the future. Second are holes unintentionally opened by authorized employees or contractors, as in the LabMD case. Third are holes that exist because of an intentional but non-malicious vendor effort (such as a back door created for maintenance, or a default password left unchanged by a sloppy IT administrator).

Under scenario one (hole opened by a bad guy), it’s almost impossible to make a strong argument that data was never at risk. Cyberthieves are good at hiding their tracks and planting misleading clues in activity logs. If you believe a cyberthief has been in your system, you need to assume bad things happened. You can’t successfully argue hypothetical damage in those cases.

Under scenario two (employee accident), a quick response can save the day, as LabMD just discovered.

Under scenario three (vendor’s non-malicious hole), things get complicated. If the FTC or some similar agency wants to argue hypothetical damage, duration and knowledge become key factors. How long did the back door exist on your system? And how long was it there after your team learned of it?

This gets complicated because you could still be in trouble even if those answers are all in your favor. Let’s say that your vendor never told you about the back door and your people didn’t learn of it until bad guys broke in and did serious damage. It feels like you’re blameless, but your people did retain that vendor and install that company’s software. You can — and certainly should — sue that vendor, but that won’t stop your team from getting blamed and sued, too.

But at least Judge Chappell has left you in a better position than you were in before.

(www.computerworld.com)

Evan Schuman
