4 security metrics that matter

26.08.2015
As security gains greater visibility in boardrooms and C-suites, security professionals are increasingly asked to provide metrics to track the current state of a company's defenses. But which numbers really matter?

More often than not, senior management doesn't know what kinds of questions it should be asking -- and may concentrate too much on prevention and too little on mitigation. Metrics like the mean cost to respond to an incident or the number of attacks stopped by the firewall seem reasonable to a nonsecurity person, but they don't really advance an organization's security program.

Instead, experts recommend focusing on metrics that influence behavior or change strategy.

"What would you do differently now that you have this metric" asks Caroline Wong, security initiative director at Cigital, a security software and consulting firm. Metrics like mean cost to mitigate vulnerabilities and mean time to patch are helpful if the organization has mature and highly optimized processes, but that doesn't apply to 95 percent of organizations today, she said.

Metrics that measure participation, effectiveness, and window of exposure, however, offer information the organization can use to make plans and improve programs.

Participation metrics look at coverage within the organization. They may measure how many business units regularly conduct penetration testing or how many endpoints are currently being updated by automated patching systems. According to Wong, this basic information helps organizations assess security control adoption levels and identify potential gaps.

For example, while it would be nice to be able to say an organization has 100 percent of its systems patched within a month of new updates being available, that isn't a realistic goal because patching may introduce operational risk to some systems. Looking at participation helps exclude systems that don't fall under the normal patching rules -- and focuses attention on those that should be patched.
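As a rough sketch of how a participation metric might be computed, the Python below measures automated-patching coverage against only the systems that should be patched, excluding those formally exempt from the normal rules. The inventory and field names are invented for illustration.

```python
# Hypothetical asset inventory: each entry records whether the system
# is covered by automated patching and whether it is formally exempt
# from the normal patching rules (e.g., a fragile legacy system).
assets = [
    {"name": "web-01", "auto_patched": True,  "exempt": False},
    {"name": "db-01",  "auto_patched": False, "exempt": True},   # change-frozen
    {"name": "app-01", "auto_patched": False, "exempt": False},  # a gap to close
    {"name": "app-02", "auto_patched": True,  "exempt": False},
]

# Measure participation only against in-scope systems.
in_scope = [a for a in assets if not a["exempt"]]
covered = [a for a in in_scope if a["auto_patched"]]
gaps = [a["name"] for a in in_scope if not a["auto_patched"]]

participation = 100 * len(covered) / len(in_scope)
print(f"Patching participation: {participation:.0f}% of in-scope systems")
print(f"Gaps to investigate: {gaps}")
```

The payoff is the gap list, not the percentage: it names the systems that should be patched but aren't, which is something a team can act on.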

Dwell time, or how long an attacker is in the network, also delivers valuable insight. Attack duration information helps security pros prepare for, contain, and control threats, as well as minimize damage.

Surveys have shown attackers spend several months on average inside a company's network before being discovered. They spend the time learning the infrastructure, performing reconnaissance activities, moving around the network, and stealing information.

The goal should be to reduce dwell time as much as possible, so the attacker has less opportunity to move laterally and exfiltrate critical data, said Joshua Douglas, CTO of Raytheon/Websense. Knowing dwell time helps security teams figure out how to handle vulnerability mitigation and incident response.

"The longer attackers are in your network, the more information they can obtain, and the more damage they can inflict," Douglas said.

Defect density, or the number of issues found per thousand (or million, depending on the size of the codebase) lines of code, helps organizations assess the security practices of their development teams.

Context is key, however. If an application is at an early stage of development, a high defect density can be a good sign: it means issues are being found. On the other hand, if an application is in maintenance mode, the defect density should be lower -- and trending downward -- to show the application is getting more secure over time. If not, there's a problem.
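A toy calculation of defect density, using the per-KLOC normalization described above (the issue counts and codebase sizes are invented):

```python
# Hypothetical applications: security issues found and size in
# thousands of lines of code (KLOC).
apps = [
    {"name": "checkout", "phase": "development", "issues": 42, "kloc": 120},
    {"name": "billing",  "phase": "maintenance", "issues": 3,  "kloc": 220},
]

for app in apps:
    # Defect density: issues per thousand lines of code. For very
    # large codebases, normalizing per million lines may read better.
    density = app["issues"] / app["kloc"]
    print(f"{app['name']} ({app['phase']}): {density:.2f} issues per KLOC")
```

Comparing the same application's density across successive releases tells you more than comparing two different applications, since testing depth varies between teams.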

An organization may identify defects in the application, but until they've been addressed, the application remains vulnerable. The window of exposure looks at how many days in a year an application remains vulnerable to known serious exploits and issues. The "goal is to have zero days in a year during which serious defects found are known and have not yet been addressed," Wong said.
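One way to operationalize the window of exposure -- sketched here with invented dates, not Cigital's actual method -- is to merge the open intervals of an application's serious defects and count how many days of the year they cover:

```python
from datetime import date, timedelta

# Hypothetical serious defects: (date found, date fixed); None means
# the defect is still unaddressed.
defects = [
    (date(2015, 2, 1),  date(2015, 2, 20)),
    (date(2015, 2, 15), date(2015, 4, 1)),   # overlaps the first
    (date(2015, 9, 5),  None),               # still open
]

year_start, year_end = date(2015, 1, 1), date(2015, 12, 31)

# Clip each defect's open interval to the year; unfixed defects
# count as open through year end.
intervals = sorted(
    (max(found, year_start), min(fixed or year_end, year_end))
    for found, fixed in defects
)

# Merge overlapping intervals so concurrent defects aren't double-counted.
merged = []
for start, end in intervals:
    if merged and start <= merged[-1][1] + timedelta(days=1):
        merged[-1] = (merged[-1][0], max(merged[-1][1], end))
    else:
        merged.append((start, end))

exposure = sum((end - start).days + 1 for start, end in merged)
print(f"Window of exposure: {exposure} days in {year_start.year}")
```

Merging the intervals matters: two overlapping defects leave the application exposed for one stretch of days, not two.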

Management in general likes to focus on security incident prevention, in part due to the legacy notion that organizations can stop all attacks at the perimeter. For example, it might make everyone feel good to see the number of intrusion attempts that were blocked, but there's nothing actionable about that information -- it won't help security teams figure out which attacks were not blocked. "You're not fixing anything," says Douglas.

Mean response time, or how quickly an issue is found and mitigated, is another metric that may be less than helpful. Response time ignores the fact that attackers tend to move laterally through the network. You may fix one issue, but if no one tries to determine what else the attacker may have done, a different system compromised by that same attacker may go unnoticed. Focusing on individual issues alone, and not on security as a whole, leaves environments vulnerable.

"It's not one and done, it's one and understand," Douglas said.

Another commonly tracked metric is reduction in vulnerabilities, but it isn't very useful on its own. If a lot of low-severity vulnerabilities have been fixed while critical issues remain open, the organization's risk stays essentially the same. Some vulnerabilities mean more than others.
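To make that concrete, here is a hedged sketch that weights open vulnerabilities by severity instead of counting them. The counts and weights below are illustrative, not an official CVSS mapping.

```python
# Illustrative severity weights (not an official CVSS mapping).
weights = {"critical": 10, "high": 5, "medium": 2, "low": 1}

# Hypothetical open-vulnerability counts before and after a
# remediation push that mostly cleared low-severity findings.
before = {"critical": 4, "high": 10, "medium": 30, "low": 120}
after  = {"critical": 4, "high": 9,  "medium": 28, "low": 40}

def weighted_risk(counts):
    """Sum severity-weighted counts rather than a raw total."""
    return sum(weights[sev] * n for sev, n in counts.items())

raw_fixed = sum(before.values()) - sum(after.values())
print(f"Raw count: {raw_fixed} vulnerabilities closed")
print(f"Weighted risk: {weighted_risk(before)} -> {weighted_risk(after)}")
print(f"Criticals still open: {after['critical']}")
```

The raw count drops by 83, which looks like progress, yet every critical vulnerability stays open -- exactly the distortion a plain reduction metric hides.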

Only 28 percent of executives in a recent Raytheon/Websense survey felt the security metrics used in their organizations were "completely effective," compared with the 65 percent who felt they were "somewhat effective." Security practitioners need to explain to senior management how to focus on security questions that help accomplish well-defined goals. Otherwise, too much attention is wasted on information that doesn't actually reduce risk or improve security.

"Is that really the best place for you to be spending your limited time and money" asks Wong.

(www.infoworld.com)

Fahmida Y. Rashid