Spotting vulnerabilities takes many eyes

02.07.2015
Vulnerabilities can take many forms, and you can't expect to uncover them all unless you have a diverse portfolio of tools to help you in the hunt.

At my company, our vulnerability management program includes regular monitoring of security forums and vendor mailing lists for announced vulnerabilities, as well as an assortment of penetration testing and vulnerability assessments. We use a third-party service to run weekly scans and assessments of our outward-facing infrastructure to discover any new application or infrastructure vulnerabilities, such as SQL injection, cross-site scripting or ports that were opened after an application modification. I use several third-party pen-testing firms because I figure that relying on just one could cause us to miss some of the techniques and mind-sets of attackers. I'd like to implement a bug bounty program, with rewards paid out upon the discovery of vulnerabilities within our application, but the executive staff hasn't signed off on that idea.

All in all, I think it's a pretty comprehensive vulnerability management program, but there are still vulnerabilities that scanning tools and manual testing just can't discover. This week, two vulnerabilities of that description cropped up. How they came to light is instructive.

The first vulnerability was brought to our attention by a former customer. It seems that over the past few months, he had been receiving email notifications from my company about payments to a bank account that he doesn't own. Fortunately, he decided to let us know about this. After some investigation, I figured out what had happened. You see, my company's application is quite complex and can require considerable configuration for many customers. To streamline implementations, we have several models that contain various configurations, and we choose models depending on the particular customer's requirements. These models, which we call "master assets," might include sales demos, training classes or guidelines for debugging technical issues.

Here's what happened: About six months ago, one of our sales engineers was asked by a prospective customer to demonstrate an automatic payment feature in which an email alert is sent to the customer after a payment is made. The sales engineer input the customer's email address for the purposes of the demonstration. Nothing wrong there. But then he saved his demo as a master asset and neglected to remove the user's contact info. This master asset was then used for other demos, training and even real customer implementations. Normally the users in a master asset are not real people with real email addresses, but for this master asset, there was one user who was very real, and now he was receiving email notifications. Luckily, the payment information was fabricated and no real money was involved. To the user, though, it all looked very real.

After some research, we identified where this particular master asset had been used and created a database script to remove the real user's information from every instance where it had been implemented. The next step will be to create a new policy and process governing the creation and use of master assets so that this sort of thing doesn't happen again. We'll also audit the other 20 or so master assets to ensure that none of them contains any real data.
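As an illustration only (the schema, table and column names here are invented, not my company's), a cleanup script of this kind boils down to finding every copy of the leaked contact details across the deployed instances and replacing them with placeholder values:

```python
import sqlite3

# Hypothetical schema: each implementation copied its users from the master
# asset, so the real address may appear in many instances.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (instance TEXT, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [
        ("demo-1",     "Real Customer", "real.person@example.com"),
        ("training-2", "Real Customer", "real.person@example.com"),
        ("demo-1",     "Test User",     "test@example.invalid"),
    ],
)

# Scrub the leaked address in every instance at once.
conn.execute(
    "UPDATE users SET name = 'Sample User', email = 'user@example.invalid' "
    "WHERE email = ?",
    ("real.person@example.com",),
)
conn.commit()

# Verify nothing was missed before closing out the incident.
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE email = ?",
    ("real.person@example.com",),
).fetchone()[0]
```

The verification query at the end matters as much as the update: it is what lets you state with confidence that the real user's data is gone from all instances.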

The second vulnerability was discovered by a security engineer at one of our customers who was interested in using our application programming interface (API) to automate some reporting. His company's policy is to assess any API used for business purposes, and his assessment led to the discovery that by changing a value in a standard request, he could obtain data from another account within the application. A standard application assessment would not have discovered this vulnerability, since a similar request is not available through the normal user interface.
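This class of flaw is commonly called an insecure direct object reference: the API trusts a client-supplied account identifier instead of deriving it from the authenticated session. A minimal sketch (all names and data are hypothetical, not our actual API) of both the bug and the fix:

```python
# Hypothetical backing store: records keyed by (account, record id).
RECORDS = {
    ("acct-1", "r1"): {"owner": "acct-1", "amount": 100},
    ("acct-2", "r2"): {"owner": "acct-2", "amount": 250},
}

def get_record_vulnerable(account_id, record_id):
    """Looks up by record id alone, trusting whatever the client sent.
    Any caller who guesses or increments an id can read another account."""
    for (_acct, rid), rec in RECORDS.items():
        if rid == record_id:
            return rec
    return None

def get_record_fixed(session_account, record_id):
    """Derives the account from the authenticated session and scopes the
    lookup to it, so other accounts' records are simply not reachable."""
    return RECORDS.get((session_account, record_id))

# A caller authenticated as acct-1 requests a record belonging to acct-2:
leaked = get_record_vulnerable("acct-1", "r2")  # acct-2's data comes back
denied = get_record_fixed("acct-1", "r2")       # None: out of scope
```

The point of the fix is that authorization is enforced in the lookup itself rather than left to the client, which is exactly the kind of check a user-interface assessment will never exercise.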

This was quite embarrassing, and it led us to the realization that we hadn't spent enough time assessing the security of our API. In response, I modified our vulnerability management program to include regular internal and third-party assessments of our API.

As you can see, vulnerability management needs to include any area of an organization that has the potential to be compromised.

(www.computerworld.com)

By Mathias Thurman
