Effective IT security habits of highly secure companies

May 31, 2016
When you get paid to assess computer security practices, you get a lot of visibility into what does and doesn’t work across the corporate spectrum. I’ve been fortunate enough to do exactly that as a security consultant for more than 20 years, analyzing anywhere between 20 to 50 companies of varying sizes each year. If there’s a single conclusion I can draw from that experience, it’s that successful security strategies are not about tools -- they’re about teams.

With very good people in the right places, supportive management, and well-executed protective processes, you have the makings of a very secure company, regardless of the tools you use. Companies that have an understanding of the importance and value of computer security as a crucial part of the business, not merely as a necessary evil, are those least likely to suffer catastrophic breaches. Every company thinks they have this culture; few do.

The following is a collection of common practices and strategies of the most highly secure companies I have had the opportunity to work with over the years. Consider it the secret sauce of keeping your company's crown jewels secure.

The average company faces a truly unprecedented challenge from a myriad of threats. We are threatened by malware, human adversaries, corporate hackers, hacktivists, governments (foreign and domestic), even trusted insiders. We can be hacked over copper wire, using energy waves, radio waves, even light.

Because of this, there are literally thousands of things we are told we need to do well to be “truly secure.” We are asked to install hundreds of patches each year to operating systems, applications, hardware, firmware, computers, tablets, mobile devices, and phones -- yet we can still be hacked and have our most valuable data locked up and held for ransom.

Great companies realize that most security threats are noise that doesn’t matter. They understand that at any given time a few basic threats make up most of their risk, so they focus on those threats. Take the time to identify your company’s top threats, rank those threats, and concentrate the bulk of your efforts on the threats at the top of the list. It’s that simple.

Most companies don’t do this. Instead, they juggle dozens to hundreds of security projects continuously, with most languishing unfinished or fulfilled only against the most minor of threats.

Think about it. Have you ever been hacked using a vector that involved SNMP or an unpatched server management interface card? Have you even read of such an attack in the real world? Then why are you asking me to include them as top priorities in my audit reports (as I was by a customer)? Meanwhile, your environment is compromised on a near daily basis via other, much more common exploits.

To successfully mitigate risk, ascertain which risks need your focus now and which can be left for later.
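As a rough illustration, the ranking exercise can be as simple as scoring each threat by likelihood and impact and sorting the list. The threats and scores below are hypothetical placeholders, not data from any audit:

```python
# Hypothetical threat register: likelihood and impact on a 1-5 scale.
threats = [
    {"name": "phishing",           "likelihood": 5, "impact": 5},
    {"name": "unpatched software", "likelihood": 4, "impact": 5},
    {"name": "insider data theft", "likelihood": 2, "impact": 5},
    {"name": "SNMP exploit",       "likelihood": 1, "impact": 2},
]

def rank_threats(register):
    """Score each threat as likelihood * impact and sort highest first."""
    return sorted(register,
                  key=lambda t: t["likelihood"] * t["impact"],
                  reverse=True)
```

The bulk of your budget and effort then goes to whatever floats to the top of that list, and the long tail gets deliberately deferred.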

Sometimes the least sexy stuff helps you win. In computer security, this means establishing an accurate inventory of your organization’s systems, software, data, and devices. Most companies have little clue as to what is really running in their environments. How can you even begin to secure what you don’t know?

Ask yourself how well your team understands all the programs and processes that are running when company PCs first start up. In a world where every additional program presents another attack surface for hackers, is all that stuff needed? How many copies of which programs do you have in your environment, and what versions are they? How many mission-critical programs form the backbone of your company, and what dependencies do they have?

The best companies have strict control over what runs where. You cannot begin that process without an extensive, accurate map of your current IT inventory.
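A minimal sketch of such a map, assuming inventory records of (host, package, version) collected by whatever agent or scanning tool you already run; the hosts and packages here are made up:

```python
from collections import defaultdict

# Illustrative inventory records; in practice these come from an agent
# or scanner, not a hard-coded list.
records = [
    ("web01", "nginx",    "1.24.0"),
    ("web02", "nginx",    "1.18.0"),
    ("db01",  "postgres", "15.3"),
    ("web01", "openssl",  "3.0.8"),
]

def build_inventory(recs):
    """Map each software package to the versions and hosts running it."""
    inv = defaultdict(lambda: {"versions": set(), "hosts": set()})
    for host, pkg, ver in recs:
        inv[pkg]["versions"].add(ver)
        inv[pkg]["hosts"].add(host)
    return dict(inv)
```

Even this toy structure immediately answers the questions above: how many copies, which versions, and where.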

An unneeded program is an unneeded risk. The most secure companies pore over their IT inventory, removing what they don’t need, then reduce the risk of what’s left.

I recently consulted for a company that had more than 80,000 unpatched Java installations, spread over five versions. The staff never knew it had so much Java. Domain controllers, servers, workstations -- it was everywhere. As far as anyone knew, exactly one mission-critical program required Java, and that ran on only a few dozen application servers.

They queried personnel and immediately reduced their Java footprint to a few hundred computers and three versions, fully patching them across most machines. The few dozen that could not be patched became the real work. They contacted vendors to find out why Java versions could not be updated, changed vendors in a few cases, and implemented offsetting risk mitigations where unpatched Java had to remain.

Imagine the difference in risk profile and overall work effort.

This applies not only to every bit of software and hardware, but to data as well. Eliminate unneeded data first, then secure the rest. Intentional deletion is the strongest data security strategy. Make every new data collector define how long their data needs to be kept. Put an expiration date on it. When the time comes, check with the owner to see whether it can be deleted. Then secure the rest.
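One lightweight way to operationalize intentional deletion is to record an owner and an explicit expiration date with every dataset and periodically sweep for anything past its date. The register below, including names, owners, and dates, is purely hypothetical:

```python
from datetime import date

# Hypothetical data register: every dataset gets an owner and an
# explicit expiration date at creation time.
datasets = [
    {"name": "2015-campaign-leads", "owner": "marketing",
     "expires": date(2016, 1, 1)},
    {"name": "payroll-current", "owner": "finance",
     "expires": date(2099, 1, 1)},
]

def expired(register, today):
    """Return datasets past their expiration date, ready for owner review."""
    return [d for d in register if d["expires"] <= today]
```

The sweep doesn’t delete anything by itself; it produces the list you take to each owner to confirm the data can finally go.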

The best security shops stay up on the latest versions of hardware and software. Yes, every big corporation has old hardware and software hanging around, but most of their inventory is composed of the latest versions or the latest previous version (called N-1 in the industry).

This goes not only for hardware and OSes, but for applications and tool sets as well. Procurement costs include not only purchase price and maintenance but future updated versions. The owners of those assets are responsible for keeping them updated.

You might think, “Why update for update’s sake?” But that’s old, insecure thinking. The latest software and hardware come with the latest security features built in, often turned on by default. The biggest threat to the last version was most likely fixed in the current version, leaving older versions that much juicier for hackers looking to make use of known exploits.

It’s advice so common as to seem cliché: Patch all critical vulnerabilities within a week of the vendor’s patch release. Yet most companies have thousands of unpatched critical vulnerabilities. Still, they’ll tell you they have patching under control.

If your company takes longer than a week to patch, it’s at increased risk of compromise -- not only because you’ve left the door open, but because your most secure competitors will have already locked theirs.
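The one-week rule is easy to turn into an automated check. This sketch assumes you track, per critical patch, the vendor release date and the date it was applied; the CVE identifiers and dates are invented for illustration:

```python
from datetime import date

PATCH_SLA_DAYS = 7  # the one-week window recommended above

# Illustrative patch records (applied=None means not yet applied).
patches = [
    {"cve": "CVE-2016-0001", "released": date(2016, 5, 3),
     "applied": date(2016, 5, 6)},
    {"cve": "CVE-2016-0002", "released": date(2016, 5, 3),
     "applied": None},
]

def overdue(patch_list, today):
    """Flag critical patches still unapplied past the SLA window."""
    return [p["cve"] for p in patch_list
            if p["applied"] is None
            and (today - p["released"]).days > PATCH_SLA_DAYS]
```

Run it daily and the overdue list becomes the work queue, rather than a vague sense that patching is "under control."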

Officially, you should test patches before applying, but testing is hard and wastes time. To be truly secure, apply your patches and apply them quickly. If you need to, wait a few days to see whether any glitches are reported. But after a short wait, apply, apply, apply.

Critics may claim that applying patches “too fast” will lead to operational issues. Yet, the most successfully secure companies tell me they don’t see a lot of issues due to patching. Many say they’ve never had a downtime event due to a patch in their institutional memory.

Education is paramount. Unfortunately, most companies view user education as a great place to cut costs, or if they educate, their training is woefully out of date, filled with scenarios that no longer apply or are focused on rare attacks.

Good user education focuses on the threats the company is currently facing or is most likely to face. Education is led by professionals, or even better, it involves co-workers themselves. One of the most effective videos I’ve seen warned of social engineering attempts by highlighting how some of the most popular and well-liked employees had been tricked. By sharing real-life stories of their fallibility, these co-workers were able to train others in the steps and techniques to prevent becoming a victim. Such a move makes fellow employees less reluctant to report their own potential mistakes.

Security staff also needs up-to-date security training. Each member, each year. Either bring the training to them or allow your staff to attend external training and conferences. This means training not only on the stuff you buy but on the most current threats and techniques as well.

The most secure organizations have consistent configurations with little deviation between computers of the same role. Most hackers are more persistent than smart. They simply probe and probe, looking for that one hole in thousands of servers that you forgot to fix.

Here, consistency is your friend. Do the same thing, the same way, every time. Make sure the installed software is the same. Don’t have 10 ways to connect to the server. If an app or a program is installed, make sure the same version and configuration is installed on every other server of the same class. You want the comparison inspections of your computers to bore the reviewer.

None of this is possible without configuration baselines and rigorous change and configuration control. Admins and users should be taught that nothing gets installed or reconfigured without prior documented approval. But beware frustrating your colleagues with full change committees that meet only once a month. That’s corporate paralysis. Find the right mix of control and flexibility, but make sure any change, once ratified, is consistent across computers. And punish those who don’t respect consistency.

Remember, we’re talking baselines, not comprehensive configurations. In fact, you’ll probably get 99 percent of the value out of a dozen or two recommendations. Figure out the settings you really need and forget the rest. But be consistent.
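Checking hosts against that short baseline is straightforward. This sketch assumes each host’s configuration can be exported as a simple key-value mapping; the setting names and values are illustrative, not a recommended hardening list:

```python
# Hypothetical baseline: the dozen-or-two settings that really matter.
baseline = {"smb_signing": "required", "rdp_enabled": "no", "tls_min": "1.2"}

def deviations(host_config, base):
    """Report settings that differ from the baseline or are missing entirely."""
    diffs = {}
    for key, expected in base.items():
        actual = host_config.get(key)  # None when the setting is absent
        if actual != expected:
            diffs[key] = {"expected": expected, "actual": actual}
    return diffs
```

On a well-run fleet this returns an empty dict for nearly every host -- the boring comparison inspection you want.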

“Least privilege” is a security maxim. Yet you’ll be hard-pressed to find companies that implement it everywhere they can.

Least privilege involves giving the bare minimum permissions to those who need them to do an essential task. Most security domains and access control lists are full of overly open permissions and very little auditing. The access control lists grow to the point of being meaningless, and no one wants to talk about it because it’s become part of the company culture.

Take Active Directory forest trusts. Most companies have them, and they can be set to either selective authentication or full authentication trust. Almost every trust I’ve audited in the past 10 years (thousands of them) has been set to full authentication. And when I recommend selective authentication for all trusts, all I hear back is whining about how hard it is to implement: “But then I have to touch each object and tell the system explicitly who can access it!” Yes, that’s the point. That’s least privilege.

Access controls, firewalls, trusts -- the most secure companies always deploy least-privilege permissions everywhere. The best have automated processes that ask the resource’s owner to reverify permissions and access on a periodic basis. The owner gets an email stating the resource’s name and who has what access, then is asked to confirm current settings. If the owner fails to respond to follow-up emails, the resource is deleted or moved elsewhere with its previous permissions and access control lists removed.
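The reverification step can be modeled as a simple sweep over access-control entries, each stamped with the date its owner last reconfirmed it. The resource names, owner addresses, and 90-day interval below are assumptions for illustration, not a prescribed policy:

```python
from datetime import date, timedelta

RECERT_INTERVAL = timedelta(days=90)  # assumed review period

# Hypothetical access-control entries with last-verified dates.
acl = [
    {"resource": "finance-share", "owner": "cfo@example.com",
     "last_verified": date(2016, 1, 10)},
    {"resource": "hr-share", "owner": "hr@example.com",
     "last_verified": date(2016, 5, 20)},
]

def due_for_recert(entries, today):
    """Return resources whose owners must reconfirm access this cycle."""
    return [e["resource"] for e in entries
            if today - e["last_verified"] > RECERT_INTERVAL]
```

Everything this returns triggers the owner email; no response after follow-ups, and the resource goes away.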

Every object in your environment -- network, VLAN, VM, computer, file, folder -- should be treated the same way: least privilege with aggressive auditing.

To do their worst, the bad guys seek control of high-privileged admin accounts. Once they have control over a root, domain, or enterprise admin account, it’s game over. Most companies are bad at keeping hackers away from these credentials. In response, highly secure companies are going “zero admin” by doing away with these accounts. After all, if your own admin team doesn’t have super accounts or doesn’t use them very often, they are far less likely to be stolen or are easier to detect and stop when they are.

Here, the art of credential hygiene is key. This means keeping as few permanent superadmin accounts as possible, with a goal of getting to zero or as near to zero as you can. Permanent superadmin accounts should be highly tracked, audited, and confined to a few predefined areas. And you should not use widely available super accounts, especially as service accounts.

But what if someone needs a super credential? Try using delegation instead. This allows you to give only enough permissions to the specific objects that person needs to access. In the real world, very few admins require complete access to all objects. That’s insanity, but it’s how most companies work. Instead, grant rights to modify one object, one attribute, or at most a small subset of objects.

This “just enough” approach should be married with “just in time” access, with elevated access limited to a single task or a set period of time. Add in location constraints (for example, domain admins can only be on domain controllers) and you have very strong control indeed.
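A minimal sketch of such a just-in-time, just-enough grant: a fixed set of objects, a fixed expiry, and nothing else. The class, user, and machine names are hypothetical, not any vendor’s API:

```python
from datetime import datetime, timedelta

class JITGrant:
    """A time-boxed elevation covering only named objects."""

    def __init__(self, user, objects, minutes, now):
        self.user = user
        self.objects = frozenset(objects)   # "just enough": explicit targets
        self.expires = now + timedelta(minutes=minutes)  # "just in time"

    def allows(self, user, obj, now):
        """Access requires the right user, a listed object, and an unexpired grant."""
        return user == self.user and obj in self.objects and now < self.expires

grant = JITGrant("alice", {"dc01"}, minutes=60,
                 now=datetime(2016, 5, 31, 9, 0))
```

Any use of the credential outside the listed objects or after expiry is, by construction, an anomaly worth alerting on.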

Note: It doesn’t always take a superadmin account to be all powerful. For example, in Windows, having a single privilege -- like Debug, Act as part of the operating system, or Backup -- is enough for a skilled attacker to be very dangerous. Treat elevated privileges like elevated accounts wherever possible.

Delegation -- just in time, just enough in just the right places -- can also help you smoke out the baddies, as they won’t likely know this policy. If you see a superaccount move around the network or use its privileges in the wrong place, your security team will be all over it.

Least privilege applies to humans and computers as well, and this means all objects in your environment should have configurations for the role they perform. In a perfect world, they would have access to a particular task only when performing it, and not otherwise.

First, you should survey the various tasks necessary in each application, gather commonly performed tasks into as few job roles as possible, then assign those roles as necessary to user accounts. This will result in every user account and person being assigned only the permissions necessary to perform their allowed tasks.
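That task-to-role-to-account mapping reduces to two small lookup tables. The roles, tasks, and users below are invented for illustration:

```python
# Illustrative RBAC tables: tasks grouped into roles, roles assigned to users.
role_tasks = {
    "helpdesk": {"reset_password", "unlock_account"},
    "dba":      {"backup_db", "restore_db"},
}
user_roles = {"alice": {"helpdesk"}, "bob": {"dba", "helpdesk"}}

def may_perform(user, task):
    """A user may perform a task only if one of their roles includes it."""
    return any(task in role_tasks.get(role, set())
               for role in user_roles.get(user, set()))
```

Note the default answer is no: an unknown user, unknown role, or unlisted task all deny access, which is least privilege in miniature.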

Role-based access control (RBAC) should be applied to each computer, with every computer with the same role being held to the same security configuration. Without specialized software it’s difficult to practice application-bound RBAC. Operating system and network RBAC-based tasks are easier to accomplish using existing OS tools, but even those can be made easier by using third-party RBAC admin tools.

In the future, all access control will be RBAC. That makes sense because RBAC is the embodiment of least privilege and zero admin. The most highly secure companies are already practicing it where they can.

Good security domain hygiene is another essential. A security domain is a (logical) separation in which one or more security credentials can access objects within the domain. Theoretically, the same security credential cannot be used to access two security domains without prior agreement or an access control change. A firewall, for example, is the simplest security domain. People on one side cannot easily get to the other side, except via protocols, ports, and so on determined by predefined rules. Most websites are security domains, as are most corporate networks, although they may, and should, contain multiple security domains.

Each security domain should have its own namespace, access control, permissions, privileges, roles, and so on, and these should work only in that namespace. Determining how many security domains you should have can be tricky. Here, the idea of least privilege should be your guide, but having every computer be its own security domain can be a management nightmare. The key is to ask yourself how much damage you can live with if access control falls, allowing an intruder to have total access over a given area. If you don’t want to fall because of some other person’s mistake, consider making your own security domain.

If communication between security domains is necessary (like forest trusts), give the least privilege access possible between domains. “Foreign” accounts should have little to no access to anything beyond the few applications, and role-based tasks within those applications, they need. Everything else in the security domain should be inaccessible.

The vast majority of hacking is actually captured on event logs that no one looks at until after the fact, if ever. The most secure companies monitor aggressively and pervasively for specific anomalies, setting up alerts and responding to them.

The last part is important. Good monitoring environments don’t generate too many alerts. In most environments, event logging, when enabled, generates hundreds of thousands to billions of events a day. Not every event is an alert, but an improperly defined environment will generate hundreds to thousands of potential alerts -- so many that they end up becoming noise everyone ignores. Some of the biggest hacks of the past few years involved alerts that were ignored. That’s the sign of a poorly designed monitoring environment.

The most secure companies create a comparison matrix of all the logging sources they have and what they alert on. They compare this matrix to their threat list, matching tasks of each threat that can be detected by current logs or configurations. Then they tweak their event logging to close as many gaps as possible.

More important, when an alert is generated, they respond. When I am told a team monitors a particular threat (such as password guessing), I try to set off an alert at a later date to see if the alert is generated and anyone responds. Most of the time they don’t. Secure companies have people jumping out of their seats when they get an alert, inquiring to others about what is going on.

Every object and application should have an owner (or group of owners) who controls its use and is accountable for its existence.

Most objects at your typical company have no owners, and IT can’t point to the person who originally asked for the resource, let alone know if it is still needed. In fact, at most companies, the number of groups that have been created is greater than the number of active user accounts. In other words, IT could assign each individual his or her own personal, custom group and the company would have fewer groups to manage than they currently have.

But then, no one knows whether any given group can be removed. They live in fear of deleting any group. After all, what if that group is needed for a critical action and deleting it inadvertently brings down a mission-dependent feature?

Another common example is when, after a successful breach, a company needs to reset all the passwords in the environment. However, you can’t do this willy-nilly because some are service accounts attached to applications and require the password to be changed both inside the application and for the service account, if it can be changed at all.

But then no one knows if any given application is in use, if it requires a service account, or if the password can be changed because ownership and accountability weren’t established at the outset, and there’s no one to ask. In the end, this means the application is left alone because you’re far more likely to get fired for causing a critical operational interruption than for letting a hacker stick around.

Most companies are stunted by analysis paralysis. A lack of consistency, accountability, and ownership renders everyone afraid to make a change. And the ability to move quickly is essential when it comes to IT security.

The most secure companies establish a strong balance between control and the ability to make quick decisions, which they promote as part of the culture. I’ve even seen specialized, hand-selected project managers put on long-running projects simply to polish off the project. These special PMs were given moderate budgetary controls, the ability to document changes after the fact, and leeway to make mistakes along the way.

That last part is key when it comes to moving quickly. In security, I’m a huge fan of the “make a decision, any decision, we’ll apologize later if we need to” approach.

Contrast that with your typical company, where most problems are deliberated to death, leaving them unresolved when the security consultants who recommended a fix are called in to come back next year.

Camaraderie can’t be overlooked. You’d be surprised by how many companies think that doing things right means a lack of freedom -- and fun. For them, hatred from co-workers must be a sign that a security pro is doing good work. Nothing could be further from the truth. When you have an efficient security shop, you don’t get saddled with the stresses of constantly having to rebuild computers and servers. You don’t get stressed wondering when the next successful hack will come. You don’t worry as much because you know you have the situation under control.

I’m not saying that working at the most secure companies is a breeze. But in general, they seem to be having more fun and liking each other more than at other companies.

The above common traits of highly secure companies may seem commonsense, even long-standing in some places, like fast patching and secure configurations. But don’t be complacent about your knowledge of sound security practices. The difference between companies that are successful at securing the corporate crown jewels and those that suffer breaches is the result of two main traits: concentrating on the right elements, and instilling a pervasive culture of doing the right things, not talking about them. The secret sauce is all here in this article. It’s now up to you to roll up your sleeves and execute.

Good luck and fight the good fight!

(www.infoworld.com)

Roger A. Grimes
