Mobile security Q&A: Securing the mobile minimum viable app

12.02.2016
As enterprises struggle to keep up with internal demand for mobile apps, more are turning to speedier development workflows, such as the Minimum Viable Product (MVP), which essentially calls for mobile development teams to focus on the highest return on effort relative to risk when choosing which apps to develop and which features to build within them. That is: focus on apps and capabilities users are actually going to use, and skip those apps and features they won't.

Sounds simple, but what does that mean when it comes to security? We know application security is one of the most important aspects of data security, but if software teams are moving more quickly than ever to push apps out, security and quality assurance need to keep pace.

The flip side is that minimum apps and features could mean less attack surface. To get some answers on the state of mobile app security and securing the MVP, we reached out to Isaac Potoczny-Jones, research lead for computer security at Galois, a computer security research and development firm.

Potoczny-Jones has been a project lead with Galois since 2004 and is an active open source developer in cryptography and programming languages. He has led many successful security and identity management projects for government organizations, including projects for the Navy, DOD, and DHS; federated identity for the Open Science Grid (DOE); mobile password-free authentication (DARPA); and anti-forgery authentication in hardware devices (DARPA).

Please tell us a little about Galois and your role there in security.

Galois is a computer security research and development firm out here in Portland, Ore. We do a lot of work with the US federal government. The company has been around since 1999, and I've been here for 11 years now. I think a lot about this topic. I really appreciate, and use myself, the lean methodologies for product development, and I love the lean startup approach. I also do security analysis for companies, so I've gone into a number of start-ups, too, looked at the security profile of their products or their infrastructure, and helped them develop a security program. I've definitely seen both sides of the issue as far as where MVP thinking leads you.

What are you seeing within organizations today when it comes to mobile security?

There's definitely a lot more development happening in mobile. The best practices in mobile aren't as well developed as best practices for the web, though that's getting a little bit better. Consider HTTPS. It's something that's relatively straightforward on the web, yet people were doing it wrong on mobile for years before anyone really noticed. There's a lot you can get wrong with HTTPS, and they were getting it all wrong. As people move over to mobile, they are definitely having to relearn some of the lessons we learned over the years.
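
To make that concrete: one classic way of getting HTTPS wrong on mobile is silencing certificate errors with a trust-all TrustManager or a no-op hostname verifier. Below is a minimal sketch of the opposite approach, certificate pinning with the OkHttp library on Android; the hostname and pin value are placeholders, not anything from the interview.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Pin the server's certificate so that even a compromised or rogue CA
// can't be used to intercept traffic. The hostname and pin hash below
// are placeholders; real pins are derived from your server's own
// certificate chain.
val certificatePinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()

// Crucially, don't override hostname verification or install a
// trust-all TrustManager to make development errors go away: that is
// exactly the "getting it all wrong" failure mode described above.
```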

Password security is another one of those. People began to make passwords on websites a lot more robust; you can't just have a four- or five-letter password anymore on most websites. But because mobile devices are so difficult to type passwords into, a lot of sites have relaxed those password rules. In reality, the threat is just the same as it always has been.
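
One way to read that advice is: enforce the same password policy regardless of which client the user signs up from. A minimal sketch follows; the specific thresholds are illustrative assumptions, not rules Potoczny-Jones prescribes.

```kotlin
// A minimal sketch of one password policy applied uniformly, whether the
// user signs up from the web or from a mobile device. The thresholds are
// illustrative, not recommendations from the interview.
fun meetsPasswordPolicy(password: String): Boolean {
    if (password.length < 12) return false  // length is the main defense
    val characterClasses = listOf(
        password.any { it.isUpperCase() },
        password.any { it.isLowerCase() },
        password.any { it.isDigit() },
        password.any { !it.isLetterOrDigit() }
    )
    return characterClasses.count { it } >= 3  // require some variety
}

fun main() {
    println(meetsPasswordPolicy("abcd"))                    // false: too short
    println(meetsPasswordPolicy("longer-Passw0rd-example")) // true
}
```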

What impact do you see from the minimum viable product, or minimum viable app, trend?

On the MVP front, there's a very fascinating challenge with security, because security is a non-functional requirement. I tend to like the lean scrum methodology. I don't know if you're familiar with that one, but I can use it as an example; they're all kind of similar in some ways. They emphasize features, they emphasize things the users can see. They emphasize testing out ideas and getting them into the market, testing them, gathering metrics about how effective they are, and using that as feedback into the product. That's a really good idea about how to develop a product. But even the terminology itself, minimum viable product, really emphasizes minimizing.

It emphasizes getting rid of what you don't need. Those things together, minimizing things and really having an emphasis on what the user can do and see, make it so that non-functional requirements are kind of an afterthought. You have to squint to figure out how to apply non-functional requirements like security to a lot of these processes, like scrum.

I would imagine that with an MVP, teams want to move the app out as quickly as possible, so they don't want to spend a lot of time threat modeling and going through a lot of additional process, because that all adds to development time. So there seems to be a natural friction between the goals of MVP and good security.

It's absolutely a friction. It's challenging because security is mostly invisible. That means good security and bad security look exactly the same, until something goes wrong. Security is really visible when something is broken or somebody gets hacked, and then you make the news. Then it kind of blows up in your face. We've seen this a few times. I don't know how many start-ups it's killed, it's probably killed a few, but it's definitely cost a lot of start-ups when their first major news coverage is that they were hacked.

What are some ways organizations can ease that tension when it exists? Is there a way to bring security in so it's not too obtrusive? Is there a way to separate out apps by data type, and possibly greenlight MVP apps that don't touch more sensitive data, while giving a closer look at those apps that do?

I think that's a good approach. As you point out, one way is to say, let's see if we can do an MVP with data that's not as sensitive, so you won't have to focus as strongly on security. Nowadays, though, that's a little more challenging. Even for the minimum things you do, you will need security. It kind of doesn't matter what your data is; you will get targeted and you will get attacked, even if it's just by the automated bots that run around the Internet attacking everything. They'll use your infrastructure for sending spam at the very least, if that's all they can do. To me, the approach is that you have to implement some of the industry best practices, such as the OWASP Top 10. You have to believe that security is an important part of a minimum viable product to even begin to get these user stories in there.

What I like to tell people is to think about user stories, even negative user stories, things like: as a user, I don't want to see my personal information leaked on the Internet because I've shared or stored something sensitive in your app or your website. I don't want to see it in the hands of people who will use my private information against me.
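
A negative user story like that can be turned into an executable acceptance check. The sketch below is a hypothetical illustration, with invented names, of checking that one user's stored data is never served to another user.

```kotlin
// Hypothetical acceptance check derived from the negative user story above:
// "as a user, I don't want my sensitive data visible to other users."
data class Record(val ownerId: String, val body: String)

// Return the record only to its owner; deny cross-user access.
fun fetchRecord(requesterId: String, record: Record): Record? =
    if (requesterId == record.ownerId) record else null

fun main() {
    val secret = Record(ownerId = "alice", body = "sensitive note")
    check(fetchRecord("alice", secret) != null) { "owner should see own data" }
    check(fetchRecord("mallory", secret) == null) { "others must be denied" }
    println("negative user story holds")
}
```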

That sounds like something for which a security team could put together a guide, or put in place a checkpoint on whether an app can go through. For instance, if certain conditions are true for the app, or any one of them is true, the app has to go through a security review. If not, it's OK to take a security-light approach within certain guidelines.

That'd be perfect. Typically these lean approaches have at least some kind of testing methodology built in, or acceptance testing. Or, as some of them say, "What's your definition of 'done'?" The first step is just saying, "We're going to include security in these definitions of done." Once you've at least penetrated that level, which I don't think a lot of people have, then you're going to at least do the right things. You're going to start to build it either into the user stories or into the acceptance testing.
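
One way to picture that kind of checkpoint is as an explicit gate over an app's characteristics. The sketch below is hypothetical; the conditions and names are invented for illustration, not a checklist from the interview.

```kotlin
// Hypothetical review gate: route an app to a full security review when it
// touches sensitive data or capabilities; otherwise allow the lighter,
// guideline-based path discussed above. Field names are invented.
data class AppProfile(
    val handlesPersonalData: Boolean,
    val storesCredentials: Boolean,
    val callsInternalServices: Boolean
)

fun needsFullSecurityReview(app: AppProfile): Boolean =
    app.handlesPersonalData || app.storesCredentials || app.callsInternalServices

fun main() {
    val marketingApp = AppProfile(false, false, false)
    val hrApp = AppProfile(true, true, true)
    println(needsFullSecurityReview(marketingApp)) // false: security-light path
    println(needsFullSecurityReview(hrApp))        // true: full review required
}
```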

But you can't leave it to just be at the end of the process. If you leave security acceptance testing toward the end, naturally your schedule is going to slip. Then you'll get to the security testing and find there's a lot more work to do. Then you'll be faced with the unfortunate decision of either fixing things and letting your schedule slip, or letting something go out the door that's not secure.

The real tragedy is when a system is inherently insecure, built in a really insecure way that requires major rework, because you didn't think about security at the beginning. With security, a lot of things are easy to add at the end, but sometimes you run into systems that are just kind of broken from the foundation. As with any of these things, the later you catch it, the costlier it's going to be.

What are some indications organizations could look for that would suggest they're doing this right?

If you're looking at your to-do list, whatever that to-do list is, whether it's a list of stories or a big list of tasks and action items, you should be recognizing some security issues in there as you go. You'll get to a point where you're developing something, and one of your developers hopefully will say, "Well, look, our system is vulnerable to whatever cross-site request forgery or cross-site scripting attack," which any system that isn't designed to protect against them is going to be.
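
For instance, a typical defense against the cross-site request forgery attack he mentions is a per-session token validated on every state-changing request. Here is a minimal, framework-agnostic sketch of that technique:

```kotlin
import java.security.MessageDigest
import java.security.SecureRandom
import java.util.Base64

// Generate a random anti-CSRF token to store in the user's session and
// embed in each form the server renders.
fun newCsrfToken(): String {
    val bytes = ByteArray(32)
    SecureRandom().nextBytes(bytes)
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
}

// Reject any state-changing request whose submitted token doesn't match
// the session's token. MessageDigest.isEqual performs a constant-time
// comparison, avoiding a timing side channel.
fun isValidCsrfToken(submitted: String?, sessionToken: String?): Boolean {
    if (submitted == null || sessionToken == null) return false
    return MessageDigest.isEqual(
        submitted.toByteArray(Charsets.UTF_8),
        sessionToken.toByteArray(Charsets.UTF_8)
    )
}

fun main() {
    val sessionToken = newCsrfToken()
    println(isValidCsrfToken(sessionToken, sessionToken)) // true: same token
    println(isValidCsrfToken("forged-value", sessionToken)) // false: rejected
}
```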

If you look at your bug list, you should see that pop up there at some point. Some of these security issues will come up during development, because nothing will be perfect. That'll be an early indicator.

If you don't have anything, if you look at your bug list and you don't see anything, if your developers aren't actively talking about security, saying things like, "We're going to have to add some tasks for security," or, "Well, I want to add that feature for you, but that's going to have an impact on security," if you're not hearing it as part of the conversation, then there's going to be a problem.

(www.csoonline.com)

George V. Hulme
