What ’80s pop can teach us about rocket failure and incident management


Most accidents originate in actions committed by reasonable, rational individuals who were acting to achieve an assigned task in what they perceived to be a responsible and professional manner.

(Peter Harle, Director of Accident Prevention, Transportation Safety Board of Canada and former RCAF pilot, ‘Investigation of human factors: The link to accident prevention.’ In Johnston, N., McDonald, N., & Fuller, R. (Eds.), Aviation Psychology in Practice, 1994)

I don’t just read infosec blogs or cartoons vaguely related to infosec; I also read blogs from “normal” people. One such blog is from a chap called Wayne Hale, who was a Flight Director (amongst other things) at NASA until fairly recently. As a career NASA’ite he saw NASA go from its glory days, through the doldrums, and back to the force it is today. There are a number of reasons I like his blog, but mostly it is because I have loved the idea of space since I was a little kid – I still remember the first space shuttle touching down, watching it on telly and whooping with joy, much to my mother’s consternation and chagrin. The whole space race has captured my imagination, as a small child and as an overweight adult. I encourage anyone to head to his blog, not only for fascinating insider stories of NASA but also for the engineering behind space flight.

What Wayne’s blog frequently shows is one thing: space is hard. It is an unforgiving environment that will exploit every weakness, known and unknown, to destroy you. Even just getting into space is hard. Here is Wayne describing a particular incident the Russians had:

The Russians had a spectacular failure of a Proton rocket a while back – check out the video on YouTube of a huge rocket lifting off and immediately flipping upside down to rush straight into the ground. The ‘root cause’ was announced that some poor technician had installed the guidance gyro upside down. Reportedly the tech was fired. I wonder if they still send people to the gulag over things like that.

This seems like such a stupid mistake to make, and one that is easy to diagnose: the gyro was installed upside down by an idiot engineer. Fire the engineer, problem solved. But this barely touches the surface of root cause analysis. Wayne continues:

better ask why did the tech install the gyro upside down? Were the blueprints wrong? Did the gyro box come from the manufacturer with the ‘this side up’ decal in the wrong spot? Then ask – why were the prints wrong, or why was the decal in the wrong place. If you want to fix the problem you have to dig deeper. And a real root cause is always a human, procedural, cultural, issue. Never ever hardware.

What is really spooky here is that the latter part of the above quote could so easily apply to our industry, especially the last sentence – it’s never the hardware.

A security breach could be traced back to a piece of poor coding in an application:

1. The developer coded it incorrectly. Fire the developer? or…

2. Ascertain that the developer had never had secure coding training, and…

3. The project was delivered on tight timelines and with no margins, and…

4. As a result the developers were working 80-100 hrs a week for three months, which…

5. Resulted in errors being introduced into the code, and…

6. The errors were not found because timelines dictated no vulnerability assessments were carried out, but…

7. A cursory port scan of the application by unqualified staff didn’t highlight any issues.

It’s a clumsy example, I know, but there are clearly a number of points (funnily enough, seven) throughout the lifecycle of the environment that would have highlighted the possibility of vulnerabilities, all of which should have been acknowledged as risks, assessed, and had decisions made accordingly. Some of these may fall outside the direct bailiwick of the information security group, for instance working hours, but the impact is clearly felt with a security breach.
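To make the point a little more concrete, here is a minimal sketch in Python (hypothetical, and not taken from Wayne’s post or any real incident report) of recording that chain of “why” questions, so the analysis is forced past the first convenient answer:

```python
# Hypothetical sketch: a "five whys" style chain for the breach example above,
# recorded so the analysis doesn't stop at "fire the developer".

breach_whys = [
    ("Why was there a breach?", "A piece of poorly written application code"),
    ("Why was the code poor?", "The developer had never had secure coding training"),
    ("Why did the errors creep in?", "80-100 hour weeks on a project with no margin"),
    ("Why were the errors not found?", "Timelines dictated no vulnerability assessment"),
    ("Why did testing miss them?", "Only a cursory port scan by unqualified staff"),
]

for question, answer in breach_whys:
    print(f"{question}\n  -> {answer}")

# The deeper answers are procedural and cultural, not hardware (or a single
# engineer to fire) -- exactly the point of the Proton example above.
```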

A true root cause analysis should always go beyond just the first response of “what happened?”. If in doubt, just recall the eponymous words of Bronski Beat:

“Tell me why? Tell me why? Tell me why? Tell me why….?”


Why do we put brakes on cars? Perhaps not for the reason you think.

Bosch Predictive Emergency Braking System

I have never liked the analogy:

Why do we put brakes on cars? So we can go faster. Therefore we put security controls in place so we can do riskier things.

I mean, I get it, the analogy makes sense, but like many analogies, if we are not careful it is likely to become a little too one-dimensional. We also have brakes on cars to slow down for traffic lights, to ensure we don’t go too fast and run into the back of the car in front, and to stop the car quickly to avoid someone crashing into us. I am sure with a squeeze and a shove we could fit these into an infosec analogy too, but why bother?

I was reminded of this particular analogy, and why I don’t like it, this morning as I read my paper. The headline really resonated with me:

‘Living rooms’ on wheels put drivers at risk

The Times, Monday 23rd February 2015


The article discusses how the increase in technology in cars has actually led to an increase in accidents in recent years. Anti-lock brakes, stability control and the like are creating complacency amongst users, and putting them and others at risk.

If we are not careful we are shifting towards this in our industry. It is of course a good thing to focus on secure coding practices, OWASP, secure by design and so on, because that is as important as a seat belt and an air bag in a car (oops, see how easy it is?!), but if we try to put everything into those particular controls, we are taking more and more responsibility away from the user. By creating an insulated and isolated environment in which they operate, there is no positive/negative feedback loop, no opportunity to learn from mistakes, near misses or even dumb good luck. They are, quite literally, on their own, guided only by what their immediate vicinity is reporting to them. Another quote:

They are as uninvolved in the process as they can possibly be

This could be describing our users and clients, from whom we are removing more and more responsibility when it comes to making sensible, thought-out decisions about basic security. We are removing their perceived responsibilities as they say to themselves “if the system is letting me do this, it must be alright” while downloading malware specifically designed to undermine so-called built-in security. (The quote is actually from Peter Rodger, chief examiner for the Institute of Advanced Motorists, commenting on cars being turned into living rooms.)

Let us continue to understand how mature our security development framework is, let’s observe the OWASP Top Ten, but let’s also continue to establish clear guidelines, education and expectations of our people at the same time. If we don’t, we may be congratulating ourselves a little too early for running a good security programme.

If we do that, we risk going back over a century in time, and putting the cart before the horse, let alone putting better brakes on the car.

(If you want good analogies, however, that can help your people truly understand the information security environment they are operating in, head over to The Analogies Project.)

Securi-Tay IV

I will be spending the end of the week with the Abertay University Ethical Hackers at their annual Securi-Tay conference in Dundee. It’s a great conference, so if you are at a loose end on Friday and in the area, make sure you rock up and say hello to the lovely folks up there!


“Compromise” is not a dirty word


If it wasn’t for the users we could secure the company much more easily.

or

They just don’t get it, we are doing this for their benefit.

We often hear statements like these being made, and sometimes we even utter them ourselves. In fact I daresay they are often made by people in very different support industries, not just information security, but it seems that we harbour these feelings more than most.

Effective security is security that is understood, adhered to and respected. Ineffective security is either too lax, or so tight that individuals do their level best to work around it. They are not working around it because they are subversive elements in our organizations, but rather because it is restricting them from getting their day jobs done; it has become a barrier.

Each organization will have its own unique requirements, and even within that organization unique requirements will come about. The finance and legal teams are likely to require a different level or type of security around their work than a creative or IT team. If you have ever observed a creative team in full flow you will understand that the concept of a “clear desk” policy is not only laughable but also extremely restrictive to the very fundamentals of their craft. That same policy however will be more easily understood and accepted by the aforementioned finance and legal teams.

So in this example do you enforce an organisation-wide clear desk policy? Probably not. It may make sense to have a departmental one, although in some circumstances this would be harder to police. Or you could implement clear desk “zones”, i.e. areas where it is not necessary to have a clear desk because of other measures. The measures may be soft, such as background checks on cleaning staff, or hard, such as supervised cleaning staff.

Variations to blanket policies always cost money, but if you ascertain the potential financial value of the loss you are trying to prevent and compare it to the cost of the measures, you can help your business to understand, adhere to and respect the measure you are proposing.
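As a rough illustration of that comparison, here is a minimal sketch using the common annualised loss expectancy (ALE) framing; the figures and the scenario are invented purely for illustration:

```python
# Illustrative only -- the figures are invented, and ALE is just one common way
# of framing "expected loss versus cost of the control".

def annualised_loss_expectancy(single_loss: float, incidents_per_year: float) -> float:
    """Expected yearly loss: cost of one incident multiplied by how often it happens."""
    return single_loss * incidents_per_year

# Hypothetical scenario: confidential papers left on desks in one department.
loss_per_incident = 50_000      # estimated cost of one disclosure
incidents_per_year = 0.5        # roughly one every two years
control_cost_per_year = 12_000  # e.g. vetted or supervised cleaning staff

ale = annualised_loss_expectancy(loss_per_incident, incidents_per_year)
print(f"Expected annual loss without the control: £{ale:,.0f}")
print(f"Annual cost of the control:               £{control_cost_per_year:,.0f}")
print("Worth proposing" if control_cost_per_year < ale else "Hard to justify")
```

Crude as it is, putting the comparison in these terms is usually far more persuasive to the business than the control’s technical merits.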

This doesn’t just apply to physical security (although it very frequently does!) but also to technical and administrative controls. Policies have to be very carefully written and reviewed by the various stakeholders of your organisation to ensure the right balance is struck. Technical controls also have to strike this balance. Data Loss Prevention (DLP) is a marvellous technology that, when implemented correctly, can reap huge rewards and avoid significant risks, but it is expensive and time consuming to install and run. Who should ultimately make that decision, you or the business? (Clue: it’s not you.)

Don’t be afraid to compromise in your dealings with your organisation. If they disagree with your approach, either they get it and feel the risk is simply the cost of doing business, in which case go off and look at other ways to support them; or they don’t get it, which means you need to do a better job of convincing them of the risk, in which case go off and look at other ways of making your point. A good compromise is made when each party respects and aligns to the other party’s point of view, not when the two are on fundamentally different sides.

Help your business respect and align to the information security ideals you hold dear, do the same for theirs, and you will always get more effective security.


Risk, Rubble and Investment

Originally written and posted October 13th 2014 on the InfoSecurity 2014 Blog (and reiterating a pet message of mine again!).

Risk is a bad thing. Therefore risk needs to be reduced to rubble, or even better to dust and then swept away under the carpet never to be seen again.

This is the attitude that many of us have, and then pass on to our senior leadership when it comes to information security programmes. “Invest £10 million and we will buy technology that will make us safe,” we have often said in the past. “My blinky boxes will soon find your risks and reduce them to nothing!” It should be no surprise to so many in our industry, therefore, that CISO stands for “Career Is So Over”.

What we often fail to appreciate is that the senior leadership and boards of virtually all organizations understand risk far better than we do. They deal with financial, legal, HR and international risk on a regular basis, and know how to take advantage of it to their benefit. Their advisors in these various fields know how to communicate their risks in a way that makes sense to the business, be it financial, reputational or whatever else makes sense in their industry. The leadership do not require specialist knowledge of these areas because the risk is being translated into terms they understand.

The information security industry, however, still often talks in terms of “APTs”, “DLP”, “TLS” and other obscure TLAs* while trying to explain why more money is needed to “secure all the things”. What is the benefit to the business? What is the real risk, in terms everyone can understand? Translating these technical issues and risks into business risks has always been a challenge, and has often resulted in information security being perceived as the “expensive part of IT”, asking for more money with little positive influence on the business.

If you work in a brewery, the ultimate goal of everyone who works there should be to sell more beer. If you work for Oxfam, the ultimate goal is to get aid to those that need it as quickly, effectively and efficiently as possible. If you work in a publicly listed company, the ultimate goal is to make more money for the shareholders. The role of information security within any organization is not exempt from this; security doesn’t get a special pass because it is, well, security. The role of the information security function is to support the ultimate goal of the organization it operates in.

Understand what your ultimate goal is. Focus your strategy on ensuring you are helping to meet that goal. Be willing to compromise in certain areas of security if it helps meet that goal. Ensure your senior leadership understand the risks (in their language, not yours) involved in those compromises. If you don’t get what you want, then move on to the next piece of work that supports your ultimate goals (or be prepared to fight harder and more lucidly for your original cause).

If it was that easy you wouldn’t be reading this, but surely it is easier than the ongoing battle for investment that we ultimately never win anyway?

*Three Letter Acronyms (surely you know that?)


Note: Many of you know I was up for the “Personal Contribution to IT Security” Award at the recent Computing Security Awards. I was (un)fortunately runner-up in this category, but thank you again to all of you who not only may have voted for me but also nominated me in the first place. It was a wonderful evening with good friends from my work and infosec life, and a good excuse to dress up in my best party frock. Here’s to next year!



Not All Risks Are Bad (even the bad ones…)

The very term “risk” often makes people feel uncomfortable, with connotations of bad things happening and a sense that if risk is not minimized or removed then life (or business) becomes too dangerous to continue.

Crossing the road is risky, especially if you live in a busy city, and yet people, young and old alike, do it every day. In fact it is riskier than flying, and yet I would argue that there are more people afraid of flying than of crossing the road. Hugh Thompson of RSA put it very well in his 2011 RSA Conference Europe presentation when he raised the issue of “Sharkmageddon”; more people are killed every year sitting on the beach by falling coconuts than by sharks, but there is an almost universal fear of sharks. We irrationally consider swimming in the sea safer (less risky?) than sitting under a coconut tree.

Risk is an inherent part of our lives, and if we let the realities of risk take control of our business decisions we become the corporate version of an agoraphobic: staying in the safe confines of the environment we know and never venturing out to be active in the outside world. Ultimately we wither and fail, be it as individuals or as a business.
In my experience, one of the most misunderstood approaches to treating a risk is to accept or manage it. Most people are comfortable with mitigating, transferring or avoiding a risk, as these involve some kind of action to deal with it, something we are all familiar with. We fix a problem, give the problem to someone else, or stop doing the thing that causes us the problem in the first place. However, it often feels wrong to simply accept a risk, in essence to do nothing. Although this is not strictly the case, it is essentially how we feel we are dealing with it. You are accepting that there is either nothing you can do, or nothing you are willing to do, to reduce the risk. You are not blindly accepting it at face value, however; rather you are remaining cognisant of the risk as you continue your operational activities. You know it is there as you carry on your day job. These activities, and the very environment you are operating in, can change without notice, making the earlier decision to accept the risk the wrong course of action.

For instance, it may now be cheaper to fix the risk than it would have cost you before, or the highly lucrative contract that made the risk acceptable is now over and the potential financial loss now outweighs the revenue you are bringing in. The reasons for change are often financial, although not always. Your risk appetite may also have reduced, or the industry you are operating in may have become more regulated; all of these examples mean your decision to accept needs to be reviewed.

All risk decisions need to be reviewed regularly, for exactly the reasons given above, but in my opinion it is risk acceptance decisions that should be reviewed most often, as they are the ones made as a result of more transient and changing factors, and the ones that can potentially harm the organisation the most.
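As a small illustration, a risk register entry for an accepted risk might carry a review date that gets checked automatically. This is a hypothetical sketch, with field names, risks and review periods invented for the example:

```python
# Hypothetical sketch: flag accepted risks whose review date has passed.
from datetime import date, timedelta

accepted_risks = [
    {"id": "R-001", "summary": "No DLP on legacy file shares",
     "accepted_on": date(2014, 3, 1), "review_every_days": 180},
    {"id": "R-002", "summary": "Shared admin account on the build server",
     "accepted_on": date(2014, 11, 15), "review_every_days": 90},
]

today = date.today()
for risk in accepted_risks:
    next_review = risk["accepted_on"] + timedelta(days=risk["review_every_days"])
    if today >= next_review:
        print(f"{risk['id']} is overdue for review: {risk['summary']}")
```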

It’s a bit like keeping a tiger as a pet – it looks awesome and maybe even draws admiring glances from many, but if you forget you locked it in your bathroom overnight you are going to have a very big surprise when you get up to go to the toilet in the middle of the night. You can’t accept risks without truly understanding them in the first place.