Disclaimer: My comments below are based upon quotes from both Twitter and The Times of London on the UK’s TalkTalk breach; as a result the subsequent investigation and analysis may find that some of the assertions are in fact incorrect. I will post clarifying statements should this happen to be the case.
I am not normally one to pick over the bones of company A or company B’s breach, as there are many people more morbid and qualified than me to do so, and I also hate the feeling of tempting fate. All over the world I would guarantee there are CISOs breathing a sigh of relief and muttering to themselves/psychoanalysts/spouses “thank God it wasn’t us”. Bad things happen to good people, and an industry like ours that tends to measure success on the absence of bad things happening is not a great place to be when those bad things appear to happen far more frequently than ever before.
So it took me a while to decide if I should write up my feelings on TalkTalk’s breach, although I had Tweeted a few comments which were followed up on.
Initially I was shocked that people are still using the same password across so many crucial accounts. After a ten-minute rant in the car about it with my wife, she calmly (one of the many reasons I married her) explained that not everyone thinks like me as a security professional, and that I should remember my own quote of “convenience eats security for breakfast”. Having calmed down a little, I was then shocked by something else. That something else was the TalkTalk CEO, Dido Harding, appearing on national television looking clearly exhausted (I can only imagine how little sleep she had been getting over the last few days) and giving out unequivocally bad advice such as “check the from address on your emails; if it has our address it is from us”. Graham Cluley’s short analysis was spot on here:
As if TalkTalk’s customers hadn’t gone through enough, they are then being given shoddy advice from someone in a supposed position of trust that is going to put them at even more risk. The scammers and phishers must have been rubbing their hands with invisible soap and glee as they prepared their emails and phone calls.
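To see why “check the from address” is such shoddy advice, consider a minimal sketch (the addresses below are purely illustrative): the From header of an email is just a text field the sender fills in, with no verification whatsoever, so a phisher can claim to be anyone.

```python
# A minimal sketch of why the From header proves nothing: it is an
# arbitrary, sender-controlled text field. Addresses here are illustrative.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "support@talktalk.co.uk"   # forged - nothing checks this
msg["To"] = "victim@example.com"
msg["Subject"] = "Important account update"
msg.set_content("Please confirm your bank details...")

# The message happily carries whatever From address the sender chose.
print(msg["From"])  # support@talktalk.co.uk
```

Sender-authentication schemes such as SPF and DKIM exist precisely because the header itself cannot be trusted, and ordinary recipients have no easy way to check them.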
Now, it seems the attack did not disclose as much information as was first thought, which is good news. Credit card numbers were tokenised and therefore unusable, so no direct fraud could be carried out there (dependent upon the form of that tokenisation, which I am sure there will be more details on in the coming months). Bank details were, however, disclosed, but again there is a limited amount of damage that can be done there (there is some, I acknowledge, but it takes time and is more noticeable… a discussion for another time). Here is Problem Number One, though: with Harding’s poor advice, many people subsequently (and allegedly) fell for phishing attacks through either phone calls or emails, and lost hundreds of thousands of pounds. TalkTalk’s response? Credit monitoring.
And then we move to Problem Number Two: why weren’t the bank details stored safely? Why were they not encrypted? Armed with the knowledge of customers’ bank account details, scammers can make a much more convincing case that they are actually from TalkTalk, especially if other account information was also lost (time will tell). TalkTalk’s response?
So TalkTalk was technically compliant? Shouldn’t this kind of thinking be consigned to the same mouldering scrapheap where “we’ve always done it this way” and “we’re here to secure the business, not help it” lie? I sincerely hope that this episode will at the very least highlight that “compliance” and “security” are two very different things, and that the former most certainly doesn’t automatically result in the latter. What has transpired is the perfect storm of a breach, unforgivably poor advice, and complacency based upon compliance, and it has resulted in a lot of pain for a lot of people, involving large amounts of money.
If an example like this does not spur you into doing more as regards your own security awareness activities, then please go back to the beginning and start again. Why? I have been accused of “victim blaming” somewhat (see the above Tweets), but if individuals had an ounce of sense or training they would have been more careful when responding to email supposedly from TalkTalk and wouldn’t have fallen for the subsequent scams. I will leave the last word to Quentin Taylor, and as you carry on with your internet residencies, don’t forget you need to wear protective clothing at all times.
Most accidents originate in actions committed by reasonable, rational individuals who were acting to achieve an assigned task in what they perceived to be a responsible and professional manner.
(Peter Harle, Director of Accident Prevention,Transportation Safety Board of Canada and former RCAF pilot, ‘Investigation of human factors: The link to accident prevention.’ In Johnston, N., McDonald, N., & Fuller, R. (Eds.), Aviation Psychology in Practice, 1994)
I don’t just read infosec blogs or cartoons that are vaguely related to infosec; I also read other blogs from “normal” people. One such blog is from a chap called Wayne Hale, who was a Flight Director (amongst other things) at NASA until fairly recently. As a career NASA’ite he saw NASA from its glory days through the doldrums and back to the force it is today. There are a number of reasons I like his blog, but mostly I have loved the idea of space since I was a little kid – I still remember the first space shuttle touching down, watching it on telly and whooping with joy, much to my mother’s consternation and chagrin. The whole space race has captured my imagination, both as a small child and as an overweight adult. I encourage anyone to head to his blog for not only fascinating insider stories of NASA, but also the engineering behind space flight.
What Wayne’s blog frequently shows is one thing: space is hard. It is an unforgiving environment that will exploit every weakness, known and unknown, to destroy you. Even just getting into space is hard. Here is Wayne describing a particular incident the Russians had:
The Russians had a spectacular failure of a Proton rocket a while back – check out the video on YouTube of a huge rocket lifting off and immediately flipping upside down to rush straight into the ground. The ‘root cause’ was announced that some poor technician had installed the guidance gyro upside down. Reportedly the tech was fired. I wonder if they still send people to the gulag over things like that.
This seems like such a stupid mistake to make, and one that is easy to diagnose: the gyro was installed upside down by an idiot engineer. Fire the engineer, problem solved. But this barely touches the surface of root cause analysis. Wayne continues:
better ask why did the tech install the gyro upside down? Were the blueprints wrong? Did the gyro box come from the manufacturer with the ‘this side up’ decal in the wrong spot? Then ask – why were the prints wrong, or why was the decal in the wrong place. If you want to fix the problem you have to dig deeper. And a real root cause is always a human, procedural, cultural, issue. Never ever hardware.
What is really spooky here is that the latter part of the above quote could so easily apply to our industry, especially the last sentence – it’s never the hardware.
A security breach could be traced back to a piece of poor coding in an application:
1. The developer coded it incorrectly. Fire the developer? Or…
2. Ascertain that the developer had never had secure coding training, and…
3. The project was delivered on tight timelines and with no margins, and…
4. As a result the developers were working 80-100 hours a week for three months, which…
5. Resulted in errors being introduced into the code, and…
6. The errors were not found because timelines dictated no vulnerability assessments were carried out, but…
7. A cursory port scan of the application by unqualified staff didn’t highlight any issues.
It’s a clumsy example, I know, but there are clearly a number of points (funnily enough, seven) throughout the lifecycle of the environment that would have highlighted the possibility of vulnerabilities, all of which should have been acknowledged as risks, assessed, and had decisions made accordingly. Some of these may fall outside the direct bailiwick of the information security group, for instance working hours, but the impact is clearly felt with a security breach.
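The chain above can be sketched mechanically as a list of “whys”, each answering the one before it (a toy model, not a real RCA tool): the point is that the first answer is a symptom, and the root cause only appears once you walk to the end of the chain.

```python
# A toy sketch of the seven-why chain above: each entry answers "why?"
# for the entry before it. The findings text mirrors the list in the post.
findings = [
    "Security breach traced to poor coding in an application",
    "The developer coded it incorrectly",
    "The developer had never had secure coding training",
    "The project was delivered on tight timelines with no margins",
    "Developers were working 80-100 hour weeks for three months",
    "Errors were introduced into the code",
    "Timelines dictated no vulnerability assessments were carried out",
    "A cursory port scan by unqualified staff found no issues",
]

def root_cause(chain):
    """The deepest 'why' uncovered so far - procedural, never hardware."""
    return chain[-1]

def symptom(chain):
    """The first answer, which is where a lazy investigation stops."""
    return chain[1]

print(symptom(findings))
print(root_cause(findings))
```

Firing the developer fixes `symptom(findings)`; only addressing the later entries fixes the process that produced the bug.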
A true root cause analysis should always go beyond just the first response of “what happened?”. If in doubt, just recall the immortal words of Bronski Beat:
RSA has had a tough few years; they were the subject of a high-profile phishing attack in March 2011 resulting in the loss of information related to their SecurID product. They denied it was an issue until three months later, when information gained from that attack was used against other companies, including Lockheed Martin, and they subsequently had to replace a large number of the tokens.
In September this year they recommended that customers of their BSafe product stop using the built-in default encryption algorithm because it contained a weakness, in the form of a backdoor, that the NSA could exploit, leaving data vulnerable to interception and reading. How very open and forthright of RSA, I thought at the time. Despite the potential damage they might be doing to their brand by giving this information out freely, they were doing so in their customers’ interests and at the same time offering secure alternatives. It reminded me of the early nineties and the pushback against the Clipper chip, with RSA at the forefront, protecting client interests and pushing back against the spooks of the three-letter agencies of the USA. Here is what D. James Bidzos said at the time:
“We have the system that they’re most afraid of,” Bidzos says. “If the U.S. adopted RSA as a standard, you would have a truly international, interoperable, unbreakable, easy-to-use encryption technology. And all those things together are so synergistically threatening to the N.S.A.’s interests that it’s driving them into a frenzy.”
Powerful stuff. The newly formed Electronic Frontier Foundation would have been proud.
Now this is where it gets interesting, and has raised the hackles of many in the Twittersphere and internet echo chambers. A few days ago it was revealed that the real reason RSA used a flawed product for so many years was because the NSA paid them to. It wasn’t a huge amount of money, although it possibly helped save RSA’s BSafe division, which was struggling at the time.
Businesses change. Leadership changes. Market forces steer a company in a different direction to one degree or another. To my mind, though, to deliberately weaken your own product for financial gain is extraordinarily unwise. By taking the money, RSA have declared that profit comes above patriotism, whatever your view of patriotism is. Had they taken no money at all, there would be a good defence that the decision was taken in the national interest, to work harmoniously with the governmental agencies that protect the USA from danger. Unfortunately, organisations that have relied on RSA’s products to secure their data have been let down simply to make a fast buck.
In October this year Art Coviello spoke about “Anonymity being the enemy of Security” in his keynote at RSA Europe. That statement takes on a very different complexion now.
The response has been fairly unanimous, but here is one that got me thinking about my relationship with RSA:
I personally wouldn’t go this far, as I go to network with friends, peers and colleagues, as well as to listen to folks from the industry talk and present; I don’t necessarily go to listen to RSA as such. However, this kind of reaction is going to have an impact on RSA that is likely to be felt for a number of years to come. Most security people I know are somewhat distrusting in the first place (which is often why they are in security!). These revelations are going to have an impact on both their mainstream business and their conference business, so often seen as the gold standard of conferences globally.
If the last few years were tough for RSA, what are the next few years going to be like for a giant in our industry?
I have just returned from 44CON, a technical infosec conference held in London, now in its third year. As with any multi-day conference, you come back tired but educated, and happy but deflated that it is over. A speaker party, a conference after-party, two gin o’clocks, a conference bar and some fabulous presentations make for an exhausting two days.
Organisationally it is extremely well run; the crew are friendly, knowledgeable AND efficient (it’s rare to have all three), the venue is of a high quality, the sponsors are low-key but available, SpeakerOps is excellent, and with the exception of myself and two others the attendees are amazingly smart and technical. I was able to chat to a number of the speakers at a reception on the Wednesday night, and the level of detail they went into for their research was simply mind-blowing; one person even decided to write his own 3D presentation language instead of using PowerPoint or Keynote, just for this one presentation!
I spent the first day mostly at the InfoSec track rather than the technical track, learning about “Security lessons from dictators in history” and “Surviving the 0-day – reducing the window of exposure”, both very good. I did attend a technical talk in the afternoon along with two friends (the two mentioned above!), and to be honest the speaker could have been talking a different language; to make it worse, he apologised at the end for not making it technical enough! It was a fabulous talk though, wonderfully presented, and let down only by my lack of technical knowledge of the subject.
As a backup speaker for the infosec track I thought I was off the hook, as nobody had dropped out, but it was then announced that there would be a “hidden track” of talks, of which mine was one. This hidden track would take place at an undisclosed location, and you had to talk to vendors and other con-goers to find out where it was. It was at this point I excused myself from the after-party to add a little more content to my slides.
The following morning, after the opening presentation, I was second up in the hidden track. My talk was entitled “Sailing the C’s of Disaster Planning”, and the main thrust of it was a simple “framework” that allows you not only to test the effectiveness of your disaster/business continuity planning, but also to communicate the key elements of the plan upwards to the board and down through the key players in the organisation. This was the first time I had given this talk, and to be honest some of the ideas have not quite been fleshed out, although the concept is sound. It was well received by about 20 people (not bad given it was a hidden track) and there were some good questions and conversations afterwards. Feedback received later in the day was both encouraging and useful in highlighting areas that need to be improved.
A copy of the slides is above; if you take a look at them, please provide feedback as always (caution: 12.5MB PDF).
I will be using this blog to flesh out those ideas and gather feedback over the next couple of months, firstly by looking at the high-level concepts of this approach, and then subsequently breaking down its five elements into further blog posts.
The remainder of the second day at 44CON was taken up with more talks, as well as a bit of filming with my two colleagues, the two unknown hosts you could say, for something we hope to release in the next few weeks.
I would like to thank Steve and Adrian and the entire crew of 44CON for an excellent event, and I am certainly coming back for next year, at a new, larger yet undisclosed location.
It is certainly not the case that I think there should be no procedural documentation, or even detailed documentation, as long as it is in the right place and appropriate to the people requiring it. That said, I think the default approach to any implementation of crisis, incident or disaster recovery plans leads to a vast amount of needless writing. Having been involved in a programme to simply document what a particular team does, with over thirty documents being created from scratch, I can testify to the futility of that approach. Hence I propose a two-tier approach to writing these plans up.
Tier two documentation is that which is required by the functional team; in the case of disaster recovery it is the detailed documentation of how to fail over applications and services. With crisis management it may be evacuation plans and the roles and responsibilities of fire wardens, and with incident management it might be an escalation and first-fix path of procedures. This is important because, in many of these cases, the people involved on the ground are often on twenty-four-hour shift patterns and early in their careers, or even volunteers (fire wardens etc.), and through no fault of their own have less incentive to fully memorise or become proficient in activities that might never happen on their shift. They need a reference document, something they can refer to when their pulse is pounding and their heart racing in the middle of a crisis. I should know; I was that soldier in my first job out of university!
However, there is a group of people who simply can’t be told to have documentation to hand when the time comes, or even to memorise roles and responsibilities: the senior leadership who actually make many of the critical decisions during a crisis. What is required here is the ability to Communicate and Collaborate very quickly (optimally within just a few minutes of the crisis being recognised), and then to have the capabilities at hand to establish rigorous Command & Control. This approach applies to most organisations (except perhaps behemoths like IBM or TCS, where different segments of the organisation could operate like this independently) where the input of most, if not all, of the C-level execs is required.
These execs need to be involved in a crisis no matter what the subject, because what they are good at is synthesising information from a variety of sources and making decisions quickly, effectively and in the best interests of the company and its people.
Some pre-requisites to this approach though:
- A recognised approach to define the severity of a crisis prior to declaration.
- A mechanism of simultaneously contacting multiple people through redundant channels in a matter of seconds of a crisis being declared.
- A series of very simple yet effective steps for the crisis team to follow.
- The ability to manage a “crisis room” either real or virtual at no notice.
- The recognition that a crisis is by its very nature flexible, and therefore understanding you will not know all the facts from the outset (the “fog of war” effect).
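The first prerequisite above can be sketched in a few lines of code. This is a minimal illustration under assumed thresholds and level names of my own invention, not a standard model: the point is simply that severity is agreed mechanically in advance, so nobody is inventing criteria mid-crisis.

```python
# A minimal sketch of a pre-agreed severity model for declaring a crisis.
# The levels, indicators and thresholds are illustrative assumptions only.
SEVERITY_LEVELS = {
    1: "Monitor - handle within the functional team",
    2: "Incident - escalate to incident management",
    3: "Crisis - contact the crisis team through all redundant channels",
}

def classify(customer_impact, data_loss, media_attention):
    """Map three yes/no indicators to a severity level, 1 to 3."""
    score = sum([customer_impact, data_loss, media_attention])
    return max(score, 1)  # no indicators still means "monitor", not silence

level = classify(customer_impact=True, data_loss=True, media_attention=False)
print(level, SEVERITY_LEVELS[level])
```

In practice the indicators would be richer and the thresholds debated in advance, but the design point stands: the classification is decided before the crisis, so declaring one takes seconds, not a committee meeting.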
I will investigate this in more detail in a later article, but for the time being, the main question anyone should ask themselves when preparing crisis plans is “how can I simplify this further?”.