(Originally posted on the Iron Mountain Information Advantage Blog, November 20 2013.)
Leaving things on the train, in a restaurant, or in fact anywhere, is an unpleasant fact of life for many of us. I would guess that almost all the readers of this blog have at some point left their keys, wallet, shopping, hat, gloves, children, scarf or phone somewhere or other. On occasion, such lapses in concentration can be upsetting, costly or embarrassing, and in some rare instances even dangerous. But in most cases what we leave behind is either easily replaceable (gloves), insured/covered (bank cards) or worth the cost to change and replace (keys). It’s very rare that we leave and lose something irreplaceable (presumably you found the kids!). This is because the items we treasure often have significant intrinsic and/or emotional value. A good example would be family heirlooms, passed down from generation to generation; we treasure them and therefore take care to protect them, storing them in a safe (or at least a safe place) to be taken out only on special occasions.
What about leaving data somewhere? It wasn’t so long ago that civil servants and the MOD were criticised frequently in the media for leaving highly sensitive and valuable data exposed in public places. Rarely, it seemed, did a day go by without the Daily Mail bemoaning the inability of the public sector to protect our data. Headlines called for heads to roll. And yet, invariably, these were just the kind of simple, human mistakes that every one of us has made in one way or another. These days, however, the vast majority of data is (or at least should be) encrypted, both when it is on the move and when it’s at rest. Consequently, the loss or theft of encrypted data may now raise fewer eyebrows.
Printed matter, however, is another thing entirely. You can’t encrypt paper documents, and paper is very difficult to secure during transport without somehow physically attaching it to your person. Taking sensitive documents from one location to another, so often a necessity, quickly becomes a thing of peril. Conceptual drawings, designs, technical drawings, mock-ups and the like will often need to be taken to a client site or a manufacturer, and sometimes cannot be sent electronically. After a successful pitch and a few celebratory drinks afterwards, those documents could all too easily be left on the night bus to Neasden, unprotected and full of intellectual property and sensitive information. A breach like that can so easily turn a night of celebration into a morning of embarrassment and apologies, followed by the inevitable search for new clients.
Protecting printed documents is difficult, probably more difficult than protecting electronic information, and yet we seem to put all of our efforts into the very latest and best encryption, protected USB keys, and expensive data loss prevention (DLP) initiatives. It’s easier to put in place a technology, especially a “transparent” one, than it is to change behaviours.
I would suggest that the information security community needs to address this disparity; the paperless office hasn’t transpired, the digital documents are secured, but paper has been left behind. How can we address this without handcuffing briefcases to people? As usual, it has to come down to awareness: we need to drive home the message that paper should be transported with the same care as electronic records, observing sensible procedures such as ensuring there are always two people present when travelling with paper (to act as more of a reminder than as a physical protection), or even only couriering it with a specially selected and reviewed vendor.
I don’t want to turn the Chief Information Security Officer into a George Smiley type character, but I do want all of our sensitive records to be treated with the same level of protection irrespective of format.
Rather than even attempt an end-of-show round-up that others have done far more successfully than me, here are the five things that I remembered the most from the week:
3M Visual Privacy
I still think 3M produce the best privacy filters for monitors, but I have been waiting a long time for technology to catch up and remove the unsightly piece of plastic, so easily left behind at home, in favour of a solution built into the screen itself. Whilst unfortunately I didn’t see that, one of the product managers assured me that this is exactly what the boffins at 3M are currently working on. This is going to be a huge step towards universal and transparent (forgive me) visual security for people using laptops in public places.
3M also surprised me by demonstrating a piece of software they have designed as well; the known problem with privacy filters is that they only protect you from people looking at your screen from your left or right. From directly behind you, they can easily see your screen. The software uses the built-in webcam to recognise the user’s face, and if another face appears in the background looking at the screen, it pops up a warning to the user and blurs the screen. To be honest it was a little clunky when I saw it, and it is currently only being developed for Windows, but this is exactly the sort of tool that people working with sensitive information need in order to “watch their backs”, almost literally. I hope they continue to refine the software and expand it to all other major platforms.
Security Bloggers Meetup
RSAC USA sees the annual meet-up of the Security Bloggers Network, so I was very excited to be able to attend this year and witness the awards show and a great deal of silliness and nonsense (to wit, the “bald men of InfoSec” picture, for one). I managed to meet for the first time a whole bunch of people that I have either conversed with or followed, some of whom I have very much admired. No name dropping, I am afraid, as there is too much of that later on in this post, but one thing I did take away was that there is a very valid desire to harmonise the North American and European Security Blogger Awards moving forwards, which can only be a good thing and build the international blogger network further. In fact, you can now nominate for the EU Security Bloggers awards here.
SnoopWall and Miss Teen USA
It wouldn’t be a security conference without some kind of booth babe furore, and this one was no different. Although the presence of booth babes has dramatically reduced over the last few years, there were still a few vendors insisting on using them. And then we thought we had hit a new low with the presence of Miss Teen USA, Cassidy Wolf, at the SnoopWall booth in the South Hall. Condemnation was rapid and harsh. BUT WAIT… THERE’S MORE TO THIS STORY THAN MEETS THE EYE! After I retweeted my feelings about a teen’s presence at a conference that could best be described as a recovering alcoholic when it comes to misogyny, I was contacted by Patrick Rafter, the VP of Marketing of SnoopWall.
They have partnered with Cassidy to promote privacy amongst teens, in complement to their product that detects the misuse of, for instance, the webcam on your computer or your phone. For those that may not know, Cassidy was the victim of blackmail by an ex-classmate who hacked into the webcam in her bedroom, took photos and then demanded more pictures. It goes without saying she stood up to the blackmailer, and has since made privacy one of her “causes” during her tenure as Miss Teen USA. Was having her at their booth at RSA a little misjudged? Yes. Do their cause and campaign (and software, for that matter) actually have very good intentions? Absolutely. I chatted with Patrick a day later, and while he acknowledged how Cassidy’s presence could have been misinterpreted, he strongly defended her presence and her intentions. I honestly found it laudable. Hopefully over the next few years, as the industry finally sorts out its booth babe problem, people like me won’t be jumping to the wrong conclusions as we assume the worst.
The Thomas Scoring System
A few months ago I posted about Russell Thomas’ approach to risk management. I had the good fortune to meet with Russell at the Security Bloggers Meetup and chatted in depth about his approach to measuring risk consistently. He has turned this idea into a very practical approach via an Excel spreadsheet, addressing a point I made in my earlier review; this is important because without a way to implement it at a very practical level, it remains just a theory. The following day Russell was kind enough to walk me through how to use the system in practical terms, and I am going to be trying it out in my day job as soon as possible. I would urge you to take a look at the Thomas Scoring System, as I strongly believe it is a great way of bringing metrics together in a meaningful way.
Gene Kim & The Phoenix Project
I was fortunate enough to have been introduced to Gene Kim, the founder of Tripwire, author, DevOps enthusiast and all-round genius/nice guy, a few months ago, and we had chatted a couple of times over Skype. (Gene is very generously offering me his guidance around writing a book and his experiences publishing it; yes, you heard it here first, folks, I intend to write a book!) Knowing he was at RSA I was able to seek him out, and I can now say I have met one of my InfoSec heroes. He is a genuinely charming, funny and generous guy, and he was good enough to sign a copy of his book, The Phoenix Project, as well as allow me to get a selfie with him. I would strongly encourage everyone in this field, as well as many of those outside it, to read The Phoenix Project, as it changed the way I looked at the role of InfoSec in a business, and that wasn’t even the main thrust of the book.
It has taken me nearly a week to recover from RSA, but despite the scandal and boycotts and minor demonstrations it was an excellent conference, as much for the presentations as the “hallway track”. As always, my thanks to Javvad for being my conference wing man again.
Now it is back to real life.
Last Thursday, 20th February, saw me take a day off work to attend the Enterprise 451 Information Security Executive Summit and also present there in the afternoon. Although I didn’t make it until lunchtime (due to meetings about another conference later this year), it was still a cracking afternoon, with plenty of opportunities to chat to the analysts, see some of their latest research and also have roundtable discussions with the vendors.
Often these roundtables can be quite hard work, as the vendors can pitch quite hard to get their money’s worth from sponsoring the day, but the ones I attended were handled very well and actually put the focus on the attendees’ discussions rather than the products. Of note in this regard was Palo Alto, and I enjoyed the vigorous discussion, despite it running over by 10 minutes as we all rallied to make our own points!
I presented on “Sailing the C’s of Disaster Planning”, a heavily rewritten and much improved presentation from its first outing at 44CON in September. I have noticed a step change in my presentation style since 44CON, as I grow more confident in what it is I am saying and less reliant on the slides and bullet points. This results in a more engaging slide deck and a more fluid style of presentation. Informal feedback has been good so far, although I am looking forward to getting more formal feedback if it becomes available.
The after conference dinner and drinks were also excellent and resulted in me almost missing my last train home.
This week sees me at RSA USA for the first time, which is hugely exciting; it is a huge conference (one of the organisers told me up to 24,000 people) and even on the soft start day it is incredibly busy. I will no doubt write up my experiences here in the next few days.
Gavin Holt, who I was fortunate enough to mentor for last year’s BSides London Rookie track, invited me to submit a talk for Securi-Tay3, the third annual security conference hosted by the University of Abertay and run by the Abertay Hackers Society. He is the Vice President of that society and responsible for drumming up trade for the conference. Securi-Tay has a reputation for being Scotland’s biggest security conference, and this year attracted something like 170 people, putting it well on a par with many ‘professional’ conferences.
I duly did as I was told and submitted into the CFP.
The day was great; the conference was well managed and run, and there were always plenty of volunteers in distinctive blue (and not black, for once!) T-shirts who were friendly and willing to help. Vitally, there was always a cup of tea available in the reception area throughout the day, something so many conferences miss when you are working the hallway track rather than the advertised tracks. This is one Englishman who has traditional standards…
As expected there was a very strong technical slant to the presentations (many of them given by people called Rory it seems as well) and some of them were beyond me. In fact I tweeted the following day saying that the one downside to the conference was that I often felt like the dumbest person in the room.
I was able to present on “Throwing Shapes for Better Security Risk Management”, a wholly revamped version of a talk I did at the IT Security Forum late last year. When I first gave it I had some great feedback from Jitender Arora which I tried to address, as well as the formal feedback from the session (basically “good content but not what was promised”). Securi-Tay kindly recorded the talk which I will post shortly, although with the microphone cutting out there is only so much you can hear. Feedback afterwards was very positive, and I had some great conversations with people not just about risk management but presentation style generally.
Two other presentations also stood out for me: Ritesh Sinha and Paco Hope’s “The Colour of Your Box: The Art and Science of Security Testing” and Rory McCune’s “Crossing the Mountains of Madness – How to Avoid Being a Security Cultist”. These will also be available on the Securi-Tay YouTube channel shortly.
This was a great conference, attended by people who truly wanted to learn and engage rather than just get out of the office for the day, and who are actively pursuing a career in the infosec industry. What did surprise me, though, was the number of people from the day who wanted to get more involved with risk management as a career option rather than the more technically focussed ethical hacking option, which at first glance would appear to be the de facto choice. The honesty and passion of all of the students there was very refreshing, and I thoroughly enjoyed chatting to everyone at the after party, all the way through to the inevitable kebab on the way back to the hotel.
I decided to write a review of a paper submitted to wired.com on the subject of “An Approach to Risk Decision Making” by Curt Dalton. I must, however, declare an interest: I happen to report to Curt in my day job (he is global CISO), and he was kind enough to share drafts with me for feedback as he wrote it. This will therefore be a somewhat biased review, although not too much, and I do hope that, if nothing else, it generates conversation around topics and approaches like this. I have a huge respect for Curt, have learnt much from him over the last few years, and hope to get a good score in the next performance review!
In essence, this model is designed to help an organisation decide if it is financially viable to invest in security technology/controls/procedures in order to address a given risk. It is not designed to be used across an organisation’s entire risk management programme, but rather with those handful of risks that can’t be addressed in day-to-day operations and have to be escalated to senior management to be effectively resolved. With limited budget and limited access to that senior leadership, this approach provides support and guidance on what to ‘fix’ and what not to fix.
This scope is a key element of the model; it uses very traditional approaches to monetizing risk versus the more in vogue approach I have reviewed elsewhere in this blog. To that end it uses assigned numerical values to elements of its calculations; this is of course where ‘errors’ may creep in, but in theory an experienced risk manager familiar with their environment should be able to assess this reasonably well.
In summary, the model is as follows:
Figure 2 in the model requires an analysis of controls required to address a risk.
This does of course beg the question: how do you know you have all of the controls required, and how do you know you have selected the correct numerical value? Again, the pragmatist in me suggests this is entirely possible for someone who is familiar with the environment and the organisation, but it may of course be more difficult in other situations.
Figure 3 does a similar thing with a similar level of granularity, i.e. defining in nine increments the ease of exploitation of a given risk. Where I think there is potentially something missing is that this single value applies to ALL of the risks listed in figure 2 rather than to each individually. Obviously, assessing each individually would massively increase the complexity of the solution, so the single value is a deliberate trade-off to ensure simplicity across the model.
These two numbers are then combined with a simple calculation of impact to establish a level of monetized risk. Finally, the 80/20 rule (or Pareto Principle) is used as a rule of thumb to define the actual budget that should be spent to mitigate a risk. In the example given, therefore, a monetized risk of roughly $1.5m should be mitigated by spending up to $380k and no more. The Pareto Principle can of course be adjusted according to your organisation’s risk appetite; that is, the more risk averse the organisation, the more the rule would move from 80/20 to 70/30 or 60/40 and so on.
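To make the budget step concrete, here is a minimal sketch of the arithmetic as I understand it; the function name, the default ratio and the example figures are my own assumptions for illustration, not taken from the paper itself.

```python
def mitigation_budget(monetized_risk: float, pareto_ratio: float = 0.20) -> float:
    """Cap mitigation spend at a fraction of the monetized risk.

    The default 0.20 reflects the 80/20 rule of thumb; a more
    risk-averse organisation might raise the ratio towards 0.30
    or 0.40, as described above.
    """
    return monetized_risk * pareto_ratio

# A hypothetical $1m monetized risk under the default 80/20 split
print(f"${mitigation_budget(1_000_000):,.0f}")        # $200,000
# The same risk for a more risk-averse organisation (70/30)
print(f"${mitigation_budget(1_000_000, 0.30):,.0f}")  # $300,000
```

The point of writing it down like this is that the whole decision reduces to one multiplication once the monetized risk is agreed, which is exactly why the model transfers so easily into a spreadsheet.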
There are a lot of assumptions in this model, not least the numerical values that may seem to be arbitrarily assigned. However, I believe this can be forgiven for the very simple reason that it is a pragmatic, transparent and easily understood approach; it can be easily transferred into an Excel spreadsheet, meaning that some simple modelling can be carried out. I have said before that until the newer approach to risk management has a similarly easy to understand and implementable form, it will not be adopted. This model already has one.
The other part of this model that I like is that it is not designed to be a cure-all, but rather a tool to help organisations decide where to spend money. If the approach is understood, then an informed decision can be made within the constraints of that model (or indeed any other). I believe it is influenced by the ISO 27005 approach to risk management, which means many risk management folks will be able to grasp and adopt it more easily.
Overall, this is a model that can be adopted quickly and easily by many organisations, and implemented successfully, as long as its basis in assigned numerical values is understood, and the calculations are carried out by those in a position to understand their risk profile well. I would strongly recommend you take a look at the model yourself over at Wired Innovation Insights.
Pros – easily understood, pragmatic, focussed on one business issue, easily implemented.
Cons – relies on assigning ‘arbitrary’ numerical values, doesn’t address the granularity of risk and ease of exploitation.