Command, Control, and Conquer

Back in the ’90s, a game called Command & Conquer was released: a real-time strategy game in which you had to manage resources; build, train and mobilise armies; and conquer your neighbours. It was a classic that spawned spin-offs, sequels and add-ons for decades. What struck me about it, though, was how multi-skilled you had to be, especially in the later levels.

You couldn’t just be an excellent Field Marshal, as you also had to manage resources, cash and other materials to create the buildings and structures that allowed you to raise your army in the first place. You had to know logistics and how long something would take to build, train and mobilise, look to the future at new locations for better access to materials, and have plans in place in case the enemy attacked before you were ready.

Essentially, you were skipping from one crisis to the next, finely balanced between success and crashing failure. It sounds a lot like any modern-day incident management situation, really.

In this week’s The Lost CISO (season 2), I take a quick look at incident management and highlight four key points to remember during an incident. In case you haven’t seen it yet, here it is:

The bottom line is that, much like in the Command & Conquer game, you couldn’t plan far ahead, because the environment was constantly changing, the unknowns stubbornly remained unknowns, and the literal (in the case of the game) fog of war meant you couldn’t see more than a few steps ahead. There are, though, some keys to success.

The first key point is that having a plan is all well and good, but as my military friends regularly tell me:

no plan survives contact with the enemy

Why? Because the enemy, much like life, does random, unexpected and painful things on a regular basis. Incidents have a habit of doing the same, so if your plan is rigid, overly explicit and has little room to ad-lib or manoeuvre, it will fail.

Therefore, my approach has always been to build any kind of plan around four simple areas:

  • Command
  • Control
  • Communication
  • Collaboration

In other words: decide who is in charge, decide who is responsible for which areas, ensure everyone knows how to talk to each other, and ensure everyone works openly and honestly with everyone else. There may be some other details in there as well, but really, if you have these four areas covered, your plans will remain flexible and effective, and you may find yourself able to close incidents more quickly and efficiently.

With all that extra time on your hands, you can then spend some time basking under the Tiberian sun.


Shameless Coronavirus Special Promotion – Risk Edition!

Many, many moons ago, my good friend and learned colleague Javvad Malik and I came up with a way to explain how a risk model works by using an analogy to a pub fight. I have used it in a presentation I have given several times, and the analogy has really helped people understand risk, and especially risk appetite, more clearly (or so they tell me). I wrote a brief overview of the presentation and the included risk model on this blog some years back.

And now the Coronavirus has hit humanity AND the information security industry. Everyone is losing their minds deciding whether they should self-isolate, quarantine, or just generally ignore advice from the World Health Organisation (as some governments have shown a propensity to do), carry on as usual and listen to the Twitter experts. During a conversation of this nature, Javvad and I realised that the Langford/Malik model could be repurposed to help not only those who struggle with risk generally (most humans) but also those in our own industry who really struggle to know what to do about it (most humans, again).

Disclaimer: we adopted the ISO/IEC 27005:2018 approach to measuring risk, as it is comprehensive enough to cover most scenarios yet simple enough that even the most stubborn of Board members could understand it. If you happen to have a copy, you can find it in section E.2.2, page 48, Table E.1.

[Image: the Langford/Malik pandemic risk model. Click the image to view in more detail and download.]

The approach is that an arbitrary yet predefined (and globally understood) value is given to the Likelihood of Occurrence – Threat, the Ease of Exploitation, and the Asset Value of the thing being “risk measured”. This generates a number from 0 to 8, running from low risk to high risk. The scores can then be banded to define whether they are High, Medium or Low, and treated in accordance with your organisation’s risk appetite and risk assessment procedures.
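To make the mechanics concrete, here is a minimal sketch in Python, assuming the additive structure of Table E.1 (asset value 0–4, plus 0–2 each for threat likelihood and ease of exploitation); the Low/Medium/High band thresholds shown are illustrative rather than mandated:

```python
# Minimal sketch of ISO/IEC 27005 Table E.1-style scoring, assuming its
# additive structure: asset value (0-4) plus 0-2 each for threat
# likelihood and ease of exploitation, giving a score from 0 to 8.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def risk_score(asset_value: int, threat_likelihood: str, ease_of_exploitation: str) -> int:
    """Combine the three predefined values into a 0-8 risk score."""
    if not 0 <= asset_value <= 4:
        raise ValueError("asset value must be between 0 and 4")
    return asset_value + LEVELS[threat_likelihood] + LEVELS[ease_of_exploitation]

def risk_band(score: int) -> str:
    """Band the score; the 0-2 / 3-5 / 6-8 thresholds are illustrative."""
    if score <= 2:
        return "Low"
    if score <= 5:
        return "Medium"
    return "High"

score = risk_score(3, "high", "medium")   # e.g. a valuable, fairly exposed asset
print(score, risk_band(score))            # -> 6 High
```

The banded result is then what you take to your risk appetite and treatment procedures, rather than the raw number itself.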

In our model, all one has to do is define the importance of their role, from “Advocate” (low) to “Sysadmin” (high), their personality type (how outgoing you are) and the level of human interaction the role requires. Once ascertained, you can read off your score and see where you sit in the risk model.
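For the repurposed model, a sketch along the same lines, assuming it reuses the additive mechanics above; only “Advocate” and “Sysadmin” are named in the text, so the intermediate role and the personality labels below are hypothetical stand-ins, and the real cell values live in the image:

```python
# Sketch of the repurposed Langford/Malik model, assuming the same
# additive mechanics as the Table E.1 scoring above. "analyst" and the
# personality labels are hypothetical stand-ins for what the image shows.

ROLE_IMPORTANCE = {"advocate": 0, "analyst": 2, "sysadmin": 4}   # 0-4 scale
PERSONALITY = {"introvert": 0, "ambivert": 1, "extrovert": 2}    # how outgoing you are
INTERACTION = {"low": 0, "medium": 1, "high": 2}                 # human interaction required

def pandemic_risk(role: str, personality: str, interaction: str) -> int:
    """Read off a 0-8 score showing where you sit in the model."""
    return ROLE_IMPORTANCE[role] + PERSONALITY[personality] + INTERACTION[interaction]

print(pandemic_risk("sysadmin", "extrovert", "high"))  # -> 8, top of the model
```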

In order to make things easier for you, dear reader, we then created predefined actions in the key below the model based upon that derived risk score, so you know exactly what to do. In these troubled times, you can rest easy in the knowledge that you not only understand risk better but also know what to do in a pandemic.

You’re welcome.

Note: Not actual medical advice. Do I really need to state this?


Consistency, consiztency, consistancy…

It will come as no surprise to most of you that I travel a lot to other countries, and as such I am a frequent visitor to airports and, more memorably, the security procedures of those airports.

Every country has its own agency that manages this process, either outsourced or kept within government. Given the complexities of international and aviation law, I can well imagine the difficulties of staying abreast of the latest advice from a variety of different sources and applying it in a globally consistent way. But surely it can’t be that difficult, especially when it comes to the basics?

Here are just some of the more egregious examples of inconsistency that I have encountered around the world:

  • One airport that confiscated my nail scissors, despite the fact I had been carrying them (and had the case searched) through numerous security checkpoints before. The blade size was within accepted norms, except at this airport.
  • The security official that made me take my 100ml or less liquids out of the clear plastic case/bag I was using and put them into a clear plastic ziplock bag for scanning. I had been using that case for months, and continue to use it without issue to this day.
  • The security line where I didn’t have to take off my shoes or belt, nor remove laptops or liquids from my bag, because “we have a sniffer dog”. In fairness they did have a dog running up and down the line, but I started to doubt its ability to smell knives or similar in my case.
  • Having travelled through five airports in four days, the final airport insisted that I take the camera out of my bag, as it is “standard practice in our country to do this”. Neither before nor since has it been a practice I have experienced, let alone a standard one.
  • Finally, the multiple security personnel who tell me to leave my shoes on, only for me to be told as I go through the scanner to take my shoes off and put them on the belt to be X-rayed.

It goes without saying that I approach every security checkpoint with a mixture of hope, despair and disdain, and always leave with one of those feelings prevalent. Obviously this is an analogy for our world of infosec, perhaps even a tenuous one, but I do feel it is one worth expressing.

How we guide our organisations to interpret and carry out the policies and regulatory requirements they are beholden to is vital to the attitude and approach employees will take. Uncertainty breeds many things; in this case, doubt and anxiety about how to behave. If a policy is not implemented consistently, then how can it be observed consistently? If we are constantly surprising our users, we can’t blame them for feeling jumpy, anxious or unsure, and therefore critical of the service being provided.


Consistency is a very powerful tool to ensure people understand the policies, the purpose and even the vision of a security organisation. As soon as there is inconsistency, the very purpose of your security organisation is thrown into doubt. For example, why is BYOD allowed for senior execs but not for the rest of the organisation? Or why is a Mobile Device Management solution enforced on some parts of the business and not others? In both cases it only encourages people to work around the restrictions, which subsequently weakens your security posture.

That is not to say exceptions cannot be made; that is why every policy should have an exceptions statement. After all, expecting a policy to cover all eventualities is simply wishful thinking.

I dare say we all have inconsistencies, but it is in all of our interests to drive them out of our organisations wherever possible. Otherwise, you will have people like me wondering what kind of ordeal they are going to have to endure just to get their day job done, and that doesn’t help anyone.



Ground Control to Major Thom

I recently finished a book called “Into the Black” by Rowland White, charting the birth of the Space Shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and the balancing of risks in even the harshest of environments.

Having seen two Shuttles first-hand in the last nine months (Enterprise on the USS Intrepid in New York and Atlantis at Kennedy Space Center), it boggles my mind that something so big could get into space and back again, to be reused. Facts such as the exhaust of each of the three main engines burning hotter than the melting point of the metal the engine ‘bells’ are made of (supercooled fuel is ingeniously piped down the outside of the bells, not only acting as an afterburner of sorts but also cooling the bells themselves) go to show the kind of engineering challenges that needed to be overcome.

There was one incident, however, that really struck me regarding the relationship between the crew on board and the crew on the ground. On the Shuttle’s maiden flight into space, STS-1, the orbiter Columbia carried out 37 orbits of the Earth with two crew on board: mission commander John W. Young and pilot Robert L. Crippen. Once orbit was achieved, an inspection of the critical heat tiles on the underside of the Shuttle showed some potential damage. If the damage was too extensive, the return to Earth would (as later events in the Shuttle’s history proved) be fatal.

The crew, however, were tasked with a variety of other activities, including fixing the problems on board that they could address. They left the task of assessing and calculating the damage to those on the ground, who were better equipped and more experienced to deal with the situation. This they duly did, and as we know, Columbia landed safely just over two days later.

It struck me that this reflects well how information security professionals should treat the individuals we are tasked with supporting. There is much that individuals can do to help, of course, and that is why training and awareness efforts are so important, but too often it is the case that “we would be secure if it wasn’t for the dumb users”. The sole purpose of the Columbia ground crew was to support and ensure the safe return of those on board STS-1 so that they could get on with their jobs in space. Ours is the same.

Even though the crew had extensive training to deal with issues as they arose, the best use of their time was to focus on the job in hand and let the ground crew worry about other problems. The people we support should also be trained to deal with security issues, but sometimes they really need to just get on with the deliverables at hand and let us deal with the security issue. They might be trained and capable, but we need to identify when the best course of action is to deal with their security issues for them, freeing them to do their work.

Never forget that we support our organisations and businesses to do their jobs. We provide tools to allow them to be more effective in their end goals, but it is still our responsibility to do the heavy lifting when the time comes. Except in very rare cases, we are there because of them, not in spite of them.

(Photo courtesy of William Lau @lausecurity)