You, Me, and Dystopia

We all remember the Ocean’s 11-style antics that criminals can emulate to gain access to IoT devices and, subsequently, the enterprise network on which they are hosted. It may have been an isolated incident, but it underscores that ANY vulnerability can be exploited.

The question of “why should we be bothered now?” begs to be answered, given that these risks have been around for a long time. But, interestingly, the 2020 COVID lockdown (and subsequent ones) and the impact they had on the supply chain may help us answer this question with surprising clarity.

Do you remember how difficult it was to get hold of toilet paper, pasta and hand gel in March of 2020? Panic buying meant that the supply chain struggled to meet demand; combined with the “just in time” supply models employed by most manufacturers and retailers, stocks were diminished quickly with no replenishment in sight. So far, so what, right?

According to the UK’s Office for National Statistics, there are well over 8,000 small to medium-sized food suppliers in the UK (a number probably swelled by the gig economy as well). How many companies of this size do you know of that have a robust cybersecurity programme in place?

This puts them at a significant disadvantage when it comes to recognising a cyber-attack and defending against it. Given the fish tank scenario from my last blog, it is no stretch of the imagination to see circumstances whereby chilled and perishable goods are sabotaged and destroyed, either in situ or in transit. Remote monitoring is rapidly becoming the norm and will reduce costs and effort, something any small business would jump at. So protecting these environments, the sensors, and the control devices from the get-go becomes critical.

The incentives to disrupt and destroy supply chains are sometimes obvious. Terrorism, both domestic and international, will always seek out a nation’s weakest point. But there are other threats to consider as well.

The (fairly) recent global lockdowns and various actions carried out by governments worldwide have changed the business and planetary ecosystem, and not always for the better. Without commenting on the politics of the situations themselves, activism has been on the rise globally, with people taking to the streets to defend their particular viewpoints and air their grievances.

The hacker group, Anonymous, are the epitome of so-called “hacktivism”, using their collective skills to disrupt and expose governments and corporations. Their particular flavour of activism involves attacking their targets and exploiting their weaknesses for political and social leverage. So again, it doesn’t take a leap of the imagination to see these current troubling times being a catalyst for more hacktivism, attacking vulnerable supply chains through their reliance on IoT technology.

The positive impact of technology always needs to be balanced against the sociological and cultural impact it may have, as well as the environment in which it operates. With the commoditisation of security testing capabilities and offensive technological tools, the ability to attack and exploit weaknesses in the supply chain becomes open to the general populace. If that populace suffers greater division of wealth and disenfranchisement, the risk of the supply chain being attacked is greater.

Ocean’s 11 suddenly becomes The Hunger Games; an insecure supply chain, vulnerable to attack, can have severe consequences for what we consider to be our ‘normal’ lives. So taking precautions now to protect our society’s lifelines is imperative.

Links to other interesting stuff on the web (affiliate links)

Introducing Cyber Advisor

BSidesAustin 2023: CyberSecurity In The Texas Tech Capital

Understanding ‘Lone Wolf’ Attacks: Dissecting and Modeling 2022’s Most Powerful Cyber Attacks


CISO Basics, Part 2

In the last post, I looked at some of the less apparent activities upon becoming a new CISO, namely:

  1. Stop thinking that infosec is your business.
  2. Stop making technology purchases.
  3. Ask your vendors to explain what you have in your services inventory.

In this post, we will take this a step further, moving closer to actual business as usual and to maintaining your security team as a functional part of the organisation.

Don’t say “NO!” to everything.

This is an obvious thing to do, but it is much harder in practice. The reality is that it requires a complete change in mindset from the traditional view of the everyday CISO. As a species, the CISO is a defensive creature who is often required to back up every decision, be the scapegoat for every mistake (see One CISO, Three Envelopes https://thomlangford.com/2014/12/01/three-envelopes-one-ciso/) and generally rubber-stamp choices that are out of their bailiwick and control.

The mindset shift requires a leap of faith wholly because of this perceived threat of blame and accountability when, in fact, it does just the reverse. 

It starts naturally enough with the language used by the CISO and the team: for instance, changing the Change Approval meeting to the Risk Review meeting, and communicating not a yes/no or go/no-go response to changes but rather the level of risk associated with the request, along with alternative approaches as appropriate. There is a need to communicate this shift in culture, of course, but people will come to see that they, not the security team, are accountable for decisions that affect the business. Shifting the mindset away from being a gatekeeper to a security team that provides sensible and straightforward advice based upon clearly understood risk criteria is a fundamental step towards avoiding being known as the Business Prevention Unit. Politely correct others’ language when they mention an action that requires sign-off or approval from “Security”, and help them understand their role in the business decision.
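
If it helps to make the shift concrete, the difference is essentially between an approval function that returns yes/no and one that returns a risk position the business can act on. A minimal sketch, entirely illustrative (the field names and the example change request are my own invention, not a prescribed format):

```python
# Entirely illustrative: the shift from a go/no-go answer to a risk position.
from dataclasses import dataclass, field

@dataclass
class RiskPosition:
    level: str                       # e.g. "low", "medium", "high"
    rationale: str                   # why the risk sits at that level
    alternatives: list[str] = field(default_factory=list)  # safer options offered
    decision_owner: str = "requesting business unit"       # not the security team

def review_change(description: str) -> RiskPosition:
    """The security team describes the risk; the business owns the decision."""
    return RiskPosition(
        level="medium",
        rationale=f"'{description}' exposes an internal service to a third party",
        alternatives=["broker the access via the existing API gateway"],
    )

print(review_change("Open firewall port 8443 to vendor X"))
```

The point of the sketch is simply that nothing in it returns a yes or a no; the output is a level of risk, a rationale and some alternatives, and the decision stays with the requester.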

This approach is not a snap of the fingers that makes 50% of your problems go away. Still, careful planning and educating your stakeholders dramatically alters the impact you can have on the business for the better. It also allows you to more easily draw a line between the activities of the security team and the company’s performance, all for the price of merely no longer saying “no”.

Stop Testing Your Perimeter

What? Are you serious?! 

Absolutely.

As you enter a new environment, you will be taking many critical pieces of information on trust, from people with vested interests in their careers, livelihoods and reputations. Your arrival upsets the status quo and has the potential to disrupt the equilibrium; all reasons why people may not always be forthcoming with every piece of information you request. It isn’t that people are being dishonest or deliberately misleading you; they are merely complex, multi-faceted human beings with multiple drivers and influences.

Your perimeter is one of the fundamental pieces of your information security puzzle. Despite cries of “the perimeter is dead”, it remains a prominent place for attacks to happen, and one where you should be confident that you know every node in that environment to the best of your ability.

Whatever your testing cycle is, suspend it for a time and conduct as complete an investigation as possible into precisely what your perimeter comprises. It can be done automatically with discovery tools, manually through interviews with those responsible, visually in data centres (where you still have old-school “tin” in use), or any combination of the above. You will likely find devices that you, and probably existing team members, weren’t aware of, especially with the proliferation of Internet of Things devices throughout the enterprise. Did facilities install a new access control system or room booking system? Did they consult IT, or more to the point, you?
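
If you want a quick first pass before the interviews and the data-centre walkabouts, even a short script can surface surprises. A minimal sketch, assuming you already hold a list of externally facing hostnames (the hosts and ports below are purely illustrative placeholders):

```python
# Minimal sketch of a first-pass perimeter discovery using only the standard
# library. A real exercise would also draw on DNS records, cloud provider
# inventories and interviews with system owners.
import socket
from concurrent.futures import ThreadPoolExecutor

KNOWN_EXTERNAL_HOSTS = ["www.example.com", "vpn.example.com"]  # hypothetical
COMMON_PORTS = [22, 80, 443, 3389, 8080]

def probe(host: str, port: int, timeout: float = 2.0) -> tuple[str, int, bool]:
    """Attempt a TCP connection and report whether the port answered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return host, port, True
    except OSError:
        return host, port, False

def discover() -> None:
    jobs = [(h, p) for h in KNOWN_EXTERNAL_HOSTS for p in COMMON_PORTS]
    with ThreadPoolExecutor(max_workers=20) as pool:
        for host, port, is_open in pool.map(lambda j: probe(*j), jobs):
            if is_open:
                print(f"{host}:{port} answers -- is it in your inventory?")

if __name__ == "__main__":
    discover()
```

Anything that answers and is not already in your inventory is a conversation waiting to happen.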

It sounds like the stuff of legend, or the script to one of the Ocean’s 11 movies, but do you remember when a Las Vegas casino was broken into… through their fish tank? Knowing what devices are where on your network and perimeter is vital and must be considered table stakes in any decent security programme. The alternative is simply security theatre: giving the impression of security while doing nothing but create a false sense of it. A skipped testing cycle is a price worth paying to discover what you don’t know, because then you can do something about it.

Building your plan

You now have a grip on your environment, gained in a relatively straightforward, simple, effective and quick way. Through this process, you will have identified your stakeholders, advocates and even a few potential adversaries. Armed with this information, you can provide an accurate picture of the business to the business in a way that makes sense and displays a grasp of the fundamentals.

Building your plan will always start with your initial assessment and what needs to be done to become operational or steady-state. The trick, however, is to ensure that this baseline achievement is not perceived as the end state of security, but rather as merely the first stepping stone to ever more impressive services, capabilities and, ultimately, profit and growth for the company.

The plan itself, however? That is yours and yours alone. Although other posts in this blog will help as you plot your course into the future, nothing will replace your understanding of the local culture, the organisation and, ultimately, what you need to achieve to meet the expectations of the business leadership. Know what the rules of your organisation are, when to adhere to them, when to bend them and, most importantly, when to break them (but only when experience tells you it is the right thing to do):

“The young man knows the rules, but the old man knows the exceptions.” 

Oliver Wendell Holmes

Be the Old Man, be the CISO.

Links to other interesting stuff on the web (affiliate links)

5 Ways Penetration Testing Reduces Overall Security Costs

Avoiding Security Theater: When is a “Critical” Really a Critical?

Game of Life Security and Compliance Edition


Busy Doing Nothing?

When you are faced with managing third-party risks, it can feel like a Sisyphean task at best. Even a small organisation is going to have 20+ third parties and vendors to deal with and, by the nature of a small business, absolutely no full-time person to manage them. At the other end of the extreme, as an organisation grows there will be many thousands of vendors and third parties in different countries and jurisdictions; even a large team is going to struggle to deal with that volume of work.

In The Lost CISO this week, I talk about how to manage a third-party risk management programme from the perspective of its sheer volume of work.

The key to dealing with this volume is, of course, to take a risk-based approach and consciously decide to do nothing about a large proportion of them. It sounds counter-intuitive, but then a risk-based approach to anything can seem counter-intuitive. (Why would you “accept” a high-level risk, for goodness’ sake?!) In this case, you would quite literally be putting some effort into deciding what not to do:

We’re busy doing nothing.

Working the whole day through.

Trying to find lots of things not to do.

Busy Doing Nothing, written by Jimmy Van Heusen & Johnny Burke

This means your best approach is to filter who you absolutely must assess, who you should assess, and who can reasonably be ignored. In theory, the last group will be the majority of your third parties. How you filter is of course down to what is important to your organisation, industry and clients, the data you hold, the physical location of your environment (office or hosted) and any other criteria you consider relevant. Ultimately, it is what is important to your organisation, not what is important to you as a security person. Why? Because if security has the final say, there is the potential for a conflict of interest and for limiting the organisation’s ability to operate effectively and efficiently. Here is a sample list of criteria you can sort your third parties by:

  1. Do they have access to our clients’ (or our clients’ customers’) confidential/sensitive data?
  2. Do they have access to our confidential/sensitive data?
  3. Do they have data access to our IT infrastructure?
  4. Do they have physical access to our premises?
  5. Is our organisation reliant on their services being available at all times?

Inside each of these selected criteria, you may wish to refine further; in answer to the question, think “yes, but…” and you may find a particular vendor does not make your list as a result.
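
To make the filter tangible, here is a minimal sketch of the sorting step, with the five criteria above encoded as simple yes/no flags; the vendors and their answers are invented purely for illustration:

```python
# Minimal sketch of the filtering step. Vendor names and answers are invented.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    client_data: bool            # access to clients' (or their customers') data
    our_data: bool               # access to our confidential/sensitive data
    infra_access: bool           # data access to our IT infrastructure
    physical_access: bool        # physical access to our premises
    availability_critical: bool  # we rely on them being available at all times

def needs_assessment(v: Vendor) -> bool:
    """A vendor makes the assessment list if any one criterion applies."""
    return any([v.client_data, v.our_data, v.infra_access,
                v.physical_access, v.availability_critical])

vendors = [
    Vendor("Payroll SaaS", True, True, False, False, True),
    Vendor("Office cleaning firm", False, False, False, True, False),
    Vendor("Stationery supplier", False, False, False, False, False),
]

print([v.name for v in vendors if needs_assessment(v)])
# ['Payroll SaaS', 'Office cleaning firm']
```

The “yes, but…” refinement then becomes a matter of tightening the individual flags rather than rewriting the whole process.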

Congratulations! You have now hopefully reduced the number of third parties needing to be assessed by about 80%. If that is not the case, go back to the beginning and validate your criteria, perhaps with business leadership themselves, or (ironically) a trusted third party.

This may well still leave a formidable list to get through, so there are some more tricks you can use.

When assessing some of the larger third parties (think Apple, Google, Microsoft etc.), you may wish to accept their certifications at face value. The chances of getting a face-to-face meeting and a tour of the facility, whilst not impossible, are remote, and very much dependent upon how much you spend with them. The more reputable vendors will be transparent with their certifications, findings and general security programmes anyway.

You can then use this filter again with the slightly less well-known vendors but include a handful of questions (no more than fifteen) that you would like answered outside of certifications.

The smallest vendors, with the least formal certification and the least publicly available information, can be presented with a more detailed set of “traditional” third-party risk questions. Make sure they are relevant, and certainly no more than 100 in total. You are better off getting a good idea of most of the vendor environments from a returned questionnaire than a perfect idea of a handful of environments from a barely returned questionnaire. The idea here is to get a consistent, medium-level view across the board in order to spot trends and allocate your resources effectively.
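
Put together, the tiering might look something like the sketch below; the notion of a “major provider” and the question counts are my assumptions for illustration rather than fixed rules:

```python
# Minimal sketch of the tiered assessment depth described above.
def assessment_depth(is_major_provider: bool, has_recognised_certs: bool) -> str:
    if is_major_provider and has_recognised_certs:
        return "accept published certifications (e.g. ISO 27001, SOC 2) at face value"
    if has_recognised_certs:
        return "certifications plus a short questionnaire (no more than 15 questions)"
    return "detailed 'traditional' questionnaire (relevant questions only, under 100)"

print(assessment_depth(True, True))    # the Apples and Googles
print(assessment_depth(False, True))   # the slightly less well-known vendors
print(assessment_depth(False, False))  # the smallest vendors
```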

Still overwhelmed with sheer volume? If this is the case, look to a three-year cycle rather than an annual cycle. You can reduce the workload by up to two-thirds this way, but you may wish to consider that some vendors are simply too crucial to have on this kind of cycle.

So all that is left is to ensure all of this is carefully monitored, tracked and managed. For instance, what are you going to do with a vendor that doesn’t meet your standards?

And that, my friends, is for another blog.

(You can download a sample third-party security questionnaire from the (TL)2 security Downloads area. There will be more templates arriving soon that you can download and use for yourself, or you may wish to contact (TL)2 if you would like some help and support in creating a third-party risk programme.)

 

 


Keeping It Supremely Simple, the NASA way

Any regular reader (hello to both of you) will know that I also follow an ex-NASA engineer/manager by the name of Wayne Hale. Having been in NASA for much of his adult life and having been involved across the board, he brings a fascinating view of the complexities of space travel and, just as interestingly, of risk.

His recent post is about damage to the Space Shuttle’s foam insulation on the external fuel tank (the big orange thing), and the steps NASA went through to return the shuttle to active service after it was found that loose foam was what had damaged the heat shield of Columbia, resulting in its destruction. His insight into the machinations of NASA, the undue influence of Politics as well as politics, and the fact that ultimately everything comes down to a risk-based approach make his writing compelling and, above all, educational. This is writ large in the hugely complex world of space travel, something I would hazard a guess virtually all of us are not involved in!

It was when I read the following paragraph that my jaw dropped a little, as I realised that even in NASA many decisions are based on a very simple presentation of risk, something I am a vehement supporter of:

NASA uses a matrix to plot the risks involved in any activity.  Five squares by five squares; rating risk probability from low to high and consequence from negligible to catastrophic.  The risk of foam coming off part of the External Tank and causing another catastrophe was in the top right-hand box:  5×5:  Probable and Catastrophic.  That square is colored red for a reason.

What? The hugely complex world of NASA is governed by a five by five matrix like this?
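
To underline quite how little machinery is involved, here is a minimal sketch of that kind of 5×5 matrix; the colour band boundaries are my own illustrative assumption, not NASA’s:

```python
# A minimal sketch of a 5x5 probability/consequence matrix of the kind
# described above; the colour band boundaries are an assumption, not NASA's.
PROBABILITY = ["very low", "low", "medium", "high", "probable"]             # scored 1-5
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]  # scored 1-5

def rating(probability: int, consequence: int) -> str:
    """Map 1-5 probability and 1-5 consequence scores to a colour band."""
    score = probability * consequence
    if score >= 15:
        return "red"
    if score >= 6:
        return "amber"
    return "green"

# The foam-loss risk discussed above: probable (5) and catastrophic (5).
print(f"{PROBABILITY[4]} x {CONSEQUENCE[4]} -> {rating(5, 5)}")  # probable x catastrophic -> red
```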

Isn’t this a hugely simplistic approach that just sweeps over the complexities and nuances of an immensely complex environment where lives are at stake and careers and reputations are constantly on the line? Then the following sentence made absolute sense, and underscored the reason why risk is so often poorly understood and managed:

But the analysts did more than just present the results; they discussed the methodology used in the analysis.

It seems simple and obvious, but the infosec industry very regularly talks about how simple models like a traffic-light approach to risk just don’t reflect the environment we operate in, and how we have to look at things in a far more complex way to ensure the nuance and complexity of our world is better understood. “Look at the actuarial sciences,” they will say. I can say now that I don’t subscribe to this.

The key difference with NASA, though, is that the decision makers understand how the scores are derived and then discuss that methodology, so the interpretation of that traffic-light colour is far better understood. In his blog, Wayne talks of how the risk was talked down based upon the shared knowledge in the room and a careful consideration of the environment in which the risks were presented. In fact, the risk as initially presented was de-escalated and a decision to go ahead was made.

Imagine if that process hadn’t happened; decisions may have been made based on poor assumptions and poor understanding of the facts, the outcome of which had the potential to be catastrophic.

The key point I am making is that a simple approach to complex problems can be taken, and that, ironically, it can be harder to make it happen. Everyone around the table needs to understand how the measures are derived, be educated on the implications, and be in a position to discuss the results in a collaborative way. Presenting an over-complex, hard-to-read but “accurate” picture of risks will waste everyone’s time.

And if they don’t have time now, how will they be able to read Wayne’s blog?

 

 


Ground Control to Major Thom

I recently finished a book called “Into the Black” by Roland White, charting the birth of the Space Shuttle from the beginnings of the space race through to its untimely retirement. It is a fascinating account of why “space is hard” and exemplifies the need for compromise and the balancing of risks in even the harshest of environments.

Having seen two shuttles first hand in the last nine months (the Enterprise on the USS Intrepid in New York and Atlantis at the Kennedy Space Centre), it boggles my mind that something so big could get into space and back again, to be reused. Facts like this go to show the kind of engineering challenges that needed to be overcome: the exhaust from each of the three main engines burns hotter than the melting temperature of the metal the engine ‘bells’ are made of (supercooled fuel is ingeniously piped down the outside of the bells, not only acting as an afterburner of sorts but also cooling the bells themselves).

There was one incident, however, that really struck me regarding the relationship between the crew onboard and the crew on the ground. On the Shuttle’s maiden flight into space, STS-1, Columbia carried out 37 orbits of the Earth with two crew on board: mission commander John W. Young and pilot Robert L. Crippen. Once orbit was achieved, an inspection of the critical heat tiles on the underside of the shuttle showed some potential damage. If the damage was too extensive, the return to Earth would (as later events in the Shuttle’s history proved) be fatal.

The crew, however, were tasked with a variety of other activities, including fixing the problems onboard that they could address. They left the task of assessing and calculating the damage to those on the ground, who were better equipped and experienced to deal with the situation. This they duly did and, as we know, Columbia landed safely just over two days later.

It struck me that this reflects well the way information security professionals should treat the individuals we are tasked with supporting. There is much that individuals can do to help, of course, and that is why training and awareness efforts are so important, but too often the attitude is that “we would be secure if it wasn’t for the dumb users”. The sole purpose of the Columbia ground crew was to support and ensure the safe return of those on board STS-1 so that they could get on with their jobs in space. Ours is the same.

Just because the crew had extensive training to deal with issues as they arose, it did not mean that was the best use of their time; better to focus on the job in hand and let the ground crew worry about other problems. The people we support should also be trained to deal with security issues, but sometimes they really need to just get on with the deliverables at hand and let us deal with the security issue. They might be trained and capable, but we need to identify when the best course of action is to deal with their security issues for them, freeing them to do their work.

Never forget that we support our organisations/businesses to do their jobs. We provide tools to allow them to be more effective in their end goals but it is still our responsibility to do the heavy lifting when the time comes. Except in very rare cases we are there because of them, not in spite of them.

(Photo courtesy of William Lau @lausecurity)