As someone whose primary function at work is the ‘management’ of risk in all of its glorious forms, I have over the years become very comfortable with its accepted definition and how to measure it. ISO 27005:2008 was my bible, giving me the flexibility to choose a schema that worked for my particular environment as well as the reassurance that I was doing it right. I always knew that assigning arbitrary numbers to things wasn’t exactly the most scientific way of measuring something, but I could deal with that by simply talking about “indicative values” and how they “help with prioritisation”.
It was a little under two years ago at the RSA conference that I attended a talk entitled “Pimp My Risk Model: Getting Resilient in a Complex World” by David Porter, in which he spoke about a new approach to risk modelling. Rather than focusing on what could happen and playing it through to an impact that is then measured, it instead starts with the desirable outcomes and works backwards, establishing what is required to achieve them: dependency modelling, in essence. Not only is this more efficient and scalable, since not every permutation of threat/vulnerability/asset (for instance) needs to be worked out, it also provides better information for early decision making.
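To make the idea concrete, here is a minimal sketch of how such a dependency model might work in practice. This is purely illustrative and not from Porter’s talk: the function, the outcome names, and the example data are all hypothetical, and a real model would weight dependencies rather than treat them as simple true/false leaves.

```python
# Dependency modelling, sketched: start from a desired outcome and walk
# its dependency tree backwards, flagging the underlying requirements
# that are not currently met. No threat/vulnerability/asset permutations
# are enumerated at all.

def unmet_dependencies(outcome, dependencies, status):
    """Work backwards from `outcome`, returning the leaf
    requirements that are not currently satisfied."""
    children = dependencies.get(outcome, [])
    if not children:  # a leaf requirement: met or not met
        return [] if status.get(outcome, False) else [outcome]
    unmet = []
    for child in children:
        unmet.extend(unmet_dependencies(child, dependencies, status))
    return unmet

# Hypothetical example: a service outcome depends on hosting and data,
# which in turn depend on concrete controls.
dependencies = {
    "online service available": ["resilient hosting", "customer data intact"],
    "resilient hosting": ["failover configured", "capacity monitored"],
    "customer data intact": ["backups tested", "access controls enforced"],
}
status = {
    "failover configured": True,
    "capacity monitored": False,
    "backups tested": True,
    "access controls enforced": True,
}

print(unmet_dependencies("online service available", dependencies, status))
# → ['capacity monitored']
```

The appeal for early decision making is clear even in this toy form: the model answers “what stands between us and the outcome we want?” directly, rather than asking the business to interpret a long list of scored threats.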
The concept is not new; it has its roots in the late twentieth century, among the financial markets and actuaries who were looking for better ways to model and manage risk.
There are a number of proponents of this approach, all of whom understand it far better than I do, but despite this, in the last two years I have simply not seen it in a practical form that can be used every day. Unfortunately, and I am sure I am not alone here, if I can’t implement something quickly it gets passed over for the next best thing that can be. In fact, perhaps in my own blinkered universe, the approach has barely raised a murmur since. And yet the concept had stuck with me, especially on the few occasions when I heard it talked about.
It was on Russell Thomas’s blog, exploringpossibilityspace, that just the other day I saw this very approach being touted again. What I enjoyed about the post was its balanced and educational view of the traditional approach (the little “r” approach in Russell’s parlance) versus the new dependency modelling approach (big “R”). I think the criticism of “r” methods is well founded, although the “r” approach is widely understood in business and, when used properly, can help produce at the very least tactical indicators of risk to the business.
My challenge with the “R” approach is that I have yet to see it applied in practical terms and in a way that is easy to digest and understand (I think I hurt myself about two thirds of the way down the article trying to get to grips with the concepts!). As a result, getting business buy-in is going to be extremely challenging. Partial information from an “r” approach that actually reaches the business is going to be better than no information from an “R” approach (however much better the data is) that never does.
I would strongly recommend that everyone read Russell’s writings on this risk model, which also link to a number of other resources.
There is more work to be done, but I hope it focuses on making the approach usable in a day-to-day environment. They say there is nothing new in the world of information security, but I have high hopes for an approach to risk modelling that will allow me to do so much more for the business in terms of long-term, strategic guidance and support.
And when I can use this model in Excel, count me in!
(Some of you have commented on my extended absence, but a busy few weeks followed by a lovely holiday camping in France took priority. Back in the saddle now and very much looking forward to your comments and feedback!)