In Part 1 we explored the stakeholders and the construction of a risk register. But those risks need to be managed in a realistic way. There's an iterative step here: the cycle of mitigation planning. I won't dwell too much on that element here, as mitigation planning will fall largely to the risk owners and how they see those mitigating actions taking shape. However, there's an element of this process that is vital to the Op Risk Manager, and that is the effect the mitigation actions have on the company's exposures. Let's use some examples to explore the effect of mitigation on the company's perceived risk.
The key elements to consider in mitigation planning are how effective the actions being taken are at reducing risk, and whether or not they have actually been carried out. This means taking the temperature of the risks again and using some form of Expected Value estimate, like Impact and Probability, to re-score the risks as if the mitigation actions had been carried out. This gives you both an idea of how effective the mitigation actions are and a view of what risks remain that still need addressing. However, this Post Mitigation Score is only relevant once the mitigating actions are carried out. It is all very well having a huge risk to the organisation that gets scored down to a low impact risk, but only if those mitigating actions have actually been completed.
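To make that concrete, here's a minimal sketch in Python of pre- versus post-mitigation scoring. The `expected_value` helper and all the figures are invented purely for illustration:

```python
# Minimal sketch of pre- vs post-mitigation risk scoring using
# Expected Value = Impact x Probability. All figures are illustrative.

def expected_value(impact, probability):
    """Expected loss for a single risk."""
    return impact * probability

# (impact in GBP, probability) before and after planned mitigation
risk_pre = expected_value(500_000, 0.20)   # 100,000 expected loss
risk_post = expected_value(500_000, 0.02)  # 10,000 once actions complete

# The post-mitigation score only holds if the actions were actually done
actions_completed = False
current_score = risk_post if actions_completed else risk_pre
print(current_score)  # 100000.0 - still the pre-mitigation exposure
```

The point of the `actions_completed` flag is exactly the trap described above: the lower score must never apply until the work is verifiably done.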
Herein lies one of the biggest pitfalls for our Indiana Jones Op Risk Manager: risks appear to dissolve on the Risk Register thanks to significant mitigating actions, but those actions never take place. Rather than reducing risk to the organisation, the risk is now magnified - at least at the start of the process the risk was visible, but now the risk is effectively off the radar, free to grow and change under the cover of an all-encompassing risk register that says not to worry about it.
Take your Op Risk fedora off, put on your Project Manager's hard hat, and confirm those actions are carried out - getting evidence if you have to for the larger risks.
The Board and other external stakeholders operate at the opposite ends of the same spectrum but ultimately want the same thing - comfort that they understand the risks associated with the business that they oversee.
Let's assume, then, that you now have a clear and well understood set of risks for the entire organisation. For some organisations, the production and maintenance of the Risk Register will be as far as the process goes for the Op Risk Manager, with the detailed analysis being handled in other parts of the business. Whether your role hands off these risks to other teams or you are responsible for the end-to-end process, it is still important to understand 'what happens next' so that your input is as valuable and useful as possible. This means entering the world of Risk Analytics...
The subject of Risk Analytics could take up another 10 articles like this one, and this only scratches the surface of a subject that is both technically deep and ripe with different approaches from which to pick. Rather than skim over the top or pick a single approach, we'll cover some of the most useful techniques and I'll point you to some excellent articles where you can take your discovery further if you so wish.
So far we've created the risk register, which in itself begins to expose the nature of the risks the business is facing, and we have already applied some form of quantitative tool in the shape of Impacts and Probabilities. This basic scoring process provides a way for all stakeholders to begin assessing risks right from the moment they are identified. Taking a more holistic approach, we can consider analysis techniques that different stakeholders will find useful.
Qualitative Risk Analysis
Simple, non-quantitative analysis is extremely useful here and there are many models you can use to illustrate, sort and define the organisational risks. We already started a basic qualitative review simply by creating a list of the risks and doing that macro assessment - "what do risks look like here?". Sorting those risks by type, department or region, or sorting them by 'degree of scariness', will all give you that quick qualitative view of risks. Comparing new risks with mitigated risks or closed risks also gives insight into the changing face of the organisation. But there are other qualitative assessment tools, and I've repeated a few useful models below with some links to more detailed sources and credits.
A useful overview of risk factor analysis - http://www.lanl.gov/orgs/d/d5/documents/risk-fact.pdf
A general overview of qualitative assessment - http://www.anticlue.net/archives/000817.htm
This is a big document - don't be put off by the German preface! It's also very detailed, but it does show how qualitative data can be used with quantitative data to give a better overall view of risk: http://www.bundesbank.de/download/bankenaufsicht/dkp/201109dkp_b_.pdf - and that takes us nicely to the next point...
Quantitative Risk Analysis
The non-quantitative models provide a valuable tool for analysing risks and forming mitigation strategies, but they do little to inform the business of the financial implications. It is useful at this stage to think of quantitative analysis tools as falling into two camps: -
Deterministic - this type of analysis considers a range of outcomes using a limited number of scenarios, such as 'Best Case' and 'Worst Case', where, for example, financial losses are single-point estimates at the low and high ends of the expected range.
Stochastic - these models rely on simulation of the risks and use variables within a range of possible inputs and outputs to create a probability distribution that can be immensely informative and powerful in predicting losses.
Right, it's at this point we need to sound an ALERT! STOP! TAKE A DEEP BREATH AND READ THE FOLLOWING!
Right about now, as words like 'stochastic' and 'probability distribution' pop up, readers will fall into two camps: -
The analysts and quants who work with this stuff every day and will read on to see if they agree with what I'm about to say
Everybody else, who start nodding off and click the back button to look for another article. But bear with me, for two reasons: because this stuff is easy (although it doesn't sound it) and because you're 90% through this article already. If you quit now it will be like walking out of the theatre just as Indy and Marion were witnessing the opening of the Ark! Read on...
Deterministic models - let's dispense with that title for a start and call them "single outcome" models. This is a crucial point. Each time you run a single outcome model you are estimating just that - a single version of the future based on a single set of inputs. If you want to estimate best case / worst case outcomes for a budget, for example, you will need to run two models. Your best case scenario will have low probability rates, low expected losses and perhaps a smaller set of realised risks. That sounds fine for a simple best / worst forecast, but what if you need a base case model as well? Oh, and a scenario based on alternative economic data? You can see how the number of scenarios you need to build and maintain becomes unmanageable in the real world. You also get many, many outputs and ranges of risk loss that become confusing - which one do you use as the true base case? These single outcome models are useful for getting a sense of scale and testing sensitivities, but have limited use in understanding true future impact.
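As a sketch of why single outcome models multiply, here is what a best case / worst case pair might look like in Python. Every risk figure below is invented for illustration:

```python
# Sketch of a "single outcome" (deterministic) model: one fixed set of
# inputs produces one version of the future. All figures are invented.

def total_expected_loss(risks):
    """Sum Impact x Probability over one fixed scenario."""
    return sum(impact * probability for impact, probability in risks)

# Each scenario needs its own complete set of inputs...
best_case = [(10_000, 0.05), (50_000, 0.10), (200_000, 0.01)]
worst_case = [(30_000, 0.40), (80_000, 0.60), (500_000, 0.15)]

print(total_expected_loss(best_case))   # 7500.0
print(total_expected_loss(worst_case))  # 135000.0
# ...and every extra scenario (base case, alternative economic data, ...)
# is another full model to build and maintain.
```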
So we know that single risks can be quantified within a range of impacts, and it is possible to create a holistic model of all risks using those high and low ranges. But how do you acknowledge that a whole range of risks have a whole range of possible outcomes? For example, not every risk will be realised as a future event and a loss, and therefore holding reserves for all losses is a huge drain on capital. There must therefore be a 'middle ground' where we can operate within a reasonable range of future risk expectations, and this is where stochastic models become essential to managing the financial landscape of the organisation. Let's again abandon the title and call stochastic models 'simulations'.
These simulations allow you to run many single outcome models at once by assigning a range to each variable and then producing an outcome for each. Taking this at the individual risk level, you might have impacts of £10, £20 and £30, and 'hitting the run button' will produce all three risk outcomes for you. But of course, your risk register doesn't have one risk, it has many. Taking that example a little further, if we had 3 risks each with 3 possible impacts, 'hitting the run button' now produces 3 x 3 x 3 = 27 possible combinations of outcomes. A step further: we don't really have 3 discrete impacts for each risk, we say the impact will be within a range, let's say £1 intervals between £10 and £30. That's 21 possible impacts per risk, and across our 3 risks there are now 21 x 21 x 21 = 9,261 possible combinations - for just 3 risks. Clearly you couldn't run that many single outcome models and take an average of the losses. What then if you have 100 risks? Each time you run a single simulation, you get one possible version of the future. Running 10,000 simulations gives you 10,000 possible versions of the future. This is where the real strength of simulation modelling comes into its own, and by harnessing the concept of a probability distribution we start to get some real confidence around our forecasts.
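The 'run button' idea can be sketched in a few lines of Python. The risk probabilities and the £10-£30 impact ranges below are illustrative, not taken from any real register:

```python
import random

# Sketch of a simulation run: each run decides, per risk, whether it
# materialises, and if so draws an impact from its range.
random.seed(42)  # fixed seed so the example is reproducible

risks = [
    {"probability": 0.3, "low": 10, "high": 30},
    {"probability": 0.1, "low": 10, "high": 30},
    {"probability": 0.5, "low": 10, "high": 30},
]

def one_run(risks):
    """One possible version of the future: total loss across all risks."""
    total = 0
    for r in risks:
        if random.random() < r["probability"]:            # did it happen?
            total += random.randint(r["low"], r["high"])  # how bad was it?
    return total

# 10,000 runs -> 10,000 possible versions of the future
losses = [one_run(risks) for _ in range(10_000)]
print(min(losses), max(losses), sum(losses) / len(losses))
```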
Taking our theoretical 10,000 simulations, let's say that at the lowest end of the spectrum the model says no risks materialise and no losses are incurred. At the other end of the spectrum every risk materialises, each with the maximum possible financial loss. Clearly, the chance of no risks materialising at all is very slim, as is the chance of all risks materialising with maximum losses. You could take an average of the lowest loss (£ zero) and highest loss (£ lots) and use that as the forecast, but then we are back where we started with our simple best case / worst case scenario, and we have totally ignored the 9,998 other outcomes in between.
Probability distribution is therefore interested in where those other 9,998 outcomes sit between the high and the low. Are they evenly distributed, or are they skewed towards the 'riskier' end of the spectrum? The diagram below, borrowed from vertex42.com, illustrates how 5,000 simulations might be distributed over the range.
I've purposely talked in rather abstract terms so far without actually giving you an example of a simulation model, and it was at this point that I was going to go into a very detailed description of using probability distributions and Monte Carlo analysis. However, Vertex42 have provided such an excellent introduction to Monte Carlo analysis that I'll pass the baton to them and refer you to the article below. It provides a more detailed view of everything I have discussed above, and they've also provided a downloadable Excel example of a simple simulation model.
http://www.vertex42.com/ExcelArticles/mc/MonteCarloSimulation.html
If you went on to read the article then you now know the difference between skewness and kurtosis! If not, I've summarised the final element of using a simulation model below - actually getting a forecast.
A summary so far: -
We've created and validated our risk log.
We applied some impacts and probabilities and used mitigation actions to reduce those as far as we can.
We've checked the mitigation actions have been carried out!
We've constructed a simple simulation model around those risks which has given us 10,000 simulated total losses.
What next? We need to decide what level of losses we need to plan for, and for that we use the probability distribution, median, mean and standard deviation.
A quick revision of the terms:
The median is the 'middle' of a group of numbers - The median of 1, 4, 10, 500, 1000 is 10
The average or the arithmetic mean is a single number that typifies the set. The mean is calculated by adding up the numbers in a sample and dividing that answer by the sample size. The mean of 1, 4, 10, 500, 1000 is 303
In this example the difference between the median (10) and the mean (303) is very illustrative of how skewed the sample is towards the higher end.
Standard deviation is an indication of how 'spread out' the numbers are in a set. (I've never found it necessary to go into the maths of SD, but a great little article is found here on a children's maths site if you really feel the need! Personally, I use the STDEV function in Excel.) Using standard deviation you end up with a value representing 'spreadness' (I know that's not a word, but I think it should be...) - that is to say, the bigger the SD, the bigger the 'gap' between the numbers. This gives us a standard basis with which to compare and model different sets of data and to draw accurate boundaries within it.
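You can check the article's worked numbers with Python's `statistics` module; `statistics.stdev` is the sample standard deviation, the same calculation as Excel's STDEV:

```python
import statistics

# The worked numbers from the text, checked in code
sample = [1, 4, 10, 500, 1000]

print(statistics.median(sample))  # 10 - the 'middle' value
print(statistics.mean(sample))    # 303 - the arithmetic mean
print(statistics.stdev(sample))   # sample standard deviation, roughly 444.7
```

The large standard deviation confirms what the median-versus-mean gap already hinted at: this sample is heavily spread out towards the high end.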
So, if we need to be '95% sure' of our forecast we can use SD to do that. If we want to be '99.7% sure' of our forecast we can do that too. This is because: -
One standard deviation away from the mean covers around 68% of the results in the group;
Two standard deviations away from the mean covers around 95% of the results;
Three standard deviations covers around 99.7% of the results.
Another borrowed diagram illustrates the point perfectly...
Using our theoretical 10,000 simulations, we can therefore calculate the mean, calculate the standard deviation of the group, and then simply add the deviation to the mean once, twice or three times to find out what losses we need to cover in order to meet whatever degree of 'certainty' we require. In the diagram above, total losses are plotted along the x-axis; take the furthest point up that axis that matches the level of certainty - the upper limit of the right green band representing the 95% confidence limit.
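Putting the pieces together, here's a hedged sketch of that final step: simulate some losses, then report mean plus one, two or three standard deviations. The loss distribution below is invented purely for illustration, and the 68/95/99.7 bands strictly apply to roughly normal distributions:

```python
import random
import statistics

# Illustrative stand-in for the 10,000 simulated total losses:
# 3 risks, each losing a uniform 0-30 per run (invented figures)
random.seed(1)
losses = [sum(random.randint(0, 30) for _ in range(3)) for _ in range(10_000)]

mean = statistics.mean(losses)
sd = statistics.stdev(losses)

# Add the deviation to the mean once, twice or three times
for k, certainty in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    print(f"cover losses up to {mean + k * sd:.1f} for ~{certainty} certainty")
```

The loss level you actually reserve against is then just the mean plus however many deviations your required degree of certainty demands.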
For non-analysts, that's probably as far as you want to go in terms of calculating risk outcomes. For the real maths masochists among you, we should then consider risk correlation and standard error, but I'll cover those topics in another article.
We've just covered a huge amount of ground in one relatively short article, so it's time for a quick summary of what we have covered so far...
Summary of the points we've discussed
Think about your stakeholders and what they will want out of the process. In particular, invest some time in understanding not only their roles but their personality types: how detail-conscious they are, how bought into the process they are and what they will want out of the risk management process;
Time spent creating a common language is well spent - get some parity across the organisation. What is risk? What are the boundaries and ranges of impact and probability? Use real organisational examples of risk impacts to form a common ground;
Challenge risk scores and give feedback both individually and as a group to embed that common language and level playing field;
Mitigation scores are important - but they are only relevant if the mitigation actions have been taken and continue to be valid;
Take time to understand the risk landscape that the register is showing you. The qualitative assessment is just as important as the quantitative and can yield a valuable insight into the organisation, department, managers and external environment;
If you hand off the quantitative analytics to someone else, make sure you see the end result and spend a little time with those analysts and begin to understand how the risks get rolled into financial statements and other reporting.
I've left the real challenge of Op Risk management right to the end. This isn't a one-off process; more than likely your organisation will have a monthly process for updating and monitoring risks, and the real challenge is to keep the process fresh for yourself and for your contributors. It is easy to let the process fall into a routine of form filling and scoring, and that is a recipe for complacency and a completely meaningless list of risks. Some ideas for keeping the risk process alive: -
Focus on different risk themes every month and revisit individual department risks that match that theme;
Invite your contributors to give presentations on their own approach to risk management in their own parts of the organisation;
Ask guests to give their views on risk assessment. These could be senior managers within the organisation that have a part to play in dealing with the output of the risk register;
Get a bi-annual overview of the analytical output from the process and share it with the contributors;
Review best practice from other organisations.
There you go, some thoughts on the Op Risk process.
Pitfalls, Villains, Treasures and Risks all the way..!
So whether you are an Indy or a Marion, go brave the challenges and conquer!