Julian Talbot

What's right with risk matrices?

For good reason, risk matrices appear in risk management guides and reference books. Despite much criticism, they are used by many organizations, and in the right hands, they are a practical and easy-to-use tool that can help most organizations in most circumstances to:

  • promote robust discussion (sometimes more useful than the actual rating)

  • provide some consistency in prioritizing risks [1]

  • help keep participants in a facilitated risk workshop on track

  • focus decision-makers on the highest priority risks

  • present complex risk data in a concise visual fashion (e.g., bubble charts)

They can be very effective for getting timely results in a facilitated risk workshop and presenting data.

Limitations of risk matrices

Risk matrices have many limitations and are not a panacea for all ills. In the hands of the inexperienced, the biased, or individuals with an agenda, they can generate misleading ratings. In his article “What’s Wrong with Risk Matrices?” Tony Cox [i] suggests that they have the following limitations:

  1. They can correctly and unambiguously compare only a small fraction of randomly selected pairs of hazards and can assign identical ratings to quantitatively different risks.

  2. They can mistakenly assign higher qualitative ratings to quantitatively smaller risks to the point where, with risks that have negatively correlated frequencies and severities, they can lead to worse-than-random decisions.

  3. They can result in suboptimal resource allocation as effective allocation of resources to risk treatments cannot be based on the categories provided by risk matrices.

  4. Categorizations of severity cannot be made objectively for uncertain consequences. Assessment of likelihood and consequence and resulting risk ratings require subjective interpretation, and different users may obtain opposite ratings of the same quantitative risks.

To this list, I could also add that risk matrices:

  1. Don’t include any assessment of timeframes. The risk of an incident in the next two weeks might be very different compared to the next ten years.

  2. Can oversimplify the complexity or volatility, as some risks are relatively static over time while others can change for better or worse almost overnight.

Limitations of the limitations

Of course, all of these points are true, but they omit to mention the following fundamental issues:

  1. no tool can consistently correctly and unambiguously compare more than a small fraction of randomly selected pairs of hazards

  2. any risk assessment tool can assign identical ratings to quantitatively different risks

  3. prioritizing the allocation of resources is not the role of the risk matrix – responsibility for the selection of risk treatments belongs to the risk manager [2]

  4. risk matrices are still one of the best practical tools that we have

  5. “the use of risk matrices is too widespread (and convenient) to make cessation of use an attractive option” [ii]

  6. risk matrices are designed to provide qualitative or semi-quantitative ordinal information (relative priority), not mathematically precise data

  7. if a risk is in the ‘High’ or the ‘Top 10’ list, it requires attention, and whether it is third or fourth on the list is not likely to be significant

  8. the inherent limitations of decision-making under uncertainty, the nature of political decision-making and the fundamental processes of human risk perception mean that subjective decision-making will always be a part of the risk assessment process no matter what tool is used

  9. risk matrices are a tool that supports risk-informed decisions, not a tool for making decisions

  10. last but not least, most of the flaws listed above only exist if risk matrices are used in isolation, which is rarely the case

Overcoming the limitations

The last point above is the most significant of all. If you use a risk matrix in conjunction with at least the following tools, they can be effective in supporting quality decision-making:

  1. A well-defined risk statement

  2. Robust likelihood and consequence definitions

  3. A hierarchy of controls to prioritize risk treatments

  4. Expected monetary value (EMV) or equivalent cost/benefit of risk treatments

The first two items on this list are the most critical. Precise risk statements and definitions for likelihood and consequence support consistent ratings. If these items are well-defined, you will likely achieve similar, if not identical, rankings from independent teams. If not, consensus will be unlikely.

It is also important to have a process for considering all risks and risk treatments collectively. Each treatment is likely to mitigate several risks, albeit to differing degrees, so the optimal allocation of resources usually involves a complex decision-making process. The last two tools on the above list are not specific to risk matrices, as they are about prioritizing risk treatments. A hierarchy of controls [3] helps with the selection of effective controls. It does not, however, consider cost/benefit; that is a separate, although linked, process. [iii]
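The cost/benefit logic here can be sketched numerically. Below is a minimal illustration of EMV-style comparison across a basket of risks; all figures, risk names, and reduction factors are entirely hypothetical:

```python
# Sketch: ranking risk treatments by expected monetary value (EMV).
# Each treatment may reduce several risks by differing fractions.
# All numbers below are hypothetical illustrations, not real data.

risks = {
    "data_breach": {"annual_prob": 0.10, "impact": 500_000},
    "water_leak":  {"annual_prob": 0.70, "impact": 2_000},
}

treatments = {
    "staff_training": {"cost": 15_000, "reduction": {"data_breach": 0.40}},
    "plumbing_fix":   {"cost": 500,    "reduction": {"water_leak": 0.90}},
}

def net_benefit(t):
    """Expected annual loss avoided across all affected risks, minus cost."""
    avoided = sum(risks[r]["annual_prob"] * risks[r]["impact"] * frac
                  for r, frac in t["reduction"].items())
    return avoided - t["cost"]

# Rank treatments by net benefit, highest first
for name, t in sorted(treatments.items(), key=lambda kv: -net_benefit(kv[1])):
    print(f"{name}: net benefit ${net_benefit(t):,.0f} per year")
```

Note that this kind of calculation sits alongside the risk matrix, not inside it: the matrix prioritizes risks, while EMV-style comparisons prioritize treatments.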

Many risk matrices have inadequate likelihood and consequence definitions, and even more commonly, users attempt to use them to assess poorly defined risks. Without these two things in place, a risk matrix will provide meaningless, if any, information.

Defining risks

CASE is the best tool I know for defining a risk statement. To articulate a risk, you need to consider at least the following four characteristics:

Consequence – what is the impact of this risk on your objectives?

Asset – what asset(s) are at risk?

Source – what are the hazards or threat actors behind this risk?

Event – what particular type of incident is being considered?

Why do you need these four items to define a risk statement? Consider risks such as “terrorism,” “climate change,” or “compromise of sensitive information.” Each of these labels covers many distinct risks; there is no single risk of "climate change." It is very difficult, if not impossible, to analyze and rate these risks if we only have the event and the asset, and virtually impossible to achieve consensus on them. Everyone has a unique context and individual perception of what ‘terrorism’ or ‘climate change’ means.

Consider the risk of "compromise of sensitive information." The consequences and likelihood are very different if an organization is a victim of:

• industrial espionage by competitors

• theft by criminals seeking to sell it back to you

• espionage by foreign intelligence services

• hacking

• computer user security access errors

• theft of a briefcase from a car by petty criminals

• staff inadvertently releasing the information to the corporate website

• premature distribution of a media release

• accidental emailing of a sensitive document to the wrong party

Not only would consequences and likelihood vary and change the risk, but the countermeasures would be very different. Consider the same risks revised to include CASE:

  • Compromise of sensitive information (Asset) due to untrained staff (Source) inadvertently posting incorrect files (Event) to the corporate website, resulting in a competitive disadvantage, reputation damage, and financial loss (Consequence).

  • Financial loss (Consequence) due to espionage (Event) by competitors (Source) seeking sensitive information (Asset).

  • Failure to protect information (Asset) from theft (Event) by opportunistic criminal (Source) elements while in transit leads to potential compromise (Consequence) of sensitive information.

These are very different risks, and they are much easier to assess consistently using a simple risk matrix. More importantly, they are much easier to prioritize and to develop effective countermeasures for.
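The four CASE elements lend themselves to a structured record, so that every risk in a register carries all four components. A minimal sketch; the field names and the rendered sentence form are my own, not a standard:

```python
# Sketch: a CASE risk statement as structured data. Any risk missing
# one of the four elements simply cannot be constructed.
from dataclasses import dataclass

@dataclass
class RiskStatement:
    consequence: str  # impact on objectives
    asset: str        # what is at risk
    source: str       # hazard or threat actor
    event: str        # type of incident being considered

    def __str__(self):
        return (f"{self.consequence} due to {self.event} by {self.source} "
                f"affecting {self.asset}")

risk = RiskStatement(
    consequence="financial loss",
    asset="sensitive information",
    source="competitors",
    event="espionage",
)
print(risk)  # -> financial loss due to espionage by competitors affecting sensitive information
```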

Likelihood and consequence definitions

Human beings have great difficulty in making accurate judgments under uncertainty. [iv] Our ability to select from a range of likelihoods and consequence ratings is moderate at best. Additionally, we may not have a lot of good choices, so the art and science of making good risk decisions is an ongoing challenge.

Often, we must make difficult decisions under conditions of extreme uncertainty. Hard statistical data is usually lacking, and even when we have quantitative data, it is often limited in application. Insurance companies can tell you how many houses will likely burn each year in your city, but they can’t tell you how likely it is that your house will be one of them. Mandating smoke detectors can make ten years of statistical information meaningless overnight. Tools like Monte Carlo modeling help us see our risk exposures if we have enough data, but they still need subjective interpretation.
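As a rough illustration of the Monte Carlo modeling mentioned above, the sketch below simulates annual loss exposure, assuming Poisson event arrivals and lognormal severities. Both distributional choices and all parameters are illustrative assumptions, not data-driven:

```python
# Minimal Monte Carlo sketch of annual loss exposure.
# Assumptions (illustrative only): incidents arrive at a constant rate
# (Poisson process); each incident's cost is lognormally distributed.
import random
import statistics

random.seed(1)  # reproducible runs

def simulate_year(rate=0.7, sev_mu=8.0, sev_sigma=1.0):
    """One simulated year: draw incident count, then sum severities."""
    # Poisson draw via exponential inter-arrival times within one year
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            break
        n += 1
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(n))

losses = [simulate_year() for _ in range(10_000)]
print(f"mean annual loss ~ {statistics.mean(losses):,.0f}")
print(f"95th percentile  ~ {sorted(losses)[int(0.95 * len(losses))]:,.0f}")
```

Even a toy model like this makes the point in the text: the distribution it produces still has to be interpreted subjectively before it supports a decision.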

So, what can we do to help ourselves make better decisions under conditions of uncertainty? First, we need a consistent approach to compare apples to apples. Risk matrices can be invaluable and practical decision-support tools in this respect. But only if we choose descriptors such that individuals working in isolation generate similar risk ratings.

Consequence ratings

When describing a risk, you must determine the type of consequences you want to consider and the range of consequences that might occur if the risk eventuates. Not the worst-case consequence, but the worst credible scenario. For example, ‘slips, trips, and falls’ can cause death if someone lands on a sharp stick or hits their head on concrete, but that is not a particularly credible outcome. The most credible outcome is that a person will get up with bruises, a graze, or even a sprain. Death is possible but rare. If appropriate to the circumstances, an organization might like to consider two or more risks. For example:

  • Minor injury to staff member slipping on water in the kitchen due to a leaking pipe.

  • Major injury to a staff member slipping on water in the kitchen due to a leaking pipe.

  • Death of a staff member slipping on water in the kitchen due to a leaking pipe.

The likelihood and risk ratings of these are so different as to make them different risks. These are not the only consequences of slipping, of course. Others might include lost time, capability impacts, financial costs, and reputation damage. It’s important to consider any downstream impacts of a particular risk that extend beyond the immediately obvious.

For example, when rating “Minor injury as a result of a staff member slipping on water in the kitchen due to a leaking pipe,” the consequence is likely INSIGNIFICANT regarding the People, Economic, and Capability consequences in Table 1. It would not be rated at all in terms of property or information. At the same time, depending on the context, it might rate as NEGLIGIBLE in terms of Reputation. Therefore, the overall risk rating would be based on the higher of these two (Negligible).

Table 1: Example risk consequence descriptors
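The "highest rating across categories" rule used in the kitchen-slip example can be expressed compactly. A sketch, with an assumed ordinal scale and assumed category ratings:

```python
# Sketch: the overall consequence is the highest rating across categories.
# The scale and the category ratings below are illustrative assumptions.
SCALE = ["Insignificant", "Negligible", "Minor", "Moderate", "Major"]

def overall_consequence(ratings):
    """Return the highest rating on the ordinal scale."""
    return max(ratings.values(), key=SCALE.index)

ratings = {
    "People": "Insignificant",
    "Economic": "Insignificant",
    "Capability": "Insignificant",
    "Reputation": "Negligible",
}
print(overall_consequence(ratings))  # -> Negligible
```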

Likelihood ratings

Likelihood can be framed in quantitative, semi-quantitative or qualitative ways. Where we don’t have sufficient data for quantitative analysis and would like something more granular than ‘likely’ or ‘unlikely,’ risk matrices are well suited for semi-quantitative analysis. However, it is important to understand that semi-quantitative is only for ordinal ranking. The numbers themselves have no mathematical meaning other than to suggest relative rankings and perhaps relative priorities.

There are many ways of representing likelihood; however, in the example below, I have elected to use the following terms:

  • Chance: a qualitative assessment of likelihood. Often simply an ordinal ranking ranging from very low to very high.

  • Probability: a statistical or actuarial assessment of likelihood. Usually ranging from 0 to 1 and based on mathematical or statistical data.

  • Frequency: the rate at which something occurs or is repeated over a given sample. Perhaps, for example, the number of times per year something happens.
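These three framings are related. If you assume events arrive independently at a constant rate (a Poisson assumption, which will not hold for every risk), a frequency can be converted into a probability for a given timeframe. A sketch:

```python
# Sketch: converting an observed frequency into a probability for a window,
# assuming independent events at a constant rate (Poisson assumption).
import math

def prob_at_least_one(rate_per_year, years=1.0):
    """Probability of one or more events in the window, given an annual rate."""
    return 1 - math.exp(-rate_per_year * years)

# An incident seen 7 times in 10 years ~ rate of 0.7 per year
print(f"{prob_at_least_one(0.7):.0%} chance in any given year")   # ~50%
print(f"{prob_at_least_one(0.7, 10):.0%} chance over ten years")
```

The same frequency implies very different probabilities over different timeframes, which is one reason the timeframe of an assessment matters so much.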

No two individuals will share the same understanding of the words used to provide ordinal rankings. It is therefore essential to shape their perception by using explicit descriptors. The table below provides one possible example, but each context requires specific descriptors.

Table 2: Example risk likelihood descriptors

Frequency is another way to express probability data. Gerd Gigerenzer [v] offers many examples of educated professionals unable to interpret probabilities. He goes on to show that the most effective way for people to understand likelihood is to state it as a frequency. In a 1998 study of medical professionals, the majority failed to answer the following question correctly:

"About 0.01 percent of men with no known risk behavior are infected with HIV. If such a man has the virus, there is a 99.99 percent chance that the test result will be positive. If a man is not infected, there is a 99.99 percent chance that the test result will be negative. What is the chance that a man with no known risk behavior who tests positive actually has the virus?"

Most of the professionals, and most people, think the answer is 99.99 percent or higher. Now consider the same question worded using natural frequencies:

"Imagine 10,000 men who are not in any known risk category. One is infected and will test positive with practical certainty. Of the 9,999 men who are not infected, one will test positive. So we can expect that two men will test positive."

Presenting the data in natural frequencies makes the odds much clearer: roughly 1 in 2 (50%) that someone from a low-risk category with a positive test result is HIV positive.
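The natural-frequency result can be checked directly with Bayes' theorem, using the figures from the quoted question:

```python
# Bayes' theorem applied to the HIV screening question above.
prevalence = 0.0001    # 0.01% of low-risk men infected
sensitivity = 0.9999   # P(positive | infected)
specificity = 0.9999   # P(negative | not infected)

true_pos = prevalence * sensitivity            # infected AND positive
false_pos = (1 - prevalence) * (1 - specificity)  # healthy AND positive

posterior = true_pos / (true_pos + false_pos)  # P(infected | positive)
print(f"P(infected | positive) = {posterior:.3f}")  # -> 0.500
```

The arithmetic confirms the natural-frequency reasoning: of the two expected positives in 10,000 low-risk men, only one is actually infected.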

Table 2 provides several options for expressing likelihood. Natural frequencies will usually be the most meaningful and, therefore, likely to deliver the best results.

Using a risk matrix

It is important to remember the purpose of a risk matrix. We usually don't need a precise calculation of the risk, nor a detailed determination of the potential impact on objectives; indeed, any such attempt is rarely useful. When we use a risk matrix, we are trying, in essence, to prioritize a basket of risks. Where there are too many risks to give them all the same level of attention, we need to aggregate them into groups or select the most significant. Then we can focus on those requiring urgent management, deal with less important risks, and monitor the rest. Color-coding or categorization reflects this broad classification of risks into high, medium, and low priority.

In some cases, it may be enough merely to rank risks against each other to determine relative prioritization. All ‘red’ risks should be treated as a high priority, and we may not need to worry about whether some are redder than others.

For illustration, I’ve used a 5×5 risk matrix with five levels of risk (Very Low, Low, Medium, High, and Very High) in Figure 1. Caveat: no specific level of granularity is inherently better than any other. As long as a matrix has enough granularity for the purpose to which it is being applied, it has the right number of squares. A 2×2 matrix may be suitable for comparing three risks, or you may use a 4×8 matrix to compare 25 risks in your organization. The numbers in the squares of the risk matrix (2 to 10) are optional. They are there to provide some granularity within specific risk ratings. In this instance, the likelihood and consequence have simply been summed to create ordinal priorities. Multiplying them instead would deliver broadly similar, though not identical, results when prioritizing risks.
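The summed ordinal scoring described above can be sketched as follows. The band boundaries here are illustrative assumptions rather than a standard, and the score is ordinal only:

```python
# Sketch: a 5x5 matrix where likelihood and consequence (each 1-5) are
# summed into rating bands. Band boundaries are illustrative assumptions.
def risk_rating(likelihood, consequence):
    score = likelihood + consequence  # ordinal ranking only, not arithmetic
    if score <= 3:
        return "Very Low"
    if score <= 5:
        return "Low"
    if score <= 7:
        return "Medium"
    if score <= 9:
        return "High"
    return "Very High"

print(risk_rating(2, 1))  # e.g. Unlikely + Insignificant -> Very Low
print(risk_rating(3, 2))  # e.g. Possible + Negligible    -> Low
```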

Figure 1 also illustrates the complexities of assessing a relatively minor risk and the perils of inadequate data. It considers the risk of “Minor injury due to a staff member slipping on water in the kitchen due to a leaking pipe”. It illustrates the importance of considering historical data and downstream impacts. Even with a well-defined risk, the likelihood and consequence may not be as they appear.

In this hypothetical but realistic example, the risk assessors:

  1. Initially downplayed the likelihood as ‘Unlikely’ (‘Could occur at some time’ or ‘<35%’). People often assess the probability of an event by the ease with which instances or occurrences can be brought to mind. [vi] After reviewing historical incident reports, they realized it had occurred more than seven times in the past ten years and was ‘Possible’. [4]

  2. Initially considered only the consequence of ‘minor injury’, which gave a rating of ‘Insignificant’. After considering downstream impacts (‘scrutiny by internal committee’), it was upgraded to ‘Negligible’.

Though hypothetical, the initial ratings are a credible outcome. We downplay risks that are pedestrian, common, familiar, or well understood, and exaggerate risks that are spectacular, personified, or highly publicized. [vii] [viii] [ix]

Figure 1: Example Risk Matrix

As illustrated in Figure 1, using the likelihood and consequence descriptors resulted in the risk being upgraded from ‘Very Low’ to ‘Low’. This is minor in itself and doesn’t change the risk fundamentals. It can, however, change the management attention this risk receives when it is prioritized among other risks. It is also a good example of the folly of relying solely on risk matrices to make resource allocation decisions: although it is a relatively low risk, it is still worth treating, as the cost of having a plumber repair the leak is likely insignificant.

Three points are important to clarify at this juncture:

  1. Using the highest credible likelihood and consequence rankings from each category is useful.

  2. This example is hypothetical. Other risks will have their unique characteristics, which may provide very different ratings.

  3. A risk matrix that considers only one category of consequence and/or only one likelihood estimation will likely be of limited value. It is also likely to yield inconsistent results because it will attempt to measure all risks against one category.

Using risk matrices to present data

If nothing else, the humble risk matrix is one of the most effective tools to convey risk information to an audience quickly.

Figure 2: Risk Bubble Chart (Size of the bubble indicates control effectiveness)

Most people will be familiar with the use of bubble charts, but a simple example is presented in Figure 2 to illustrate their use.

Figure 3: Example of risk matrix used to present complex data


In Figure 3, at least 15 pieces of information are conveyed regarding an organization's risks using a risk matrix bubble chart, including:

  1. Current risk rating (Position ‘C’ in green on matrix)

  2. Inherent risk rating if no controls were in place (Position ‘A’ in red)

  3. Past risk ratings (Position ‘B’ in yellow)

  4. Changes in risk ratings over time (Delta between positions ‘B’ & ‘C’)

  5. Expected residual risk after implementation of treatments (Position ‘D’ in pale green)

  6. Likelihood (Vertical position on matrix)

  7. Consequence (Horizontal position on matrix)

  8. Timeframe of assessment (Title)

  9. Rough order of magnitude comparative costs of current spend on risk treatments (number of ‘$’ symbols on arrows between ‘A’ & ‘B’ and ‘B’ & ‘C’)

  10. Comparative benefit and costs of proposed risk treatments (Delta – expressed by the length of arrows – and number of ‘$’ between risk positions)

  11. Volatility, i.e., whether a risk is relatively stable over time or can change rapidly with little prior warning (shape of the symbol)

  12. Level of confidence in the quality of the risk rating (size of the symbol)

  13. Whether or not the risk has occurred in this organization in the past (Risk number in plain text or Bold Italic)

  14. Comparative priority of one risk to another (position on matrix)

  15. Level of management intervention and responsibility required to address the risk (Colour of the grid square in which the risk is located)

This is just a sample of the way in which risk information can be presented using a risk matrix. The only limitation regarding the amount of information that can be transmitted is your imagination.


Risk matrices are useful for fast, effective, practical risk assessment processes. But they should not be used in isolation. Any assumptions or embedded judgments need to be explicitly articulated. In particular, (a) the risks, as well as (b) the likelihood and consequence descriptors must be well-defined.

Risk matrices are not suited for every circumstance and have inherent limitations. But they also have a place in the toolbox of every risk manager who wants to:

  • provide consistency and granularity to risk prioritization

  • encourage and facilitate robust discussion

  • provide a point of focus when assessing risks

  • present complex data concisely


If you want to build your own risk matrix, you can do so in SECTARA and download it into a Word document. Some friends and I have built an awesome risk assessment platform to align with the ISO 31000 risk management process. It builds a risk register and treatment plan to export to MS Word for editing. It also has a cool control effectiveness rating module, threat assessment, hazard ranking, and asset criticality rating systems.



[1] This benefit can be perilous if you allow the rating to dominate common sense. In the hands of a skilled facilitator, however, risk matrices are one of the fastest and most consistent ways to prioritize risks.

[2] Risk treatments typically treat more than one risk in a basket of risks (risk register). Thus, risk treatment selection should not be based solely on the priority of risks or on their effectiveness against one single risk.

[3] ESIEAP stands for Elimination, Substitution, Isolation, Engineering, Administrative controls, and Protective measures. It is a decision-making tool for evaluating which bundle of risk treatments would be most effective. For example, the best way to mitigate the risk of Malaria on holiday is to eliminate the risk by not traveling. A second best option if you still want to travel for a holiday is to substitute another location with no malaria.

If you must visit a malarial region, the third best option is to isolate yourself in parts of the country where it is not prevalent. And so on down the hierarchy: engineering controls such as flyscreens and bed nets are less effective again, followed by administrative controls such as not going out at dawn or dusk. The least effective measures include protecting yourself with long sleeves and insect repellent.

[4] In the absence of historical data, the equivalent experiences of similar organizations would be likely to produce a similar result. Even without pure data, the overall process would still yield valuable discussion. And at the very least, an assessment of risk would offer some prioritization, however inexact. It might also offer a benchmark for comparison against any incidents or future data.



[i] Cox, L.A. (2008), ‘What’s Wrong with Risk Matrices?’, Risk Analysis, Vol. 28, No. 2, DOI: 10.1111/j.1539-6924.2008.01030.x

[ii] Cox, L.A. (2008), ‘What’s Wrong with Risk Matrices?’, Risk Analysis, Vol. 28, No. 2, DOI: 10.1111/j.1539-6924.2008.01030.x

[iii] Talbot, J. & Jakeman, M. (2009), Security Risk Management Body of Knowledge, Wiley Interscience, NY, USA.

[iv] Plous, S. (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, NY, USA.

[v] Gigerenzer, G. (2002), Calculated Risks, Simon & Schuster, NY, USA.

[vi] Tversky, A. and Kahneman, D. (1974), ‘Judgment under Uncertainty: Heuristics and Biases,’ Science, 185:1124–1130.

[vii] Glassner, B. (1999), The Culture of Fear: Why Americans are Afraid of the Wrong Things, Basic Books, NY, USA.

[viii] Slovic, P. (2000), The Perception of Risk, Earthscan Publications Ltd, London, UK.

[ix] Kluger, J. (2006), ‘How Americans Are Living Dangerously,’ Time, 26 Nov 2006, NY, USA.



As always, you can download the graphics by right-clicking them. You can also download the spreadsheet from my Downloads area if you want to use it as a template to create your own.



I like risk matrices and use them relatively often, but not for all risk assessments. And when I do use them, I do so with caution. Please re-read the section called 'Limitations of risk matrices' before you go off tilting at windmills armed only with your trusty risk matrix and Sancho Panza at your side. You may find it is popular in certain circles to speak disparagingly of risk matrices. The implication is that they are part of an outmoded "instrumentalist approach," perhaps accompanied by comments favoring risk culture or behavioral psychology. I'm a big fan of the psychology of risk and the importance, even primacy, of culture. But, as any pilot will tell you, aviation would be a lot less safe without instruments such as pre-flight checklists and emergency procedures. Instruments still have their place.


Parting Comments

This has been one of my most popular articles over the years, and I am in the process of expanding it into a short book. I'd welcome your suggestions or questions in the comments field, in particular any areas you would like me to expand upon, add, or remove entirely. Is there anything about risk matrices you agree with, disagree with, or don't understand?

For example, a few ideas that I'm working on adding to the article include:

  • Emphasizing the limitations of risk matrices

  • Considering negative and positive risk consequences

  • A combined positive/negative risk matrix example.

  • Examples and pros and cons of 2×2, 3×3, etc. matrices

  • The process for using risk matrices to discuss and then communicate risks and treatments

  • The importance and implications of the timeframe

  • How to use risk matrices to communicate changes in risk over time

  • The relationship between risk matrices and probability distributions

  • How to discuss, compare, and contrast risks (e.g., Cancer vs. diabetes, GFC vs. War, etc.)

  • Consequence on the x or y axis? It matters, but not for the reasons you might guess.

  • Matrices with 10 consequence levels (5 positive, 5 negative)

  • Using P10, P30, P50, P70, P90 to create a simple distribution

  • Communicating comparative risks

  • Examples of 3-axis matrices. E.g., Time or frequency (of a particular activity) on the Z-axis

  • Alternatives to likelihood & consequence (e.g., vulnerability/exposure, intent/capability)

  • Alternatives to risk matrices altogether

Last but not least, would a short video explaining the concepts be helpful?

