Julian Talbot

Why security is different from other risks

I often encounter conversations or articles proposing that a particular risk assessment approach is universally superior to all others. These views seem to be firmly entrenched among some finance professionals and academics. Such confidence, however, is rare among security risk professionals, so I thought I might try to explain why one category of risk (security) is different.


While visiting a friend in Papua New Guinea some years ago, I noticed that one side of his house was freshly painted, inside and out. But only that wall. Noting my quizzical look, my friend related how he came to renovate only one side of his house.


The first time he was broken into, the local 'rascals' smashed a window to get in while he was at work. That was understandably annoying, so he put bars on all his windows. A few months later, he came home to find his front door in splinters after being jemmied open with a massive crowbar.


Undeterred, he repaired his front door and added security doors to all the entry points of his home. The locals adapted by bringing a chainsaw on their next visit and carving a new doorway in the wall next to his driveway, which led to the one-wall renovation project. Not long after this renovation, my friend admitted defeat and moved into a secure compound, but his story illustrates the essence of why security is different from some other risks.


All risk analyses share some common characteristics. They all start with the same data, but many fail to recognize that complex systems behave differently from equilibrium systems. In plain English, dynamic risks are different from stable, predictable risks. This is one of the core reasons why central bank and financial equilibrium models consistently produce poor forecasts and risk management results.


In particular, human factors need to be considered in every risk assessment. It's a radical idea, perhaps, but humans are the source of most risks. Even extreme weather events are influenced by human actions that change our climate. You might feel that some risks, such as earthquakes, do not involve human factors, but if that were the case, earthquake risk assessments could safely ignore where we live, our construction choices, and how humans react in a crisis.


This is not to say that statistical or probabilistic models are not useful. In many fields, they rightly form the core of risk models. Insurance, flood modeling, and games of chance, to name just a few examples, are best understood via such approaches.


Security risk assessment fundamentally differs from other risk assessments because security threats are adversarial, adaptive, and asymmetric.

The Three As


Risk management comes in many forms, but one approach, which I call the 3As Risk Model, looks at three different risk management styles. Understanding which of the following environments you are analyzing is a core element missing from most risk models:

  1. Actuarial - the law of large numbers

  2. Active - the law of the land

  3. Adversarial - the law of the jungle

They can be mundane everyday risks or complex matters of international concern, as illustrated below.

3As (Actuarial, Active, Adversarial) Risk Model
Three Types of Risk

1. Actuarial Risks


Some risks are easily understood with statistical models, managed with standard procedures, reviewed based on probabilities, and adapted as required. Human discretion in responding to unusual events or system changes can also be specified. We typically have reasonable knowledge and information about this category of risks, and when they change, they usually do so for understandable and often anticipatable reasons.


Some examples include workplace safety, insurance and actuarial analysis, engineering, manufacturing, and the operation of vehicles, aircraft, and machinery. There is no specific human counterparty, and these types of risks generally operate in a state of equilibrium. These risks can be modeled and studied using the law of large numbers.


At the most basic level, they can involve games such as blackjack or backgammon, where the probabilities can be calculated and optimized. More complex examples include insurance (actuarial) calculations and engineering risks, all the way through to complex threats such as climate change or space travel. With such risks, most elements can be calculated with high precision, subject only to the availability of sufficient information and computing power.
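To make the law of large numbers concrete, here is a minimal sketch of my own (not from the book) using purely hypothetical figures: an insured event with an assumed 2% annual probability and a fixed $50,000 loss per claim. As the simulated portfolio grows, the observed average loss per policy converges towards the theoretical expected loss.

```python
import random

# A minimal sketch (hypothetical figures, not from the article) of the law of
# large numbers: with enough policies, the observed average loss per policy
# converges towards the theoretical expected loss (probability x loss).

TRUE_PROBABILITY = 0.02   # assumed chance of a claim per policy per year
LOSS_PER_EVENT = 50_000   # assumed cost of each claim
EXPECTED_LOSS = TRUE_PROBABILITY * LOSS_PER_EVENT  # $1,000 per policy per year

random.seed(42)

for portfolio_size in (100, 1_000, 10_000, 100_000):
    claims = sum(random.random() < TRUE_PROBABILITY for _ in range(portfolio_size))
    average_loss = claims * LOSS_PER_EVENT / portfolio_size
    print(f"{portfolio_size:>7} policies: average loss per policy ${average_loss:,.0f} "
          f"(theoretical ${EXPECTED_LOSS:,.0f})")
```

Small portfolios bounce around the theoretical figure; large ones settle close to it, which is precisely why actuarial risks reward statistical treatment.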


2. Active Risks


These risks require active management because they are actively evolving. Procedures, policies, checklists, etc., are essential, but operational human decision-making is the core element. The actors in this category are typically competitive and adaptive but (mostly) benign.


Examples include financial markets, project management, business enterprises, sports, and marketing. In this context, organizations and individuals face adaptive counterparties, some of whom are allies, while others seek to out-compete us. Their intention is typically to achieve their own gain; their actions may, but will not necessarily, cause loss or benefit to the other parties.


Counterparty risk is the risk that one party involved in a financial transaction or contractual agreement will default on its obligations. It arises from the possibility that the counterparty may be unable or unwilling to fulfill its contractual obligations, leading to financial losses or other adverse consequences for the other party involved.


In simpler terms, counterparty risk is the risk that the other party with whom you have a contract may fail to deliver on their promises, resulting in potential financial harm to you or your organization. This risk is particularly relevant in situations involving loans, derivatives, trade transactions, outsourcing, or any form of contractual agreement where the performance of the counterparty is critical to the success or stability of the arrangement.
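For readers who want to put a number on this, one standard (and simplified) formulation, not specific to this article, is expected loss = exposure x probability of default x loss given default. The sketch below uses hypothetical figures purely to show the arithmetic.

```python
# A simplified, hypothetical sketch of quantifying counterparty risk:
# expected loss = exposure x probability of default x loss given default.

def expected_counterparty_loss(exposure: float,
                               probability_of_default: float,
                               loss_given_default: float) -> float:
    """Expected loss from a counterparty failing to meet its obligations."""
    return exposure * probability_of_default * loss_given_default

# Hypothetical example: a $2M supply contract, a 5% chance the supplier
# defaults, and 60% of the contract value unrecoverable if they do.
loss = expected_counterparty_loss(exposure=2_000_000,
                                  probability_of_default=0.05,
                                  loss_given_default=0.60)
print(f"Expected counterparty loss: ${loss:,.0f}")  # $60,000
```

The point is not the precision of the inputs but that active risks, unlike adversarial ones, can usually be priced and negotiated in advance.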


Active risks can range from healthy competition, such as a soccer match, to normal commercial activities, all the way up to complex legal strategies argued in a Supreme Court or international courts.


Risks in this category may be zero-sum or may benefit others.

  • An example of a zero-sum risk approach would be a stallholder at a farmers market who aggressively competes on price but does not increase the market size. Such an approach will increase the market share of that stallholder but at the expense of other stallholders.

  • An example of mutual benefit is a cafe owner who advertises to increase their business. The advertising will probably help their own business and attract new customers to the area, benefiting other business owners.

The presence of adaptive counterparties means that risks are continually appearing and evolving. In this context, the primary constraint is usually the boundary between legal and illegal activity: all parties prefer to remain within the law of the land.


3. Adversarial Risks


This is the realm of security risks: risks that involve an adversary actively seeking to harm the individual or organization. They can range from burglary to world wars and everything in between; examples include terrorism, ransomware, war, fraud, geopolitical actions such as tariffs or sanctions, and petty crime. The adversary can usually be identified, at least in terms of a group of actors, if not the specific individual.


While there may not be a formal contractual obligation with an adversary, their actions and responses can significantly impact the achievement of our objectives, much like a counterparty in a contractual relationship.


In the context of security risk management, an adversary can be seen as a dynamic and adaptive counterpart. They continuously evaluate and exploit vulnerabilities, adapting their tactics to overcome risk treatments and barriers put in place by an organization. This necessitates a similar level of adaptability on the part of the organization managing the risks.


Just as in a dance, both parties, the organization and the adversary, engage in a constant interplay of actions and reactions. Risk treatments and defensive measures are adjusted based on the adversary's evolving tactics; conversely, the adversary modifies their strategies in response to the implemented risk treatments.
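The toy simulation below is my own illustration (not part of the article) of that interplay, using hypothetical attack routes and crude difficulty scores: the adversary always probes the weakest remaining point, so each new defence simply shifts their attention elsewhere, much like the chainsawed wall in the Papua New Guinea story.

```python
# A toy model (hypothetical, for illustration only) of the attacker-defender
# "dance": the adversary always targets the weakest point, and each defensive
# upgrade redirects the next attack rather than ending the contest.

# Hypothetical entry points with a crude difficulty score (higher = harder).
defences = {"window": 1, "front door": 2, "side wall": 3, "roof": 4}

for round_number in range(1, 6):
    # The adversary chooses the easiest route currently available.
    target = min(defences, key=defences.get)
    print(f"Round {round_number}: adversary attacks the {target} "
          f"(difficulty {defences[target]})")
    # The defender hardens that specific route...
    defences[target] += 3
    # ...which shifts the adversary's attention to the next weakest point.
```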


While the nature of adversaries may evolve and expand beyond human actors to include AI, for example, the fundamental principle of managing risks remains applicable. The use of advanced technologies can make adversaries less visible within systems, but their presence and potential impact persist. Whether it's a human adversary exploiting vulnerabilities or an AI system with malicious intent, the need to identify, assess, and respond to risks remains crucial.


Furthermore, the asymmetry of risk in this category is a significant concern. A single individual or a small group armed with technical expertise or advanced tools can cause substantial damage disproportionate to the size or visibility of their operation. This aspect underscores the challenges faced in managing such risks effectively.


In an ever-evolving threat landscape, red teaming plays a vital role in ensuring that organizations stay one step ahead of potential adversaries. Organizations can anticipate and reduce the likelihood and impact of attacks by regularly subjecting systems and defenses to rigorous testing and scenario modeling.


Last but not least, this source of risk lacks clear rules and regulations governing adversarial behavior. The dynamics of managing these types of risks resemble the law of the jungle, where the strong prey upon the weak. This highlights the need for a proactive, iterative approach to stay ahead of adversaries.


Conclusion


These three types of risk are not mutually exclusive, and many situations involve two or all three of them. The point of the 3As model is to understand which type of risk you are dealing with, because each requires a different risk management strategy.


Security is different because the adversary is not only adapting to your risk mitigations but is actively seeking to harm you. Unlike the actors behind most other forms of risk, adversaries in a security context are not bound by ethical or legal constraints and are often motivated to overcome defenses. Many are relentless in finding and exploiting new vulnerabilities.


After a failed assassination attempt on British Prime Minister Margaret Thatcher, an IRA spokesperson commented, "Today we were unlucky, but remember, we only have to be lucky once. You will have to be lucky always." The IRA perhaps overstates the adversary's advantage in a security context, but the simple truth remains that threat actors can, and almost certainly will, adapt to whatever we do. Our adversary has the luxury of choosing the time, place, and nature of their attacks, while for us, every security risk treatment is also the seed of a new attack method.

 

This article is adapted from a chapter in my SRMBOK Guide to Security Risk Assessment, due for publication in the second half of 2023.


If you'd like to read more extracts from the book or would like to be notified when it is available, please consider subscribing to my newsletter.

