The Factor Analysis of Information Risk (FAIR) framework was developed by Jack Jones. FAIR is a risk management framework, championed by The Open Group, that enables organizations to analyze, measure, and understand risk. The model breaks risk down into its contributing factors, defines each one, and evaluates how the factors affect one another. FAIR is most often used to establish probabilities for the frequency and magnitude of loss events.
The framework complements existing information security programs and risk analysis processes, strengthening the overall analysis. FAIR works best when an organization first establishes a shared definition of risk; from there, it applies operational risk concepts to analyze cybersecurity risk and estimate its probability.
The Main Concepts of FAIR
As the old saying goes, "You can't manage what you can't measure." FAIR treats risk as inherently uncertain and directs organizations to focus on how probable a loss event is before moving to decision making. Probability is the key to FAIR: the analysis rests on understanding the factors that drive asset loss and how likely that loss is. To measure risk through risk scenarios, several steps need to happen (a brief code sketch follows the list):
- Definition: Define the risks.
- Measurement: Measure the risks.
- Comparison: Compare the risks to past, present, and future risks.
- Informed Decision: Make informed decisions about risks.
- More Effective Management: Manage risks more effectively on the strength of the previous steps.
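To make the define-measure-compare loop concrete, here is a minimal Python sketch. The scenario names, frequencies, and dollar figures are hypothetical assumptions for illustration, not values prescribed by FAIR:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One risk scenario, defined and measured with rough estimates."""
    name: str
    loss_events_per_year: float  # estimated loss event frequency
    loss_per_event: float        # estimated loss magnitude, in dollars

    def annualized_loss(self) -> float:
        # Expected annual loss = frequency x magnitude
        return self.loss_events_per_year * self.loss_per_event

# Define and measure two scenarios, then compare them to inform a decision.
scenarios = [
    RiskScenario("Phishing-led credential theft", 4.0, 25_000),
    RiskScenario("Ransomware on file servers", 0.5, 400_000),
]
for s in sorted(scenarios, key=RiskScenario.annualized_loss, reverse=True):
    print(f"{s.name}: ~${s.annualized_loss():,.0f}/year")
```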
The FAIR Risk Taxonomy, as defined in The Open Group technical standard, is laid out as follows (a compact code encoding follows the outline):
Risk: The probable frequency and magnitude of future loss.
- Loss Event Frequency — the time-based occurrence of loss due to a threat acting against an asset
  - Threat Event Frequency — the time-based occurrence of a threat acting against an asset
    - Contact — the time-based occurrence of a threat coming into contact with an asset
      - Random — contact simply "happens"
      - Regular — the threat agent's actions are predictable
      - Intentional — the threat seeks out a specific target
    - Action — the probability that a threat will act against an asset
      - Value — the perceived value of the asset to the threat
      - Level of Effort — the effort the threat must expend to act
      - Risk — the risk to the attacker if the threat acts
  - Vulnerability — the probability that an asset will be unable to resist a threat agent
    - Control Strength — the strength of the controls protecting the asset
    - Threat Capability — how much damage a threat can do to the asset
- Probable Loss Magnitude — the likely outcome of a successful threat event
  - Primary Loss Factors — what can be lost from an asset
    - Asset Loss Factors — the value and liability of an asset
      - Value — what the asset is worth
        - Criticality — productivity impact
        - Sensitivity — the potential harm if the asset is exposed
          - Embarrassment
          - Competitive Advantage
          - Legal/Regulatory
          - General
        - Cost — the cost of replacing the asset
      - Volume — how much of the asset is at stake
    - Threat Loss Factors — the threat characteristics that influence loss
      - Competence
      - Action
        - Access
        - Misuse
        - Disclose
        - Modify
        - Deny Access
      - Internal versus External
  - Secondary Loss Factors — other factors that can influence the loss
    - Organizational Loss Factors
      - Timing
      - Due Diligence
      - Response
        - Containment
        - Remediation
        - Recovery
      - Detection
    - External Loss Factors
      - Detection
      - Legal & Regulatory
      - Competitors
      - Media
      - Stakeholders
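Because the taxonomy is a tree, it can help to encode it as one. The Python sketch below is a minimal, abbreviated encoding of the main branches above; the dict layout and the `print_tree` helper are illustrative choices, not part of the standard:

```python
# A compact encoding of the FAIR taxonomy, abbreviated to its main branches.
FAIR_TAXONOMY = {
    "Risk": {
        "Loss Event Frequency": {
            "Threat Event Frequency": {
                "Contact": ["Random", "Regular", "Intentional"],
                "Action": ["Value", "Level of Effort", "Risk"],
            },
            "Vulnerability": ["Control Strength", "Threat Capability"],
        },
        "Probable Loss Magnitude": {
            "Primary Loss Factors": ["Asset Loss Factors", "Threat Loss Factors"],
            "Secondary Loss Factors": [
                "Organizational Loss Factors",
                "External Loss Factors",
            ],
        },
    },
}

def print_tree(node, depth=0):
    """Walk the taxonomy depth-first, printing an indented outline."""
    pairs = node.items() if isinstance(node, dict) else ((leaf, None) for leaf in node)
    for name, child in pairs:
        print("  " * depth + name)
        if child is not None:
            print_tree(child, depth + 1)

print_tree(FAIR_TAXONOMY)
```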
The FAIR Basic Risk Assessment Methodology proceeds in four stages (a worked code sketch follows the list):
- Stage 1: Identify scenario components
  - Identify the asset at risk
  - Identify the threat community under consideration
- Stage 2: Evaluate Loss Event Frequency (LEF)
  - Estimate the probable Threat Event Frequency (TEF)
  - Estimate the Threat Capability (TCAP)
  - Estimate Control Strength (CS)
  - Derive Vulnerability (Vuln)
  - Derive Loss Event Frequency (LEF)
- Stage 3: Evaluate Probable Loss Magnitude (PLM)
  - Estimate worst-case loss
  - Estimate Probable Loss Magnitude (PLM)
- Stage 4: Derive and articulate risk
  - Combine LEF and PLM to derive risk and communicate it to stakeholders
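As a worked illustration of Stages 2 through 4, the Monte Carlo sketch below derives Vulnerability per trial by comparing Threat Capability against Control Strength, multiplies TEF by Vulnerability to get LEF, and combines LEF with PLM into an annualized loss distribution. The triangular distributions and every min/likely/max estimate are hypothetical assumptions, not FAIR-prescribed values:

```python
import random

N = 100_000
losses = []
for _ in range(N):
    tef = random.triangular(1, 12, 4)       # Stage 2: threat events per year
    tcap = random.triangular(30, 90, 60)    # threat capability (percentile)
    cs = random.triangular(40, 95, 70)      # control strength (percentile)
    vuln = 1.0 if tcap > cs else 0.0        # event becomes a loss if TCAP > CS
    lef = tef * vuln                        # loss events per year, this trial
    plm = random.triangular(10_000, 500_000, 50_000)  # Stage 3: loss per event
    losses.append(lef * plm)                # Stage 4: annualized loss

losses.sort()
print(f"Expected annual loss: ${sum(losses) / N:,.0f}")
print(f"95th percentile loss: ${losses[int(0.95 * N)]:,.0f}")
```

Reporting percentiles rather than a single number keeps the framing of risk as a probability, rather than a certainty, front and center.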
The Pros and Cons of the FAIR Framework
Why FAIR makes sense:
- FAIR plugs in and enhances existing risk management frameworks.
- FAIR has a solid taxonomy, codified in a technical standard.
- FAIR leverages analytics to determine risk and risk rating.
- FAIR allows organizations to make decisions when hard data is lacking by deriving risk from calibrated estimates, following the model Risk = Loss Event Frequency × Probable Loss Magnitude.
Why some may not see the benefit:
- FAIR is not a methodology for performing organizational or individual risk assessments.
- FAIR does a great deal of “estimating” which can be perceived as guessing.
- FAIR requires a tightly defined taxonomy to function.
- FAIR technical standards describe risk and relationships but lack real measurements and assessment methodologies.
A risk management program is critical to an organization’s information security strategy. The FAIR model has been shown to augment and strengthen existing risk management initiatives, especially ones that focus on analytics. Like most models, FAIR is far from perfect. Some find the reliance on estimates and lack of metrics difficult to justify.
Others look at the model as a great way of building on top of an existing risk management framework, with the goal of enhancing analysis and establishing a common taxonomy. The Open Group continues to champion the use of the FAIR framework for foundational risk analyses. Chief Information Security Officers (CISOs) are continually challenged to provide risk mitigation strategies that board members can understand. Overall, FAIR offers a clear taxonomy and technical standard for communicating risk to board-level executives while supporting concrete risk mitigation strategies.