The Four Dimensions of Vulnerability Prioritization
No one is happy with the current state of vulnerability management. It's a scale problem: there are simply too many vulnerabilities. At a global level, we continue to discover more than 50 new ones every day. And it's not just the discovery rate. IT teams don't have the time or resources to patch more than a small subset of them, and even if they did, most organizations couldn't tolerate the disruption to business operations that implementing every fix would entail.
The biggest challenge in vulnerability management is prioritization: identifying the vulnerabilities that pose the greatest risk to the business and fixing them first. To better align vulnerability management with risk management, we must consider four critical dimensions: severity, exploitability, context and controls.
Understanding Vulnerability Severity and Exploitability
The first two considerations for vulnerability prioritization, severity and exploitability, are already well-established.
The availability of reliable data on severity dates back almost 20 years to the first release of the Common Vulnerability Scoring System (CVSS). Its 0–10 scoring scale, running from no risk to critical, is well established as a key dimension.
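As a concrete illustration, the CVSS v3.x specification maps numeric base scores onto qualitative severity ratings. A minimal Python sketch of that mapping:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating,
    per the ranges defined in the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base scores range from 0.0 to 10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

For example, `cvss_severity(9.8)` returns `"Critical"`, while `cvss_severity(5.0)` returns `"Medium"`.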
In more recent years, we've added the exploitability dimension. We no longer prioritize solely on the severity of a potential impact; we also account for how easy or difficult a vulnerability is to exploit, and increasingly for how widely it is being exploited in the wild.
The CVSS score itself has evolved to support the exploitability dimension. This has been further improved on by publicly available resources from the Forum of Incident Response and Security Teams (FIRST) and recent work by the Cybersecurity and Infrastructure Security Agency (CISA).
FIRST's Exploit Prediction Scoring System (EPSS) is driven by the observation that only 2%–7% of published vulnerabilities are ever seen exploited in the wild. First presented at Black Hat in 2019, EPSS is a community-driven effort to combine descriptive information about vulnerabilities with evidence of actual in-the-wild exploitation. It seeks to improve vulnerability prioritization by estimating the likelihood that a given vulnerability will be exploited.
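EPSS publishes, for each CVE, a probability that it will be exploited in the near term. A minimal sketch of EPSS-driven triage, using made-up scores and CVE IDs rather than the live FIRST data feed:

```python
# Illustrative EPSS-style records: each CVE gets an exploitation
# probability between 0 and 1. These numbers are invented for the example.
epss_scores = {
    "CVE-2024-0001": 0.96,
    "CVE-2024-0002": 0.02,
    "CVE-2024-0003": 0.41,
}

def triage_by_epss(cves, scores, threshold=0.1):
    """Return the subset of CVEs whose EPSS probability meets the
    threshold, sorted most-likely-to-be-exploited first."""
    hot = [c for c in cves if scores.get(c, 0.0) >= threshold]
    return sorted(hot, key=lambda c: scores[c], reverse=True)
```

With the sample data above, `triage_by_epss` drops the 2%-probability CVE and puts the 96% one at the head of the queue, which is exactly the "small exploited subset" insight EPSS encodes.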
Then in November 2021, CISA launched the Known Exploited Vulnerabilities (KEV) catalog. Created to impose mandatory remediation timelines on U.S. federal agencies, the machine-readable KEV catalog is publicly available and is already seeing significant adoption in the private sector as a source of best practice.
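Because the KEV catalog is machine-readable JSON, with entries listed under a `vulnerabilities` key, folding it into triage is straightforward. A simple membership check against a trimmed, illustrative stand-in for the feed:

```python
# A trimmed, illustrative stand-in for CISA's KEV JSON feed, which lists
# entries under a "vulnerabilities" key, each carrying a "cveID".
kev_feed = {
    "vulnerabilities": [
        {"cveID": "CVE-2021-44228", "vulnerabilityName": "Apache Log4j2 RCE"},
        {"cveID": "CVE-2023-4863", "vulnerabilityName": "WebP heap overflow"},
    ]
}

def in_kev(cve_id, feed):
    """True if the CVE appears in the KEV catalog, i.e. it is known to be
    exploited in the wild and should jump the remediation queue."""
    return any(v["cveID"] == cve_id for v in feed["vulnerabilities"])
```

In practice you would fetch the current feed from CISA rather than embed a snapshot; the check itself stays the same.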
As important as these are, severity and exploitability are both theoretical, external, universal dimensions of the problem. Neither takes any account of the internal business context that is a key element in determining the level of risk that a vulnerability poses.
For example, a vulnerability with a high severity or exploitability score could sit on a non-critical asset, such as a dev machine with no access to sensitive datasets. Compare that to another machine carrying medium-severity vulnerabilities but playing a critical role in supporting a share-price-affecting application.
The flaw with today’s two-dimensional approach is that it tends to prioritize the first when, actually, it’s the second that is more critical. In other words, the two-dimensional model is driving decision-making that is flawed from a risk management perspective.
The 4-D Vulnerability Prioritization Model
By adding two further dimensions to technical severity and exploitability, organizations can do a better job of prioritizing their vulnerability management process.
The third dimension is the organization's unique contextual risk, which leverages insight into the asset inventory to derive specific context around each asset. This should reflect not just the risk to the vulnerable asset itself, but also the all-important mesh of relationships it has with other assets: other compute assets, networks and network infrastructure, sensitive datasets, software components, services and users.
Key considerations to understand context also include:
- How critical are the vulnerable assets?
- Do they support critical systems or tier-one applications?
- Do they have access to sensitive datasets?
- Are there high-value individuals accessing and leveraging these assets?
- What is the blast radius associated with the asset?
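The questions above can be folded into a simple weighting. The weights below are assumptions chosen for illustration, not an established standard; any real deployment would tune them to the organization:

```python
def context_multiplier(asset: dict) -> float:
    """Illustrative context weighting: each affirmative answer to the
    context questions raises the asset's risk multiplier. The specific
    weights are assumptions for the sake of the example."""
    m = 1.0
    if asset.get("tier_one_app"):       # supports a critical system?
        m += 1.0
    if asset.get("sensitive_data"):     # access to sensitive datasets?
        m += 1.0
    if asset.get("high_value_users"):   # high-value individuals use it?
        m += 0.5
    if asset.get("internet_facing"):    # directly exposed?
        m += 0.5
    # Rough blast-radius proxy: count of downstream assets reachable from it.
    m += 0.25 * asset.get("downstream_assets", 0)
    return m
```

An isolated dev box scores the baseline 1.0, while an internet-facing machine behind a tier-one application scores well above it.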
A vulnerability embedded in an Internet-facing production machine represents a very different risk to one that’s embedded in a virtual server in a pre-production development environment, for example.
The fourth and final dimension is compensating security controls. Consider again two assets that represent the same level of risk to the organization in terms of the first three dimensions, but where the first is protected by three or four distinct layers of security controls while the second is protected by only one, or none. Effective risk-based prioritization demands that the organization fix the second vulnerability before the first.
A model that correlates inputs across all four of these dimensions represents a new way forward for IT and security teams to build a high-fidelity approach to vulnerability prioritization. It gives teams much higher confidence in building a continuous, iterative approach to vulnerability management, one that hardens cybersecurity posture far more effectively than the legacy two-dimensional model.
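One way to sketch such a correlation: a single ranking score in which severity and exploitability set the baseline, context scales the risk up, and compensating controls scale it down. The formula, weights and example figures are illustrative assumptions, not a definitive scoring method:

```python
def priority_score(vuln: dict) -> float:
    """Combine the four dimensions into one ranking score.
    All weights here are assumptions for illustration only."""
    base = vuln["cvss"] * (1 + vuln["epss"])  # severity x exploitability
    if vuln["in_kev"]:
        base *= 2                             # known-exploited gets a boost
    base *= vuln["context_multiplier"]        # dimension 3: business context
    base /= (1 + vuln["control_layers"])      # dimension 4: compensating controls
    return base

# The two machines from the earlier example, with invented numbers:
dev_box = {"cvss": 9.8, "epss": 0.10, "in_kev": False,
           "context_multiplier": 1.0, "control_layers": 3}
prod_box = {"cvss": 5.5, "epss": 0.40, "in_kev": False,
            "context_multiplier": 3.0, "control_layers": 0}
```

Ranking by this score puts the medium-severity production machine ahead of the critical-severity dev box, which is the risk-based ordering the four-dimensional model argues for.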