Vulnerability Prioritisation in Practice: CVSS, EPSS, KEV and SSVC
- Christopher Clarkson
- Mar 10
- 11 min read
Episode 6 of the CAXA Technologies Security Operations Series
A vulnerability with a CVSS score of 9.8 and an EPSS score of 0.3% is telling you two different things, and one of them is more useful than the other.
Every episode in this series has made the same assertion: CVSS alone is an insufficient basis for prioritisation. Episodes 1, 2, and 3 each stated it with increasing specificity. What none of them did was deliver a working alternative. This episode closes that gap.
The CVSS failure mode is structural. The Base Score measures the intrinsic severity of a vulnerability’s technical properties, independent of any attacker or any specific environment, and that score does not change as the threat landscape evolves. A vulnerability published in 2019 with no public exploit carries the same score it did on publication day, regardless of whether it has attracted any exploitation interest since. When you consider that FIRST.org’s EPSS model estimates only around 5% of all published CVEs are ever observed to be exploited in the wild, and that on any given day 88% of CVEs in the EPSS dataset carry a 30-day exploitation probability below 10%, the consequence of CVSS-only triage becomes clear: the overwhelming majority of remediation capacity directed at “critical” findings is aimed at vulnerabilities that attackers are ignoring.
The mismatch is systematic. Remediation teams work through backlogs ordered by severity scores, consuming finite change windows on findings that are severe in theory but negligible in the actual threat environment. The small number of vulnerabilities where attackers have deployed active tools receive the same queue position as findings with no documented exploitation. This is not a resource problem. It is a signal problem.
Episode 1 established that the only sustainable response to the asymmetry between attack and defence is prioritisation rigour. This episode delivers the specific tools that rigour requires: a numeric model combining CVSS, the Exploit Prediction Scoring System (EPSS), and the CISA Known Exploited Vulnerabilities catalogue; and a decision-tree model from Carnegie Mellon’s Software Engineering Institute, known as SSVC. They answer the same question through different means, and understanding both is necessary for choosing between them.
The Numeric Model: CVSS, EPSS, and the KEV
CVSS measures the intrinsic severity of a vulnerability’s technical properties. The Base Score components (attack vector, attack complexity, privileges required, user interaction, and impact scope) describe the vulnerability mechanism, not the exploitation reality. The Base Score carries no information about whether exploitation tooling exists, whether the vulnerability has been incorporated into active attack campaigns, or whether your environment has any meaningful exposure to it. Its role in a mature programme is as a severity starting point and a shared language for describing technical impact, not a prioritisation signal in its own right. CVSS assigns the same score to a vulnerability being actively weaponised by ransomware groups as it does to one that has sat in the corpus for years with no public exploit and no recorded exploitation anywhere. Using it as the primary decision variable treats those two situations as equivalent, which they are not.
EPSS addresses a fundamentally different question. Where CVSS asks how severe a vulnerability is in principle, EPSS asks how likely it is to be exploited in the next 30 days. The score is derived from observed attacker behaviour (threat intelligence feeds, published proof-of-concept code, dark web signals, honeypot data) and updated daily by FIRST.org. It does not measure severity or impact. It measures where attackers are pointing their tools. The FIRST.org dataset covers over 317,000 CVEs. Of those, 88% carry an exploitation probability below 10%, and only around 1.7% score above 50%. Most vulnerabilities cluster near zero, and crossing 20% places a finding in rare company. A vulnerability with CVSS 7.0 and EPSS 65% should typically outrank one with CVSS 9.5 and EPSS 0.8%, because EPSS reflects where attackers have deployed working exploitation capability. The higher CVSS score tells you only that the mechanism is more severe in the abstract.
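As a concrete illustration, EPSS scores are available from the public FIRST.org API at no cost. The sketch below assumes the documented endpoint and response fields (`epss`, `percentile`) are still current; treat the field names as assumptions if the API has changed since writing.

```python
import json
import urllib.request

# Public EPSS endpoint published by FIRST.org; no API key required.
EPSS_API = "https://api.first.org/data/v1/epss"

def parse_epss(payload: dict) -> dict:
    """Map an EPSS API response to {cve_id: {'epss': ..., 'percentile': ...}}."""
    return {
        rec["cve"]: {"epss": float(rec["epss"]), "percentile": float(rec["percentile"])}
        for rec in payload.get("data", [])
    }

def fetch_epss(cve_id: str) -> dict:
    """Fetch the daily EPSS score for a single CVE from FIRST.org."""
    with urllib.request.urlopen(f"{EPSS_API}?cve={cve_id}", timeout=10) as resp:
        return parse_epss(json.load(resp))
```

Because the scores are refreshed daily, a lookup like this belongs in a scheduled job rather than a one-off query.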
The CISA Known Exploited Vulnerabilities catalogue is the most decisive of the three signals. Inclusion is not based on theoretical risk or modelled probability; it requires CISA to have confirmed real-world active exploitation with supporting evidence. The catalogue contained 1,529 entries at the time of writing, against a corpus of over 300,000 published CVEs. Around 0.5% of all CVEs have cleared this bar. When a finding appears in the KEV catalogue, the operational instruction is straightforward: emergency or urgent priority, regardless of CVSS score. There are two failure modes worth naming. The first is ignoring a KEV entry because its CVSS score looks moderate. A KEV entry at CVSS 6.5 represents confirmed active exploitation, which matters more than a CVSS 9.8 finding with no evidence of attacker interest. The second failure mode is operational: not automating KEV alerting, so that new additions surface only in a weekly report reviewed days after publication. Some high-profile vulnerabilities are added to the KEV within days of first confirmed exploitation. A weekly review cadence introduces exposure that is straightforward to avoid.
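Automating the KEV check is a small amount of code. CISA publishes the catalogue as a JSON feed; the URL and the `vulnerabilities`/`cveID` field names below reflect the published feed at time of writing and should be verified against the current schema.

```python
import json
import urllib.request

# Published location of the CISA KEV JSON feed (may change; verify before relying on it).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_ids(raw_json: str) -> set:
    """Extract the set of CVE IDs from the KEV catalogue JSON."""
    catalogue = json.loads(raw_json)
    return {entry["cveID"] for entry in catalogue.get("vulnerabilities", [])}

def fetch_kev_ids() -> set:
    """Download the live KEV catalogue and return its CVE IDs."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        return load_kev_ids(resp.read().decode())
```

Run on a schedule and diffed against the previous pull, this turns KEV additions into same-day alerts rather than a weekly report item.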
The combined decision hierarchy works as a simple sequence. Does the finding appear on the CISA KEV catalogue? If yes, the priority is Emergency or Urgent and no further deliberation is needed. If not, is EPSS elevated above approximately 20%? If yes, priority rises regardless of CVSS score. If neither condition applies, CVSS provides the severity band as a starting point, adjusted by asset context.
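The sequence is simple enough to encode directly. In the sketch below, the priority labels and CVSS bands are illustrative assumptions standing in for whatever tiers a programme actually uses; the 20% EPSS threshold is the approximate figure from the hierarchy above.

```python
def triage(cvss: float, epss: float, in_kev: bool, epss_threshold: float = 0.20) -> str:
    """Three-signal priority per the hierarchy above (labels are illustrative)."""
    if in_kev:
        return "Emergency"   # confirmed active exploitation: no further deliberation
    if epss >= epss_threshold:
        return "Urgent"      # elevated exploitation probability, regardless of CVSS
    # Neither threat signal fires: fall back to the CVSS severity band as a
    # starting point, to be adjusted by asset context.
    return "High" if cvss >= 9.0 else "Routine"
```

Note the ordering: the function never consults CVSS until both threat signals have been ruled out, which is the whole point of the hierarchy.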
The three CVEs below show how this logic produces different outcomes from the same scanner output.
CVE-2022-37966 is a Microsoft Windows Kerberos elevation of privilege vulnerability with a CVSS Base Score of 8.1, an EPSS score of 1.38% (verified 2 March 2026), and no entry in the KEV catalogue. In a scanner report sorted by CVSS, it sits near the top of the queue with an authoritative-looking “High” classification. Under the numeric model, it stays in the routine prioritisation queue. The CVSS score reflects genuine severity in the vulnerability mechanism, but an EPSS score of 1.38% places the finding among the 88% of published CVEs whose exploitation probability sits below 10%, the unremarkable bulk of the corpus. The number commands attention; the threat reality does not support it.
CVE-2021-26855, the ProxyLogon Server-Side Request Forgery in Microsoft Exchange, carries a CVSS score of 9.8, an EPSS score of 94.31% (99.95th percentile), and appears prominently in the KEV catalogue. Under the numeric model it reaches Emergency through two independent signal paths; KEV membership alone would be sufficient. The only question this finding poses for a VM programme is an execution question: how quickly can the fix be applied. It is not a triage question.
CVE-2023-4966, the Citrix Bleed session token disclosure in NetScaler ADC and Gateway, has a CVSS score of 9.4, an EPSS score of 94.35% (99.96th percentile), and a KEV entry added in October 2023. All three signals are aligned. The vulnerability allows session token leakage that enables full session hijacking, bypassing authentication entirely. Any organisation using NetScaler for remote access was running a perimeter device with an open authentication bypass that attackers had already confirmed at scale.
CVE-2022-37966 stays in the routine queue. CVE-2021-26855 and CVE-2023-4966 both reach Emergency. The contrast is not the result of algorithmic complexity. It is three questions applied consistently.
The Decision-Tree Model: SSVC
SSVC (the Stakeholder-Specific Vulnerability Categorisation) was developed by CERT/CC at Carnegie Mellon University’s Software Engineering Institute. Its most significant difference from the numeric model is that SSVC produces no score at all. It produces a triage action directly. The four possible outcomes for a Deployer (the relevant stakeholder tree for VM practitioners) are Immediate, Out-of-Cycle, Scheduled, and Defer. Rather than ranking how alarming a vulnerability looks, the framework forces a decision about what to do with it. A CVSS 9.8 invites the organisation to rank the finding against other 9.8s. An SSVC outcome of Immediate requires it to act.
The Deployer tree works through four decision points in sequence. The first is Exploitation: does the vulnerability have None, a Public PoC, or Active exploitation? This is the most discriminating signal in the tree. Active exploitation forces the tree toward Immediate for any system with meaningful exposure and impact. A Public PoC raises urgency substantially without reaching that threshold; tooling exists and is accessible to attackers, but confirmed campaigns have not yet followed.
The second decision point is System Exposure: how accessible is the affected system to attackers? Open means reachable from the internet without authentication barriers. Controlled means accessible internally but not directly externally. Small means accessible only to a narrow, authenticated, internally-managed population. This is the context that CVSS never considers, and it is what makes the same CVE a different risk depending on where it sits in your estate.
The third is Automatable: can an attacker reliably exploit the vulnerability at internet scale without per-target customisation? A vulnerability that is automatable is accessible to low-sophistication actors with mass-scanning tooling. One that requires per-target reconnaissance limits the attacker’s scale, even if public exploit code exists.
The fourth is Human Impact: what would the combined safety and mission impact on this organisation be if the vulnerability were exploited? The four levels (Low, Medium, High, Very High) need to reflect what the affected system actually does. A Critical CVSS vulnerability carries Very High Human Impact on a core payment processing system and Low Human Impact on an isolated development server. This is where SSVC requires asset context, and where it connects directly to the Asset Management pillar.
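The official SSVC Deployer tree enumerates every combination of decision-point values explicitly; the sketch below collapses it into the logic the four decision points describe. The branch structure is therefore an assumption for illustration, not a reproduction of the published tree, and a real implementation should be checked against the CERT/CC documentation.

```python
def ssvc_deployer(exploitation: str, exposure: str,
                  automatable: bool, human_impact: str) -> str:
    """Illustrative, collapsed sketch of the SSVC Deployer tree.
    exploitation: 'none' | 'poc' | 'active'
    exposure:     'open' | 'controlled' | 'small'
    human_impact: 'low' | 'medium' | 'high' | 'very high'
    """
    if exploitation == "active":
        # Active exploitation forces Immediate for meaningful exposure/impact...
        if human_impact in ("high", "very high") or exposure == "open":
            return "Immediate"
        # ...but a genuinely isolated, low-impact system can still defer.
        if exposure == "small" and human_impact == "low":
            return "Defer"
        return "Out-of-Cycle"
    if exploitation == "poc":
        # Tooling exists and is accessible, but confirmed campaigns have not followed.
        if exposure == "open" and automatable and human_impact in ("high", "very high"):
            return "Out-of-Cycle"
        return "Scheduled"
    # No PoC, no active exploitation.
    return "Defer" if human_impact == "low" else "Scheduled"
```

Even as a sketch, the function makes the framework's character visible: the output is an action, not a number, and Exploitation is the branch that does most of the work.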
The trade-off is real and worth being clear about. SSVC is harder to implement at scale than a matrix lookup. Completing the tree for a novel finding requires someone to have already thought through system exposure and human impact for each asset category, and most organisations have not done that work systematically. Programmes that have invested in the Asset Management pillar (which Episode 3 established as the foundation the others rest on) already hold most of the necessary data in some form. Asset criticality tiers, network exposure classifications, and business function mappings are SSVC inputs under different names. The decision-point answers are not new data; they are existing data organised around different questions. The remaining work is to document Human Impact as a reviewed judgement by system category rather than an analyst’s working assumption. That is not a large undertaking, but it is not nothing either. The first few SSVC evaluations typically surface assumptions that have never been written down.
The same three CVEs clarify how the tree works in practice.
For CVE-2022-37966, the Kerberos elevation of privilege: Exploitation is None; there is no public PoC and no active exploitation recorded in the KEV catalogue or in observed threat intelligence. For most enterprise environments, System Exposure is Controlled, since Active Directory services are internal by design. Automatable is No for this vulnerability mechanism. The tree routes toward Scheduled or Defer depending on the Human Impact assessment for the affected systems. This is the same conclusion as the numeric model: despite the CVSS 8.1 classification, neither framework recommends emergency priority.
For CVE-2021-26855, ProxyLogon: Exploitation is Active; the vulnerability was subject to widespread exploitation by multiple threat actor groups and appears in the KEV. System Exposure for Microsoft Exchange is Open; Exchange is internet-facing by design. Automatable is Yes; ProxyLogon was incorporated into mass-scanning tooling within weeks of disclosure. Human Impact for an internet-facing mail server in any organisation of meaningful size is Very High, given that email infrastructure handles sensitive communications and often provides authentication pathways into broader systems. The SSVC outcome is Immediate, which matches the numeric model.
For CVE-2023-4966, Citrix Bleed: Exploitation is Active. System Exposure for NetScaler ADC/Gateway is Open; it is a perimeter device deployed specifically to face the internet. Automatable is Yes; session token extraction at scale was demonstrated in active campaigns. Human Impact is Very High for any organisation using NetScaler for remote access, since the vulnerability enables complete session hijacking that bypasses multi-factor authentication. The SSVC outcome is Immediate. Same result as the numeric model.
The agreement across all three examples reflects the fact that both models are reading the same threat reality, just through different lenses. Where they would diverge is worth understanding. Consider a KEV entry on a fully isolated internal test system with no production data and no network connectivity to the live estate. The numeric model classifies it as Emergency; KEV membership overrides everything else. SSVC routes to Defer or Scheduled; Human Impact is Low and System Exposure is Small. That divergence is not a problem to resolve. It is a question worth asking: is the isolation genuine, documented, and verified on an ongoing basis? The numeric model flags the vulnerability as important in principle. SSVC asks what it would actually mean for this organisation if it were exploited here. Both questions are useful, and the disagreement between them is diagnostic.
Choosing Your Approach
Neither framework is universally superior. The right choice depends on where the programme sits in its maturity and what it needs most.
The numeric model is the right starting point for programmes moving beyond CVSS-only triage. EPSS and the KEV catalogue are both free, updated daily, and available as APIs. The three-signal decision logic can be encoded into any SOAR, ITSM, or vulnerability management platform without significant effort. Where asset criticality data is still developing, the numeric model can compensate with rough tier approximations alongside EPSS signals, which produces meaningful prioritisation improvements without the foundational work that SSVC requires. It is also a natural automated pre-screening layer, classifying findings at volume before they reach human review.
SSVC is the right choice for programmes with well-defined asset tiers, documented exposure classifications, and mission impact assessments by system category. At this maturity level, most of the inputs SSVC needs are already in the programme; the framework provides the structured method for turning that data into consistent, auditable triage decisions. When a regulator or senior executive asks why a particular finding was classified as Scheduled rather than Immediate, the answer “Exploitation was None, System Exposure was Controlled, and Human Impact was Medium for this system category” is substantially more defensible than a percentile figure from a probability model.
The hybrid path is how many mature programmes actually operate, and it is a deliberate design choice rather than a compromise. KEV membership and EPSS above a defined threshold serve as automated pre-screening, escalating findings before they reach human review. The findings that arrive at human review are the ones where signals are mixed; those are where the structured reasoning of the SSVC Deployer tree adds most value. This is the same principle Episode 4’s operating model described for tooling: different methods serving distinct functions in the same workflow.
One thread worth pulling here: the inputs SSVC requires (system exposure classification, human impact by asset category, exploitation status) are precisely what a mature asset inventory should already capture. The Five Pillars are not independent capabilities. A programme that invests in Asset Management is building the input layer that every downstream pillar depends on. Sophisticated prioritisation is available only to programmes that have done the foundational classification work first.
Scoring Systems Are Inputs, Not Answers
The series has argued from its opening episode that Vulnerability Management is a decision-making discipline. Scoring systems, whether numeric or decision-tree, are inputs to decisions, not substitutes for them. A programme that uses CVSS alone but applies it with documented reasoning, consistent thresholds, and reliable follow-through will outperform one that has EPSS, KEV, and SSVC integrated across its tooling but lacks shared understanding of what those signals mean or how to act on them. Better frameworks produce better decisions only when the programme knows what to do with the output.
Three Things to Do This Week
Pull the EPSS score for every item currently sitting in your critical backlog. Calculate what proportion would change priority tier under the three-signal model. The number is usually surprising, and it puts a concrete figure on the gap between CVSS-driven prioritisation and exploitation-reality-driven prioritisation.
Check your current top twenty critical findings against the CISA KEV catalogue. Any matches should receive immediate review regardless of their current queue position. The exercise should also prompt a conversation about whether KEV alerting is automated or manual; if it is manual, that is a straightforward process gap to close.
For your Tier 1 systems, sketch the SSVC Deployer tree on paper without looking up any scores. Walking through the four decision points will surface assumptions about system exposure and human impact that have never been written down and that no scanner will surface for you.
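The first of these actions lends itself to a short script. The sketch below assumes the EPSS API accepts a comma-separated `cve` parameter for batch queries; the batch size of 50 is an arbitrary conservative choice, not a documented limit.

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def backlog_epss(cve_ids: list, batch: int = 50) -> dict:
    """Fetch EPSS scores for a backlog of CVE IDs in batches."""
    scores = {}
    for i in range(0, len(cve_ids), batch):
        chunk = ",".join(cve_ids[i:i + batch])
        with urllib.request.urlopen(f"{EPSS_API}?cve={chunk}", timeout=30) as resp:
            for rec in json.load(resp)["data"]:
                scores[rec["cve"]] = float(rec["epss"])
    return scores

def share_above(scores: dict, threshold: float = 0.20) -> float:
    """Proportion of the backlog at or above the EPSS escalation threshold."""
    if not scores:
        return 0.0
    return sum(1 for s in scores.values() if s >= threshold) / len(scores)
```

The `share_above` figure is the concrete number the exercise asks for: how much of the “critical” backlog the threat signals actually support.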
The Vulnerability Management Jargon Buster and Quick Reference Guide, a companion to this series, consolidates the terminology, scoring systems, metrics formulas, and frameworks referenced across all six episodes into a single practitioner reference. The next episode in the series examines what happens when a significant proportion of your estate is invisible to traditional scanning, and why cloud environments break the detection assumptions that CVSS, EPSS, KEV, and SSVC all depend on.
