Why Your Vulnerability Backlog Is Lying to You
- Christopher Clarkson
- Apr 21
- 4 min read
CAXA Technologies Security Operations Series
Your vulnerability backlog is not one number. It is at least nine numbers, and the one you are probably reporting is the least useful of the nine.
Most vulnerability management dashboards surface a single figure: total open findings. That number goes up when new CVEs arrive, down when tickets close. Leadership watches it trend. Security teams are measured against it. Regulators ask about it. And in almost every case, it tells you less about your actual risk position than the three segmentations underneath it.
The 3x3 problem
A typical vulnerability estate has three severity tiers (critical, high, and medium/low) and three asset tiers (production systems handling sensitive data, internal systems, development/non-production infrastructure). That is nine distinct populations, each with a different risk profile, different SLA obligations, and different business consequences if they breach. The aggregate count collapses all nine into one number.
The aggregate is, at best, a denominator: a count of how much work exists, not a measure of whether that work is the right work or whether it is being done in the right order.
The direction the total is moving tells you almost nothing in isolation. A shrinking total backlog where the critical segment is growing is a worse position than a growing total where the critical segment is shrinking. The total has improved in the first case and deteriorated in the second; the risk position has done the opposite.
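To make the nine populations concrete, here is a minimal sketch, assuming findings arrive as flat records with a severity and an asset tier (the field names are illustrative, not any particular scanner's schema):

```python
from collections import Counter

# Illustrative findings; in practice this is your scanner export.
findings = [
    {"severity": "critical", "tier": 1},
    {"severity": "critical", "tier": 1},
    {"severity": "high",     "tier": 3},
    {"severity": "medium",   "tier": 2},
    {"severity": "medium",   "tier": 3},
]

# The single number most dashboards report...
total = len(findings)

# ...and the nine populations it collapses.
matrix = Counter((f["severity"], f["tier"]) for f in findings)

print(f"total open: {total}")
for (severity, tier), count in sorted(matrix.items()):
    print(f"  {severity:8} on tier {tier}: {count}")
```

Two snapshots of `matrix` taken a quarter apart tell you whether the critical-on-Tier-1 cell is shrinking. Two snapshots of `total` cannot.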
Severity: the direction of the critical segment is the signal
The first useful segmentation is by severity, specifically whether the critical finding count is moving in the right direction independently of the total.
One qualification: “critical” should be defined by the composite scoring model your programme uses, not by CVSS severity alone. Episode 6’s prioritisation model uses CVSS as the severity anchor, EPSS as the exploitation likelihood signal, and the KEV catalogue as a hard floor. A critical CVSS score with no exploitation signal is a different risk from a high CVSS score already appearing in KEV. If your severity segmentation relies on CVSS alone, you are sorting by theoretical severity rather than actual risk.
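As a rough sketch of how those three signals compose (the EPSS threshold, CVSS band boundaries, and escalation rule below are illustrative assumptions, not the exact parameters of the Episode 6 model):

```python
# Illustrative escalation threshold; tune to your programme's risk appetite.
EPSS_ESCALATION_THRESHOLD = 0.1

def composite_severity(cvss: float, epss: float, in_kev: bool) -> str:
    """CVSS anchors the band, EPSS escalates likely exploitation,
    KEV membership is a hard floor."""
    if in_kev:
        return "critical"  # known exploited: no further debate
    band = "critical" if cvss >= 9.0 else "high" if cvss >= 7.0 else "medium"
    if band == "high" and epss >= EPSS_ESCALATION_THRESHOLD:
        return "critical"  # high severity plus likely exploitation
    return band

print(composite_severity(cvss=9.8, epss=0.01, in_kev=False))  # critical via CVSS anchor
print(composite_severity(cvss=7.5, epss=0.72, in_kev=False))  # critical via EPSS escalation
print(composite_severity(cvss=6.4, epss=0.03, in_kev=True))   # critical via KEV floor
```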
The question you want the critical segment to answer is not “how many open vulnerabilities do we have?” It is “how many high-risk, likely-to-be-exploited vulnerabilities are open on systems that matter?”
Asset tier: same count, different risk
The second segmentation is by asset tier. Five hundred open vulnerabilities on development infrastructure is a fundamentally different risk position from five hundred on production systems processing customer payments or personal data. Most organisations report both as the same number.
NIST CSF 2.0’s Identify function requires exactly this contextualisation: asset management (ID.AM) and risk assessment (ID.RA) both specify that vulnerability data should be understood relative to the value and exposure of the assets carrying it. A flat count satisfies neither.
What you want operationally is to know the state of your Tier 1 estate independently of your Tier 3 estate. If Tier 1 is protected and Tier 3 is carrying backlog, that is an acceptable posture with a clear remediation path. If the reverse is true, that is a problem regardless of what the aggregate looks like.
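A minimal per-tier rollup, reusing the illustrative record shape from the earlier sketch:

```python
def tier_posture(findings: list[dict], tier: int) -> dict:
    """Report one tier's state independently of the rest of the estate."""
    in_tier = [f for f in findings if f["tier"] == tier]
    return {
        "tier": tier,
        "open": len(in_tier),
        "critical": sum(1 for f in in_tier if f["severity"] == "critical"),
    }

findings = [
    {"severity": "critical", "tier": 3},
    {"severity": "high",     "tier": 3},
    {"severity": "medium",   "tier": 1},
]

for tier in (1, 2, 3):
    print(tier_posture(findings, tier))
# Tier 1 clean while Tier 3 carries backlog is an acceptable posture;
# the reverse is a problem the aggregate count cannot show you.
```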
I have seen this shift the board conversation considerably. At a recent client, the initial vulnerability report used a single total open count. The number moved slowly because the estate was large, which created a misleading impression of stagnation even while the critical-on-Tier-1 segment was improving rapidly. Restructuring to a severity-by-asset-tier matrix changed the questions being asked from “why isn’t the number going down?” to “what does the critical segment on Tier 1 look like this quarter?” The first question has no useful answer. The second one does.
Age distribution: flow, not inventory
The third segmentation is age distribution. A count of five hundred open vulnerabilities tells you nothing unless you know how long they have been open.
Five hundred findings all under 14 days old, all within SLA, represents healthy operational flow. Five hundred findings open for six months, breaching SLA, is a programme failure. The total count is identical; the risk position is not.
At another client, with a significant number of open findings across an expansive cloud estate, the total count was functionally useless as an operational signal. The useful metric was the age distribution within the critical-on-Tier-1 segment: findings within SLA versus findings breaching it. The total became a footnote. The SLA breach count within the highest-risk cell became the operational metric the team managed against.
Young vulnerabilities are only a positive signal if they are within SLA. A large cohort of seven-day-old findings in a programme with a seven-day SLA for critical findings is not a healthy backlog; it is pressure. Age distribution works in conjunction with the SLA framework, not independently of it.
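A sketch of that combined view, with the seven-day SLA, the pinned reporting date, and the field names all as illustrative assumptions:

```python
from datetime import date, timedelta

TODAY = date(2025, 4, 21)   # pinned for reproducibility; use date.today() in practice
CRITICAL_SLA_DAYS = 7       # illustrative SLA, not a prescribed value

findings = [
    {"severity": "critical", "tier": 1, "opened": TODAY - timedelta(days=3)},
    {"severity": "critical", "tier": 1, "opened": TODAY - timedelta(days=180)},
    {"severity": "high",     "tier": 3, "opened": TODAY - timedelta(days=40)},
]

# The cell that matters: critical findings on Tier 1 assets.
cell = [f for f in findings if f["severity"] == "critical" and f["tier"] == 1]

within_sla, breaching = [], []
for f in cell:
    age = (TODAY - f["opened"]).days
    (within_sla if age <= CRITICAL_SLA_DAYS else breaching).append(age)

print(f"critical-on-Tier-1 within SLA: {len(within_sla)} (ages: {within_sla})")
print(f"critical-on-Tier-1 breaching:  {len(breaching)} (ages: {breaching})")
```

The breach count in that cell, not the length of `findings`, is the number the team manages against.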
What a defensible record actually looks like
The UK Cyber Security and Resilience Bill requires organisations to maintain a defensible, continuously updated asset inventory and to be able to evidence how vulnerabilities across that inventory were managed. A single aggregate count satisfies neither requirement.
The same applies across PCI-DSS, ISO 27001, and NIST CSF 2.0. Each requires that vulnerability management be contextualised against asset risk. The segmented view satisfies that requirement as a by-product of being operationally useful. You do not need to engineer it for audit. You need it to run the programme.
If your reporting must collapse to a single headline number, make it the critical-on-Tier-1 SLA breach count: findings assessed as high-risk by your composite scoring model, on your highest-value assets, that have exceeded their SLA. That number should be zero, or close to it, and trending downward. Everything else in the backlog is context for that figure.
FIRST’s 2026 CVE volume forecast puts the median at just under 60,000 new CVEs this year. The denominator is growing structurally. A stable total count in that environment could represent either improvement or deterioration. You cannot tell without the segmentation underneath it.
Your backlog is not one number. Make sure your reporting reflects that.
