Compliance as a Side Effect: How a Well-Run VM Programme Satisfies PCI-DSS, ISO 27001, NIS2, DORA, and the UK CSRB
- Christopher Clarkson
Episode 10 of the CAXA Technologies Security Operations Series
Most organisations that fail compliance audits for vulnerability management didn’t set out to fail them. Their programmes were just designed for the wrong job: built to satisfy a particular framework rather than to manage the actual risk. After taking VM programmes through PCI assessments, ISO 27001 certification cycles, and DORA risk and oversight reviews across more than a few regulated environments, the same pattern keeps showing up. A programme designed for security is easier to audit, produces better evidence, and quietly satisfies more frameworks at once than a programme designed around any single one of them.
That principle has run through this entire series. This is the episode where the principle stops being an assertion and becomes a worked demonstration: here is what good vulnerability management looks like in practice, and here is how each major framework finds, inside that programme, the specific things it is asking for.
The thing nobody puts in writing
The conversation between security and compliance teams usually goes one of two ways.
In the first, compliance brings a control catalogue and asks how each item is implemented. Security walks them through it, evidence gets gathered, gaps surface and get worked. This is the version that works.
In the second, compliance brings the catalogue and the existing security programme gets bent around it. Scan cadences fall back to whatever the framework permits as a minimum. SLAs end up matching the framework’s stated thresholds rather than what the threat profile justifies. Exception processes are designed for the audit instead of for how the team actually operates. On paper, the result is a compliant programme. In practice, it is a slightly weaker version of the security programme that existed before, with a separate evidence pack to maintain on top.
This is not incompetence. More often, compliance is the function that has the budget approval, the executive attention, and a fixed deadline. Security inherits all three by aligning itself to compliance, and pays for the privilege by inheriting the framework’s design assumptions instead of its own.
There is a third path, and it is the one that holds up. Build the programme around what good security genuinely requires. Document the methodology. Operate the controls. Capture evidence as a by-product of the work itself. When a framework arrives, the answer is to walk through the programme and let the evidence answer the questions. Do this once, properly, and the same answer covers PCI-DSS, ISO 27001, DORA, NIS2, and the UK Cyber Security and Resilience Bill.
The rest of this episode does that walk, framework by framework. The question each time is the same: what does this framework actually require, and what part of a Five Pillars programme is already producing it?
What PCI-DSS v4.0.1 actually asks
The current effective version of PCI-DSS is v4.0.1. The chronology is worth keeping straight, because it explains why teams are dealing with v4.0.1’s bite for the first time this year. PCI SSC retired v3.2.1 on 31 March 2024, published v4.0.1 as a limited revision in June 2024, and retired v4.0 itself on 31 December 2024. That left v4.0.1 as the only active version. Of the 64 new requirements that v4.0 introduced, 51 were future-dated and only became effective on 31 March 2025.
Four of those requirements matter directly for vulnerability management. They are usually treated separately. They are easier to think about together.
Requirement 6.3.3 sets the patch SLA for in-scope systems. Critical vulnerabilities (as ranked by the entity’s own risk-ranking process) are patched within one month of the patch becoming available. Other applicable patches follow within a timeframe defined by that same risk ranking. The standard deliberately does not prescribe what “critical” means. It requires the entity to define it and stick to its own definition.
Requirement 11.3.1 sets the scanning cadence: internal vulnerability scans at least once every three months, plus after any significant change. Resolution and rescan are explicit. High-risk and critical findings are addressed, and rescans confirm the resolution.
Requirement 11.3.1.1 governs everything that is not high-risk or critical. The risk-ranking process from 6.3.1 drives the timeline, and compensating controls are permitted where remediation is not feasible.
Requirement 12.3.1 is the v4.0 change that has caught the most teams off guard. Anywhere the standard says “as defined by the entity,” the entity has to produce a Targeted Risk Analysis explaining the choice. Scan cadence, exception duration, severity threshold, even the definition of a “significant change,” each one needs a TRA.
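A TRA is a document, but it helps to think of the register of TRAs as structured data, because the audit question is always the same: for every "as defined by the entity" parameter, is there a matching TRA? A minimal sketch of that coverage check, with entirely illustrative field names and values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TargetedRiskAnalysis:
    """One entry per 'as defined by the entity' choice. Fields are illustrative."""
    requirement: str    # the PCI-DSS requirement the choice sits under
    parameter: str      # what the entity is defining
    chosen_value: str   # the value the programme actually runs
    rationale: str      # why the risk analysis supports that value
    approved_by: str    # accountable owner
    review_due: date    # TRAs are revisited, not written once

tra_register = [
    TargetedRiskAnalysis(
        requirement="11.3.1",
        parameter="internal scan cadence",
        chosen_value="weekly authenticated scans",
        rationale="continuous exposure of internet-facing CDE segments",
        approved_by="Head of Security Operations",
        review_due=date(2026, 12, 31),
    ),
]

# Every entity-defined parameter without a TRA entry is an audit gap waiting
# to be found. The parameter names here are examples, not an exhaustive list.
defined_parameters = {"internal scan cadence", "exception duration", "severity threshold"}
covered = {t.parameter for t in tra_register}
missing = defined_parameters - covered
print(sorted(missing))  # → ['exception duration', 'severity threshold']
```

The useful property is that the check runs continuously, not once a year: any new entity-defined choice added to the programme without a TRA shows up immediately.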
A Five Pillars programme answers all four of these by default. Pillar 3, Risk Analysis and Prioritisation, is the documented risk-ranking methodology. Pillar 2, Vulnerability Assessment, runs scanning at a cadence well inside the quarterly minimum. Pillar 4, Remediation and Mitigation, runs SLAs that are tighter than 30 days for production-critical systems and explicit thresholds for everything else. Pillar 5, Measurement, holds together the evidence chain from scan finding through ticket through remediation through rescan.
The TRA point is the one many teams trip over. The risk-led methodology Episode 6 laid out, the multi-signal model combining CVSS, EPSS, and the CISA KEV catalogue, is not itself a control. It is the methodology a TRA is meant to document. Write that methodology down once, and the TRA is essentially already there.
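To make the multi-signal idea concrete, here is a minimal sketch of a risk-ranking function. The weights and thresholds are illustrative, not the series' actual model: KEV membership forces the top tier, and otherwise CVSS and EPSS combine into a single ordering score.

```python
# Hypothetical weighting: 60% normalised CVSS, 40% EPSS probability.
# A real model would be calibrated against the estate; this shows the shape.

def risk_rank(cvss: float, epss: float, in_kev: bool) -> str:
    if in_kev:
        return "critical"   # known exploited: the patch SLA clock starts now
    score = 0.6 * (cvss / 10.0) + 0.4 * epss
    if score >= 0.7:
        return "critical"
    if score >= 0.4:
        return "high"
    return "standard"

print(risk_rank(cvss=9.8, epss=0.02, in_kev=False))  # high CVSS, low exploit likelihood → "high"
print(risk_rank(cvss=6.5, epss=0.90, in_kev=False))  # moderate CVSS, exploitation probable → "critical"
print(risk_rank(cvss=5.0, epss=0.01, in_kev=True))   # KEV overrides both signals → "critical"
```

Writing the function down is, in effect, writing the TRA: the parameters, the rationale for combining them, and the thresholds are exactly what 12.3.1 wants documented.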
When the QSA sits down with the evidence binder, what they ask for is patterned predictably:
- The risk-ranking methodology document
- Scan reports for the previous twelve months, demonstrating cadence
- Authenticated scanning evidence where in scope
- Remediation evidence: ticket records or change records showing patches applied within SLA
- Rescan evidence
- Targeted Risk Analyses for any “as defined by entity” choice
- The exception register, with owner, expiry, and compensating control for each entry
A programme that already has Pillars 1 to 5 operating produces all of that without anyone running a separate audit project. The evidence is the natural exhaust of the programme working.
The trap is to invert the relationship. A team that designs scan cadence around the 90-day minimum is scanning quarterly because PCI told them to. A risk-led programme scans continuously because the threat surface demands it, and the quarterly evidence falls out as a subset of what is already being captured.
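The "evidence as exhaust" claim is testable. If findings, tickets, and rescans are linked records, a broken evidence chain is a query, not a discovery made during the assessment. A hedged sketch, with illustrative field names rather than any particular tool's schema:

```python
# Cross-link scan findings to remediation tickets and rescan results, and
# surface any finding whose chain is incomplete before the QSA does.

findings = [
    {"id": "F-101", "cve": "CVE-2026-0001", "severity": "critical"},
    {"id": "F-102", "cve": "CVE-2026-0002", "severity": "high"},
]
tickets = {"F-101": {"closed": True}}        # F-102 never got a ticket
rescans = {"F-101": {"resolved": True}}

def evidence_gaps(findings, tickets, rescans):
    gaps = []
    for f in findings:
        if f["id"] not in tickets:
            gaps.append((f["id"], "no remediation ticket"))
        elif f["id"] not in rescans:
            gaps.append((f["id"], "no rescan evidence"))
    return gaps

print(evidence_gaps(findings, tickets, rescans))  # → [('F-102', 'no remediation ticket')]
```

Run continuously, this is Pillar 5 doing its job; run once in the week before the assessment, it is the audit project the rest of the episode argues against.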
What ISO 27001 A.8.8 actually expects
Annex A control 8.8 of ISO 27001:2022, Management of Technical Vulnerabilities, is written sparely. Get information about technical vulnerabilities. Evaluate the organisation’s exposure. Take appropriate measures. Three clauses. No timelines. No scoring models. No prescribed cadence.
What a certification audit actually asks for is consistent across certification bodies and well documented in their guidance. The auditor wants to see:
- An approved Vulnerability Management Policy or Patch Management Standard with explicit remediation SLAs by severity
- A current list of systems being monitored
- Historic scan logs showing consistent execution intervals over the last twelve months
- Recent scan results showing identified flaws
- Rescan reports showing resolution
- Tickets closed within the SLA windows defined in the policy
- Comparative reporting of date identified versus date closed, cross-referenced against policy SLAs
- Documented justifications for any vulnerabilities not fixed (the risk acceptance log)
- Management review evidence: minutes from a Management Review Meeting, or a security dashboard presented to leadership within the last six months
Every line on that list is the output of a Pillar that exists for security reasons. Pillar 1 produces the system inventory. Pillar 2 produces the scan history. Pillar 4 produces the remediation evidence. Pillar 5 produces the dashboard and the management reporting. The risk acceptance log is the exception register Episode 4 put inside the operating model.
A programme that uses an EPSS-weighted prioritisation model, tier-based SLAs, and tracked compensating controls is materially more rigorous than ISO A.8.8 actually requires. The standard sets a deliberately low bar: it certifies that you have a process and that the process is operating. Anything beyond that is a security choice the organisation has made, not something the auditor came in expecting to find.
The one place ISO certifications routinely fail is what auditors call the remediation gap: evidence that vulnerabilities were found, no matching evidence they were closed inside the policy’s own deadlines. A programme that measures MTTR by tier and severity, the way Episode 5 set out, does not have a remediation gap to find. The measurement itself produces the evidence.
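Measuring MTTR against the policy's own SLAs is a small computation over ticket dates. This sketch uses invented data and illustrative field names; the point is that the comparative report the auditor asks for is the same artefact the programme already produces for itself:

```python
from datetime import date
from collections import defaultdict

# Closed tickets with open/close dates (illustrative data).
closed = [
    {"severity": "critical", "opened": date(2026, 1, 2), "closed": date(2026, 1, 9)},
    {"severity": "critical", "opened": date(2026, 1, 5), "closed": date(2026, 1, 16)},
    {"severity": "high",     "opened": date(2026, 1, 3), "closed": date(2026, 1, 28)},
]
policy_sla_days = {"critical": 14, "high": 30}   # hypothetical policy SLAs

durations = defaultdict(list)
for t in closed:
    durations[t["severity"]].append((t["closed"] - t["opened"]).days)

for sev, days in sorted(durations.items()):
    mttr = sum(days) / len(days)
    breaches = sum(d > policy_sla_days[sev] for d in days)
    print(f"{sev}: MTTR {mttr:.1f} days, {breaches} SLA breach(es)")
```

Date identified versus date closed, cross-referenced against the policy: that is the remediation-gap evidence in one loop, and a zero in the breach column is the certification outcome.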
DORA, in the version that affects practitioners
DORA, Regulation (EU) 2022/2554, has been in force since 17 January 2025. Financial-sector practitioners are operating under it now, whether the surrounding compliance documentation has caught up or not.
Articles 5 to 10 set out the ICT risk management framework. Three of them matter directly for vulnerability management.
Article 5 puts ultimate responsibility for ICT risk on the management body. The board approves the digital operational resilience strategy, allocates budget, and is held to account for the outcomes. This is the same governance pattern Episode 4 placed in the operating model: a senior accountable owner for the decision not to patch.
Article 6 requires the ICT risk management framework to be comprehensive, documented, and subject to regular internal audit. The same pattern again: a programme that is documented for security reasons satisfies a framework that requires it to be documented.
Article 8 is where vulnerability management actually lives. Financial entities identify, classify, and document all ICT-supported business functions, information assets, and ICT assets, along with their roles and dependencies. They continuously identify the sources of ICT risk and assess cyber threats and ICT vulnerabilities relevant to those functions and assets.
Article 8 is Pillar 1 (Asset Management) and Pillar 2 (Vulnerability Assessment) restated in regulatory language. Any programme with an authoritative cloud asset register, an authenticated scanner with appropriate coverage, and a continuous identification cadence has answered Article 8 already.
The piece of DORA that pulls practitioner attention furthest from where it usually sits is the Critical ICT Third-Party Provider designation. On 18 November 2025, the ESAs published the first list: nineteen providers, including AWS, Google Cloud, and Microsoft, alongside data centre operators, telecommunications providers, and specialist financial-sector technology firms. The list will be updated annually.
The CTPP designation matters for VM in two specific ways. First, a financial-entity client of a designated provider is now required to record that dependency in its ICT asset register: relationship class, services in scope, substitutability assessment. The Pillar 1 inventory has to make the CTPP relationship visible, not just the underlying assets. Second, the designated provider’s own vulnerability disclosure cadence and patch SLAs become inputs to the financial entity’s exception process. A vulnerability in a CTPP service that the entity cannot patch itself is a compensating-control conversation, and the CTPP’s published timeline drives it.
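The register entry itself is simple once the required fields are named. This is an illustrative shape, not a DORA-prescribed schema, and the provider name is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CtppDependency:
    """A CTPP relationship recorded in the Pillar 1 asset register (illustrative fields)."""
    provider: str
    relationship_class: str                 # e.g. direct contract vs. via an intermediary
    services_in_scope: list = field(default_factory=list)
    substitutability: str = "unassessed"    # e.g. easy / difficult / non-substitutable
    patch_sla_reference: str = ""           # the provider's published timeline feeds exceptions

register = [
    CtppDependency(
        provider="ExampleCloud",            # hypothetical designated provider
        relationship_class="direct",
        services_in_scope=["managed database", "object storage"],
        substitutability="difficult",
        patch_sla_reference="provider security bulletin cadence",
    ),
]
print(register[0].substitutability)  # → difficult
```

The `patch_sla_reference` field is the bridge to the exception process: when a vulnerability in the provider's service cannot be patched by the entity, the compensating-control conversation starts from what the provider has published.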
The four-hour incident reporting timeline gets a lot of attention, and a fair amount of it is imprecise. The actual rule, set out in Commission Delegated Regulation (EU) 2025/301 (the RTS on incident reporting timelines under DORA Article 19), has two parts. An initial notification goes to the competent authority within four hours of the entity classifying the incident as major, and in any case within 24 hours of the entity becoming aware of it. The piece worth being precise about: a VM programme does not produce that notification. It produces the inputs incident response uses to hit the timeline. The asset register, the blast-radius assessment, and the exposure data feed IR. They do not replace it.
NIS2, fragmented as it is
NIS2 is the framework most likely to produce a headache disproportionate to its content. The substance is straightforward. The operational reality of multi-jurisdiction enforcement is not.
By 1 January 2026, twenty of the twenty-seven EU Member States had completed transposition into national law. The other seven were at various stages of late delivery, and the European Commission has been turning the screws. Infringement proceedings opened against twenty-three Member States on 28 November 2024 for missing the original 17 October 2024 deadline. A reasoned opinion went to nineteen Member States on 7 May 2025. Each step accelerated the rate of transposition, but the result for any organisation operating across more than one Member State is the same: a single NIS2 obligation expressed in slightly different national language, supervised by slightly different authorities, in each jurisdiction.
The one place in NIS2 that names vulnerability management directly is Article 21(2)(e): security in network and information systems acquisition, development and maintenance, including vulnerability handling and disclosure. That is the only mention of the phrase in the entire directive. The rest of Article 21 covers the surrounding cybersecurity risk-management measures: risk analysis policies, incident handling, business continuity, supply chain security, cryptography, MFA, basic cyber hygiene.
Article 20 is where governance lives. Management bodies of essential and important entities approve the cybersecurity risk-management measures, oversee their implementation, and are personally liable for breaches of that obligation. The legal weight behind the senior-accountability point Episode 4 made in the operating model is real. Choosing not to patch a critical-severity finding inside a regulated entity is a board-level decision, and it creates personal exposure if it is not documented.
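Given that personal exposure, the exception register has to make each decision not to patch visibly owned and time-bounded. A minimal sketch of the expiry check, with invented entries and illustrative field names:

```python
from datetime import date

# Each exception carries a named accountable owner, an expiry, and the
# compensating control that justified acceptance (all values illustrative).
exceptions = [
    {"finding": "CVE-2026-0003", "owner": "CISO", "expires": date(2026, 3, 1),
     "compensating_control": "network segmentation + WAF rule"},
    {"finding": "CVE-2026-0004", "owner": "CISO", "expires": date(2025, 12, 1),
     "compensating_control": "host isolation"},
]

def expired(register, today):
    """An expired exception is undocumented risk: exactly the exposure Article 20 creates."""
    return [e["finding"] for e in register if e["expires"] < today]

print(expired(exceptions, date(2026, 1, 15)))  # → ['CVE-2026-0004']
```

Surfacing expired exceptions on the leadership dashboard turns the board-level accountability from a legal abstraction into a line item someone has to either renew, with a fresh justification, or close by remediating.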
Article 23 sets the reporting cadence: a 24-hour early warning, a 72-hour incident notification, a one-month final report. Same rule as DORA. Same rule as the UK CSRB framework. Same observation: the VM programme produces the inputs, the IR process produces the report.
A Five Pillars programme satisfies Article 21(2)(e) once the policy and process are documented and consistently operated. The other Article 21 obligations are met by the surrounding security programme, not by VM specifically. The cross-jurisdictional fragmentation does not change what good VM looks like. It just changes which supervisory authority is asking.
The UK Cyber Security and Resilience Bill, where it is now
The UK Cyber Security and Resilience Bill was introduced to the Commons on 12 November 2025, received its Second Reading on 6 January 2026, and completed its Committee Stage by late February 2026. The amended Bill was published on 25 February 2026 and is currently moving toward Report Stage and Third Reading in the Commons before crossing to the Lords. Royal Assent is expected in late 2026, with phased implementation likely to run into 2028.
That timing matters because the Bill is not yet law. Organisations that look likely to come into scope are best off preparing against its stated direction of travel rather than treating any of its provisions as enforceable today. NIS Regulations 2018 are still the active regime in the meantime.
The biggest scope change for VM is the introduction of Relevant Managed Service Providers as a regulated category. Medium and large MSPs (defined as not small and not micro by the standard UK threshold tests) will come into NIS scope for the first time. The definition is jurisdictional but extraterritorial in effect: an MSP providing managed services in the UK is in scope, whether or not it has a UK establishment.
The second change is the critical-supplier designation power. Suppliers to operators of essential services, relevant digital service providers, or RMSPs can be designated as critical suppliers if their failure could disrupt the in-scope service, and designation pulls them into the same regulatory framework. This is the UK’s answer to the same problem DORA addressed with CTPPs. The route in is different: the UK approach is supplier-driven, DORA’s is provider-driven.
For VM specifically, the Bill itself does not introduce new technical obligations. The phrase it uses is the familiar “appropriate and proportionate technical and organisational measures.” The detail will land in secondary legislation and a regulator-issued code of practice, which the NCSC will inform and which will lean heavily on the existing NCSC vulnerability management principles.
The reporting cadence is two-stage: 24-hour initial notification and 72-hour fuller report. Same pattern as NIS2 and DORA. Same observation as before: the VM programme produces the inputs, not the report.
The practical posture, given where the Bill is: an organisation that already runs a Five Pillars programme as a documented, measured discipline is well placed for the Bill’s eventual commencement. An organisation that does not is exposed to whichever obligation lands first, and the first one to land might not be the Bill at all. It might be a procurement requirement from a customer who is already in scope today.
Framework coverage scales with programme rigour
A Five Pillars programme is not a binary state. Episode 11 will set out the maturity model in detail; for now, the point that matters is that framework coverage scales with the rigour of the programme, and the scaling is predictable.
A programme operating reactively, with ad hoc scanning and no formal SLAs, satisfies no framework. Even ISO 27001 A.8.8 requires a documented process. A CDE in this state cannot pass a PCI assessment. NIS2 and CSRB obligations are unmet by definition. The first-order task in this state is to document and operate, not to optimise.
A programme that is defined and documented, with scheduled scanning, SLAs by severity, and a documented exception process, satisfies ISO 27001 A.8.8 with evidence. NIS2 and CSRB baseline obligations are met for the items Article 21(2)(e) covers. PCI compliance is achievable for non-CDE systems; the CDE itself usually demands the next step up.
A programme that is measured and risk-led, with multi-signal prioritisation, tier-differentiated SLAs, EPSS-informed risk ranking, and continuous evidence, satisfies all the major frameworks for in-scope systems. QSA-ready evidence drops out as a natural output. DORA Article 8 is answered. The CSRB code of practice will, on any reasonable reading of NCSC’s published direction, find what it is looking for.
A programme that is optimised and continuous, with automated remediation, dynamic SLA adjustment, and continuous evidence pack production, exceeds the current frameworks. The frameworks become lagging indicators of what good practice looks like, rather than the standard against which the programme is measured.
The point is not that the third level is a goal because it satisfies the most frameworks. The point is that good security practice produces this coverage as it matures. Compliance is the readout, not the steering wheel.
Where the episode lands
The compliance frameworks UK and EU organisations face in 2026 are real, simultaneous, and consequential. PCI-DSS v4.0.1 carries QSA assessment and contractual penalty. ISO 27001 certification is increasingly a procurement gate. DORA is enforceable now for financial entities, and the CTPP designations have added a third-party dimension that did not exist before. NIS2 is being supervised, unevenly, across the EU. The UK CSRB will arrive in late 2026 with phased implementation through 2028.
A team that builds a Five Pillars programme around what good security genuinely requires walks into all of these frameworks with the evidence already in hand. A team that builds a programme around any one of them satisfies that one and ends up with a worse security outcome than they started with.
That is not a thesis. It is the readout of having built programmes both ways, more than once, and then watched the audit teams who handle the framework conversations work from the evidence each one produced. The good answers came from the same set of artefacts every time. The artefacts came from the programme operating, not from the audit being prepared for.
Episode 11 takes the staged progression sketched here and turns it into a maturity model across the Five Pillars. The maturity model gives an organisation an honest baseline, a credible roadmap, and a frame for the conversation with leadership about where to invest next.
For now: build the programme. Let the frameworks find what they need in it.
If you sit on the buying side of this conversation, the companion piece on what audit teams actually ask for is here.