Under MLR 2017, a CDD risk rating must reflect a genuine, evidenced assessment of the client's money laundering risk, not a default position taken to satisfy a workflow. The rating needs three things to hold up: a documented methodology, the specific factors that drove the score, and a record of who reviewed it and when.
Most firms can show me a risk rating on a CDD file. Far fewer can show me why that rating is the rating, and not a different one.
That gap is what the FCA looks for now. The Money Laundering Regulations 2017 require firms to take a risk-based approach to CDD, and the regulator's published guidance is unambiguous about what that means in practice. A rating is not the output of the process. It is the conclusion of an argument the firm is making about the client. If the argument cannot be reconstructed from the file, the rating is not defensible, regardless of whether it turns out to be correct.
In a typical IFA or boutique wealth manager, the distribution of client risk ratings tells you most of what you need to know. Eighty percent rated medium, ten percent low, ten percent high, give or take. That is not a risk distribution. That is a workflow distribution. Medium gets assigned because medium does not require the analyst to justify anything. Low forces a defence. High triggers EDD. Medium is the path of least resistance, and most firms have been quietly walking it for years.
The FCA knows this. So does anyone who has sat across the table from a Skilled Person.
Regulation 18 of MLR 2017 requires firms to identify and assess the risks of money laundering and terrorist financing to which their business is subject. Regulation 28 requires CDD measures proportionate to that assessed risk. The proportionality language is doing the heavy lifting. It means the rating itself has to be supported by the firm's risk assessment, and the CDD measures applied have to be supported by the rating.
The JMLSG Guidance Part I sets this out in more practical terms. A risk-based approach is not a way of doing less work. It is a way of focusing work where the risk actually sits. The guidance is explicit that firms cannot rely on customer type alone, jurisdiction alone, or product alone. The rating has to integrate all of them, weighted by the firm's documented risk appetite.
In practice, that means a defensible rating must be able to answer three questions.
The first is methodological. What system produced this rating, what version of the methodology was in force when the rating was assigned, and where is that methodology documented?
The second is factual. What specific factors about this client drove the score in the direction it went, and what evidence supports those factors?
The third is procedural. Who reviewed the rating, when, and against what governance threshold did they review it?
If any of those three questions cannot be answered from the file, the rating is exposed.
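The three questions map naturally onto data the file must carry. Below is a minimal sketch in Python of what a rating record would need to hold; the field names are hypothetical, since neither MLR 2017 nor the JMLSG guidance prescribes a schema.

```python
# Illustrative sketch only: field names are invented, not taken from
# MLR 2017 or JMLSG guidance. The point is that each of the three
# questions maps onto data the file must actually hold.
from dataclasses import dataclass, field

@dataclass
class FactorEvidence:
    factor: str        # e.g. "source_of_wealth"
    score: int         # contribution under the firm's methodology
    evidence_ref: str  # pointer to the underlying documentation

@dataclass
class RatingRecord:
    # Question 1: methodological
    methodology_version: str   # version in force when the rating was assigned
    methodology_doc_ref: str   # where that version is documented
    # Question 2: factual
    factors: list[FactorEvidence] = field(default_factory=list)
    # Question 3: procedural
    reviewer: str = ""
    reviewed_on: str = ""           # ISO date
    governance_threshold: str = ""  # the standard the reviewer applied
    rating: str = "unrated"

def is_defensible(r: RatingRecord) -> bool:
    """A file that cannot answer all three questions is exposed."""
    return bool(
        r.methodology_version and r.methodology_doc_ref
        and r.factors and all(f.evidence_ref for f in r.factors)
        and r.reviewer and r.reviewed_on and r.governance_threshold
    )
```

The check is deliberately crude: it tests only whether each question has *an* answer on the file, which is the threshold most files fail long before anyone argues about whether the answer is a good one.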
I have spent the last decade looking at CDD files. The same patterns recur, regardless of firm size.
The first failure pattern is methodological drift. A firm adopts a risk-rating methodology in 2019, updates the wording in a policy document in 2022, and starts applying refined criteria in 2024. By 2026, three different cohorts of clients have been rated against three subtly different rule sets, and nobody has gone back to remediate the earlier ones. Under regulatory scrutiny, the firm cannot defend the older ratings because the methodology that produced them is no longer the firm's stated methodology.
The second is factor flattening. The methodology says the analyst should consider jurisdiction, product complexity, source of wealth, transactional behaviour, and PEP exposure. The form on the analyst's screen has a single dropdown labelled Risk Rating. The five factors collapse into one judgement at the moment of capture, and the file contains no record of how each factor was weighed. The rating is recoverable, but the reasoning is not.
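The fix for factor flattening is to capture each factor's score and evidence before anything is aggregated. A hedged sketch of what that looks like, with invented weights and band thresholds standing in for whatever the firm's documented methodology actually specifies:

```python
# Hypothetical weights and band cut-offs: a real firm's methodology
# document would define both. The point is that per-factor scores
# survive into the record instead of collapsing into one dropdown value.
WEIGHTS = {
    "jurisdiction": 0.25,
    "product_complexity": 0.20,
    "source_of_wealth": 0.25,
    "transactional_behaviour": 0.15,
    "pep_exposure": 0.15,
}

def rate_client(factor_scores: dict[str, int]) -> dict:
    """factor_scores: per-factor scores on a 1 (low) to 5 (high) scale."""
    weighted = {f: factor_scores[f] * w for f, w in WEIGHTS.items()}
    total = sum(weighted.values())
    if total < 2.0:
        band = "low"
    elif total < 3.5:
        band = "medium"
    else:
        band = "high"
    # The per-factor contributions are retained: the reasoning,
    # not just the conclusion, is recoverable from the record.
    return {"band": band, "total": round(total, 2), "contributions": weighted}
```

Whether the aggregation is a weighted sum, a matrix, or an analyst judgement is a methodology choice; what matters for defensibility is that the inputs to it are captured individually, with evidence references, at the moment of rating.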
The third is governance theatre. The file shows that the rating was reviewed by an MLRO. The reviewer's name and date are present. What is absent is any indication of what the reviewer actually examined, what tolerance they were applying, or what would have caused them to reject the rating. A signature without a standard is not governance, and a Skilled Person will say so.
The fourth is the worst, because it is invisible until something goes wrong. The methodology in the policy is not the methodology in the system. The system was built around a vendor's defaults, customised twice, never re-documented. The policy describes what the firm thinks the system does. The system does something different. Until the FCA asks for evidence that the two match, nobody knows there is a problem.
A defensible rating has four properties.
It is methodologically traceable. The version of the methodology that produced the rating is recorded against the rating itself, not just in a policy document somewhere else. If the methodology is updated in March, ratings produced in February are tagged as having been produced under the previous version, and the firm has a documented position on whether and when those ratings will be re-run.
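The versioning mechanics are simple, which is part of the argument for doing them. A sketch, with invented dates and version labels, of tagging a rating with the version in force at assignment and flagging ratings produced under a version that has since been superseded:

```python
# Sketch of methodology-version tagging. Dates and version labels
# are invented for illustration.
from datetime import date

# Each entry: (effective_from, version label). Kept sorted by date.
METHODOLOGY_HISTORY = [
    (date(2019, 1, 1), "v1.0"),
    (date(2022, 6, 1), "v2.0"),
    (date(2024, 3, 1), "v3.0"),
]

def version_in_force(assigned_on: date) -> str:
    """Return the methodology version in force on a given date."""
    version = None
    for effective_from, v in METHODOLOGY_HISTORY:
        if assigned_on >= effective_from:
            version = v
    if version is None:
        raise ValueError("rating predates first methodology version")
    return version

def needs_rerun_decision(assigned_on: date, today: date) -> bool:
    """True if the rating was produced under a superseded version,
    meaning the firm needs a documented position on re-running it."""
    return version_in_force(assigned_on) != version_in_force(today)
```

Note that the flag does not force a re-rating; it forces a *decision* about re-rating, which is what the firm has to be able to evidence.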
It is factor-evidenced. The form, the system, or the file records which factors drove the score and what evidence supported them. Source of wealth corroboration is not a tick box, it is a reference to the underlying documentation. Country risk is not a colour code, it is a reference to the country risk matrix in force at the time, with a date stamp.
It is governance-recorded. The review is not just signed, it is bounded. The reviewer's mandate is recorded somewhere a regulator can find it. If the reviewer would have rejected a rating that crossed a particular threshold, that threshold is documented and the file shows where this rating sat against it.
It is challengeable. The firm has a process for an analyst, an MLRO, or an external reviewer to dispute a rating, and the dispute leaves a trace. Defensibility does not mean the rating is right. It means the firm can show how the rating was arrived at, and what would have to be true for it to have been arrived at differently.
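A dispute process that leaves a trace can be as simple as an append-only event log, where a challenge or an override is recorded as an event alongside the original rating rather than as an edit to it. A sketch with illustrative actor and event names:

```python
# Minimal append-only trail sketch. Actor and event names are
# illustrative; the point is that challenges and overrides leave
# a trace alongside approvals, and nothing is ever rewritten.
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self._events = []  # append-only; entries are never removed

    def record(self, actor: str, action: str, detail: str) -> None:
        self._events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,  # e.g. "rated", "challenged", "overridden"
            "detail": detail,
        })

    def challenges(self) -> list[dict]:
        """Disputes are first-class events, not edits to the rating."""
        return [e for e in self._events if e["action"] == "challenged"]

trail = AuditTrail()
trail.record("analyst_a", "rated", "medium under v3.0")
trail.record("mlro", "challenged", "jurisdiction score understates exposure")
trail.record("mlro", "overridden", "rating raised to high")
```

The design choice that matters is the append-only discipline: an override that replaces the original entry destroys exactly the evidence a reviewer would want, namely that the firm's challenge process works.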
The FCA's 2025 to 2030 strategy puts financial crime supervision at the top of the agenda for regulated firms. Sample testing of CDD files is one of the most common methods used in supervisory visits and Skilled Person reviews. Samples tend to be drawn from across the risk distribution rather than from any single rating band. What the regulator is looking for is not perfection. It is consistency between what the firm says it does and what the firm can show it does.
If the gap between policy and practice is small, the firm gets credit for self-awareness. If the gap is large, every other finding is read in the worst possible light.
The work to close that gap is not glamorous. It is methodology versioning, factor capture, governance documentation, and audit trail discipline. None of it is technically difficult. It is just relentless. The firms that do it before they need to are the ones that come out of supervisory contact with their reputations intact.
The firms that wait are the ones that find out, well into a Skilled Person review, that the rating they cannot justify on a single file is now the focal point of the entire engagement.
| Component | What it looks like in practice |
|---|---|
| Methodology version | Tagged against the rating itself, not just the policy document |
| Driving factors | Recorded with evidence references, not collapsed into a single dropdown |
| Governance threshold | Documented and visible against the file |
| Audit trail | Captures challenges and overrides, not just approvals |
Verigrade is the risk-grading and review platform built to produce CDD decisions that hold up under scrutiny. Risk methodology mapped to MLR 2017, JMLSG Part I, and the FATF Recommendations, with methodology versioning, factor capture, and a full audit trail built in.