A CDD file that satisfies a compliance audit is not the same as a CDD file that holds up under regulatory scrutiny. Audits check whether the process happened. Regulators check whether the decisions on the file are defensible: whether the reasoning was recorded, whether the methodology was applied consistently, and whether an external reviewer can follow the logic.
There is a version of CDD that exists in a policy document and a version that exists in practice. The gap between them is wider than most firms would like to admit, and it is not a gap that shows up in a standard compliance audit.
The audit checks whether the file is complete. It checks whether the boxes were ticked, whether the ID was obtained, whether the source of wealth question was answered, whether a risk rating was assigned. If all of those things are present, the file passes. What the audit does not check is whether any of it holds up.
Holding up is different. Holding up means that when the FCA arrives, when a Skilled Person is appointed, when a senior manager is asked to explain a decision that was made three years ago by someone who has since left the firm, there is a clear, documented, defensible record of what was considered, what was not and why, what weight was given to which factors, and who signed off on the outcome. Most CDD files do not contain that. They contain a conclusion with no reasoning.
I spent a decade inside this problem before building anything to address it. As an analyst, as an escalation point, and eventually as Head of Client Due Diligence, I saw the same failure repeat itself at every level. The file would look fine. The checklist was complete. The rating had been assigned. But if you asked why the client had been rated medium rather than high, the answer was usually that the analyst who did the assessment had left, the rationale had never been written down, and the outcome was now the only record of the process.
That is not a compliance failure in the narrow sense. The firm was not ignoring its obligations. It was meeting them, at least in the way they are typically measured. The problem is structural: the tools used to capture CDD decisions were not built to make those decisions defensible. They were built to document that the process happened.
There is a meaningful difference between those two things. Documenting that a process happened satisfies an audit. Documenting how and why a decision was reached is what holds up under scrutiny. The FCA's financial crime supervisory framework and its guidance on the standard of CDD it expects are clear on this point. The regulator wants to see not just that you conducted customer due diligence, but that your approach was proportionate, risk-based, and capable of being explained and defended.
The specific failure point I saw most often was the risk rating. Firms assign risk ratings to clients as part of their CDD process. In principle, the rating reflects the firm's assessment of the risk that client presents. In practice, the rating reflects the path of least resistance. Low requires confidence and documentation to justify. High triggers enhanced due diligence, which is resource-intensive. Medium requires neither. So the distribution of ratings at most boutique wealth managers and IFAs is skewed dramatically toward medium, not because their client base is unusually average in risk profile, but because medium is safe for the person completing the form.
A risk rating that does not differentiate is not a risk rating. It is a filing category. And when the regulator or a Skilled Person examines a book where 80% of clients carry the same rating, the question they ask is not whether the process was followed. It is whether the process was designed to produce defensible decisions or to satisfy an audit.
The version of CDD that survives scrutiny is built differently. It records the reasoning behind every material decision, not just the outcome. It versions the methodology, so that a review conducted in 2026 under methodology v1.4 is distinguishable from one conducted in 2024 under an earlier standard. It routes decisions through the appropriate sign-off chain, so there is a traceable record of who reviewed what and when. And it flags the things that were considered but not pursued, and records why.
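To make that concrete, here is a minimal sketch of the kind of record such a system implies. The field names and shapes are illustrative only, not Verigrade's actual schema: the point is that the rating is one field among several, sitting alongside the reasoning, the methodology version in force, every factor considered (including those set aside), and a sign-off trail.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Factor:
    name: str
    pursued: bool    # did this factor influence the outcome?
    rationale: str   # why it was, or was not, pursued

@dataclass(frozen=True)
class SignOff:
    reviewer: str
    role: str
    timestamp: str   # ISO 8601

@dataclass(frozen=True)
class RiskDecision:
    client_id: str
    rating: str                      # e.g. "low" / "medium" / "high"
    methodology_version: str         # ties the decision to the standard in force
    reasoning: str                   # the narrative rationale, not just the outcome
    factors: tuple[Factor, ...]      # everything considered, including what was set aside
    sign_offs: tuple[SignOff, ...]   # who reviewed what, and when
    decided_at: str

# Illustrative example: the outcome is "high", but the record also preserves
# a factor that was examined and deliberately not pursued, with its reason.
decision = RiskDecision(
    client_id="C-1042",
    rating="high",
    methodology_version="v1.4",
    reasoning="Source of wealth spans two higher-risk jurisdictions; EDD applied.",
    factors=(
        Factor("adverse media", pursued=False,
               rationale="Matches reviewed; all relate to a different individual."),
    ),
    sign_offs=(SignOff("j.doe", "MLRO", "2026-01-15T10:30:00Z"),),
    decided_at="2026-01-15T10:30:00Z",
)
```

Three years later, a reviewer looking at this record can answer the question the checklist never could: not just what the rating was, but which standard it was assessed under, what was set aside and why, and who signed it off.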
This is what we built Verigrade to do. Not to replace the human judgement in CDD, but to give that judgement somewhere to live that produces a record worth relying on when scrutiny arrives.
The firms that will fare best under the FCA's current supervisory focus on financial crime are not necessarily the ones with the most sophisticated processes. They are the ones whose processes were designed from the start to be explained.
Verigrade is the risk-grading and review platform built to produce CDD decisions that hold up under scrutiny. Methodology mapped to MLR 2017, JMLSG Part I, and FATF Recommendations.