Past Business Reviews, s166 reviews, consumer redress schemes, and thematic Consumer Duty exercises share a common structural characteristic: they are operationally complex in a way that is difficult to appreciate from the outside and only becomes apparent once the work is underway. By that point, the choices that determine whether the programme delivers a defensible outcome have already been made, usually implicitly, and are difficult to reverse.
Having led large-scale remediation programmes from inside a regulated firm, I saw the same failure patterns repeat themselves regardless of the type of review, the size of the population, or the seniority of the team running it. The failures were not random. They were structural, and they appeared at the same stages in almost every programme. Understanding them before the work begins is the only reliable way to avoid them.
The first and most consequential failure is the rules-in-force problem. Remediation reviews are retrospective by definition. The question is not whether a firm's conduct meets the standard expected today, but whether it met the standard expected at the time the relevant business was conducted. That sounds obvious. In practice, most remediation frameworks apply contemporary rules to historical conduct without building a clean mechanism for tracking which version of which rule applied at which point in the review period.
This matters enormously in programmes that span regulatory pivot dates. A review covering the period from 2018 to 2024 crosses the Consumer Duty implementation date of 31 July 2023 and multiple iterations of COBS and CONC guidance that changed how suitability and affordability were assessed; any redress calculated from January 2026 onwards must also reflect the FOS interest rate change from 8% simple to base rate plus 1%. A programme that applies a single contemporary standard to that entire period will reach incorrect outcomes, some in the firm's favour and some against the consumer, and none of them defensible if a Skilled Person or the FCA examines the methodology.
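What a clean rules-in-force mechanism can look like is easier to see in code. The sketch below is a minimal illustration, assuming a programme-internal register of rule versions keyed by effective dates; the rule identifier, version labels, and dates are hypothetical examples, not a statement of which provisions changed or when.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class RuleVersion:
    rule_id: str                  # e.g. a COBS or CONC provision reference
    version_label: str            # programme-internal label for this iteration
    effective_from: date          # first date this version was in force
    effective_to: Optional[date]  # None means still in force

# Hypothetical register entries; a real programme would load these from the
# documented methodology, one entry per rule iteration across the review period.
RULE_REGISTER = [
    RuleVersion("COBS 9.2.1", "pre-duty", date(2018, 1, 1), date(2023, 7, 30)),
    RuleVersion("COBS 9.2.1", "post-duty", date(2023, 7, 31), None),
]

def rule_in_force(rule_id: str, conduct_date: date) -> RuleVersion:
    """Return the version of a rule in force on the date the business was conducted."""
    for rv in RULE_REGISTER:
        in_window = rv.effective_from <= conduct_date and (
            rv.effective_to is None or conduct_date <= rv.effective_to
        )
        if rv.rule_id == rule_id and in_window:
            return rv
    raise LookupError(f"no version of {rule_id} in force on {conduct_date}")
```

The point is not the code itself but the discipline it forces: every assessment is made against the version returned for the conduct date, and the register is a single, reviewable artefact rather than knowledge held in reviewers' heads.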
The rule versioning problem is compounded by redress calculation. Remediation programmes typically involve financial compensation, and the quantum of compensation depends on the rules in force at the time of the detriment, the interest basis applicable at different points in the redress period, and the specific methodology applied to the product in question. These calculations need to be consistent across the population, traceable to a documented methodology, and reproducible if challenged. Spreadsheet-based redress calculation fails on reproducibility. It also fails on consistency, because the same formula applied by different reviewers to slightly different data inputs produces different results, and the programme ends up with a distribution of outcomes that cannot be explained.
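To make the interest-basis point concrete, here is a minimal sketch of a piecewise interest calculation, assuming a single pivot date in the interest basis and simple interest throughout. The pivot date, rates, and names are illustrative; in particular, a base-rate-linked basis would track the published base rate over time rather than the constant used here for simplicity.

```python
from datetime import date

# Hypothetical interest bases for a redress period spanning a pivot date.
# Real methodologies differ by product, scheme, and the applicable FOS basis.
PIVOT = date(2026, 1, 1)
RATE_BEFORE = 0.08           # 8% simple per annum
BASE_RATE_PLUS_ONE = 0.0525  # illustrative: base rate + 1%, held constant here

def redress_interest(principal: float, start: date, end: date) -> float:
    """Simple interest on a loss, split across the pivot in the interest basis."""
    def years(a: date, b: date) -> float:
        return max((b - a).days, 0) / 365.0
    before = years(start, min(end, PIVOT))   # time accrued on the old basis
    after = years(max(start, PIVOT), end)    # time accrued on the new basis
    return principal * (RATE_BEFORE * before + BASE_RATE_PLUS_ONE * after)
```

A calculation expressed this way is reproducible by construction: the same inputs always produce the same output, and the basis applied to each portion of the redress period is explicit rather than buried in a cell formula.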
The second structural failure is QA architecture. Every remediation programme has a QA process. Most of them are designed for the wrong purpose. QA in a remediation programme is not primarily a quality gate. It is the mechanism that produces the evidentiary record demonstrating that the programme was run to a consistent standard. When the FCA or a Skilled Person reviews the programme, the question they are asking of the QA record is not whether errors were caught. It is whether the QA process was proportionate to risk, consistently applied, and capable of producing a reliable view of outcomes across the population.
QA that applies the same intensity to straightforward cases as to complex ones is not risk-proportionate and wastes resource on the cases that need it least. QA that is applied inconsistently across reviewers produces a record that looks like a quality gate but functions as a random check. QA that is documented in free-text notes rather than structured fields cannot be aggregated, cannot be reported against, and cannot support a robust view of programme outcomes. Each of these failures is common. Each of them undermines the defensibility of the programme as a whole.
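A sketch of what risk-proportionate routing and structured QA fields can look like follows. The tiers, thresholds, and field names are illustrative assumptions, not a prescribed QA design; what matters is that routing is rule-driven and the QA record is coded rather than free text.

```python
from dataclasses import dataclass, field
from enum import Enum

class QATier(Enum):
    TARGETED = "100% check by a senior reviewer"
    ENHANCED = "second pair of eyes on outcome and redress"
    SAMPLED = "included in the standard sample frame"

@dataclass
class CaseRisk:
    complexity: int            # scored at triage on an illustrative 1-5 scale
    vulnerable_customer: bool  # flags that force the highest QA tier
    redress_estimate: float    # provisional quantum, used for threshold routing

def route_qa(case: CaseRisk) -> QATier:
    """Route a case to a QA tier proportionate to its risk, not uniformly."""
    if case.vulnerable_customer or case.complexity >= 4:
        return QATier.TARGETED
    if case.complexity == 3 or case.redress_estimate > 10_000:
        return QATier.ENHANCED
    return QATier.SAMPLED

@dataclass
class QAResult:
    case_id: str
    tier: QATier
    outcome_upheld: bool  # did QA agree with the reviewer's determination?
    error_codes: list[str] = field(default_factory=list)  # coded, aggregatable taxonomy
    notes: str = ""       # free text is supplementary, never the record itself
```

Because every QA decision lands in structured fields, the programme can report error rates by reviewer, by tier, and by error code, which is exactly the evidentiary record a Skilled Person will ask for.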
The third failure is outcome determination methodology. Remediation programmes involve making a determination for each case in the population: was there detriment, what was its nature, and what is the appropriate redress? These are judgement calls, but they need to be structured judgement calls that produce consistent outcomes across the population and can be explained by reference to a documented methodology.
The most common failure here is that the outcome determination criteria are not documented precisely enough before the work begins. Teams start reviewing cases, edge cases emerge, decisions are made on the fly, and the programme develops an informal case law that is understood by the experienced reviewers and not by anyone who joins later. By the time the programme is well into the population, there are systematic inconsistencies in how borderline cases have been treated, and resolving them requires re-work that could have been avoided.
The second common failure is that the outcome determination is not linked to a versioned methodology in a way that would allow a regulator or Skilled Person to reconstruct the basis on which any given case was decided. If the methodology changes mid-programme, which it will, because edge cases always force refinement, there needs to be a clean record of which cases were assessed under which version and why. That record rarely exists in programmes that were not set up to produce it.
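A minimal sketch of the record that makes that reconstruction possible: each determination pins the methodology version applied at the point of decision, so cases assessed under a superseded version can be identified precisely rather than by re-reviewing everything. All names and fields here are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Determination:
    case_id: str
    methodology_version: str  # pinned at the point of decision, e.g. "v1.2"
    assessed_on: date
    outcome_code: str         # coded outcome, e.g. "DETRIMENT_UPHELD"
    rationale_ref: str        # pointer to the documented criterion applied

def reassessment_population(determinations: list[Determination],
                            superseded: set[str]) -> list[Determination]:
    """Cases decided under superseded methodology versions, scoped for targeted re-work."""
    return [d for d in determinations if d.methodology_version in superseded]
```

When the methodology moves from one version to the next, the impact analysis becomes a query rather than an argument: the programme can state exactly which cases were decided under the old version and whether the change affects their outcomes.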
For consultancies and Skilled Person-capable advisory firms running these programmes, the operational challenge is building infrastructure that handles rules-in-force versioning, risk-proportionate QA routing, structured outcome determination, and full audit trail generation at the case level, without that infrastructure becoming so complex that it slows the programme down or requires a separate technical team to maintain.
That is the problem Veratum was built to solve. It structures the review process itself, not just the documentation of it, so that the outputs are defensible at every stage rather than being made defensible retrospectively through a documentation exercise after the work is done. The methodology is versioned and applied consistently. QA is routed proportionately to risk. Outcome determination is structured and linked to the applicable rules. The audit trail is built at the point of decision, not reconstructed from notes afterwards.
The firms that run remediation programmes most effectively are the ones that invest in the infrastructure before the work begins rather than discovering what they needed once the failures have already compounded. In a programme covering tens of thousands of cases, the cost of structural failure is not measured in individual case errors. It is measured in the credibility of the programme as a whole, and the cost of putting that right once the regulator has seen it.
Veratum is a specialist remediation review engine for s166 reviews, Past Business Reviews, consumer redress schemes, and thematic Consumer Duty exercises. Rules-in-force logic, proportionate QA routing, full audit trail.