Governance and compliance in emerging tech: What FS leaders must do

In financial services, we've embedded the last set of rules, only to find the goalposts have moved again. That nineteen-point drop in confidence around managing regulation doesn't reflect weak teams. It reflects something deeper: a widening gap between how quickly businesses ship AI-infused change and how quickly the second line of defence can evidence control, testing and accountability.

I call it governance lag, and it's real.

Closing the governance gap

Three interconnected shifts are driving this challenge.

First, AI is everywhere. No longer confined to innovation labs, it underpins underwriting, collections, AML detection, claims triage, and marketing campaigns. The governance surface area has grown tenfold in three years.

Second, regulation is borderless. The EU AI Act applies even when providers sit outside Europe, provided the output is used in the Union. For South African institutions serving EU clients or relying on EU vendors, that extraterritorial reach matters more than most boards realize.

Third, the burden of proof has shifted. Regulators, including SARB and the Prudential Authority locally, expect traceability, intentional human oversight, audit logs, and technical files for high-risk AI. Not tidy narratives crafted after the fact.

To bridge this gap without grinding to a halt, run a two-speed governance model. Keep an express lane for low-risk experiments: sandboxed, synthetic data, quick red-team testing. Use a gated lane for anything touching real customers, the balance sheet, or conduct risk. Same organization, two speeds. The clarity alone reduces friction at go-live.

Pair this with a regulatory radar, but treat it like a product backlog, not a static spreadsheet. Track POPIA, King IV, ISO/IEC 42001, DORA, the EU AI Act. Put them in one place with named owners and run sprints against them. Make compliance operational, not ornamental.

Picture a collections bot pilot. In the express lane you're working with synthetic data, learning what breaks. To scale into production, the gated lane kicks in: risk tiering, a Data Protection Impact Assessment, fairness testing, human-in-the-loop design that's more than cosmetic, customer notice, and live monitoring thresholds. Same project, two lanes. That clarity turns governance from bottleneck to competitive advantage.

Embedding compliance by design

My rule of thumb is simple: if you can't describe an AI use case on one page, you're not ready to build it. That one-pager forces the right conversations upfront—business purpose, data touched, risk tier, required artefacts, approval gates.

Once clarity exists, wiring compliance into delivery becomes practical. Every new use case enters through intake. If risk comes back high, several things become automatic: Data Protection Impact Assessments trigger, fairness testing kicks in for protected attributes, explanation design starts for customers and auditors, and someone names who can intervene and on what evidence.
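The intake logic above can be sketched as a simple tier-to-artefact mapping. This is an illustrative sketch, not a reference implementation: the tier names, artefact labels, and the function itself are assumptions invented for this example.

```python
# Hypothetical mapping from a use case's risk tier (set at intake) to the
# artefacts that become mandatory before it can proceed. Labels are illustrative.
REQUIRED_ARTEFACTS = {
    "low": ["one_pager"],
    "medium": ["one_pager", "fairness_test_plan", "monitoring_plan"],
    "high": [
        "one_pager",
        "dpia",                   # Data Protection Impact Assessment
        "fairness_test_plan",     # testing against protected attributes
        "explanation_design",     # for customers and auditors
        "human_oversight_owner",  # the named person who can intervene
        "monitoring_plan",
    ],
}

def missing_artefacts(risk_tier: str, submitted: set) -> list:
    """Return the artefacts still outstanding for this risk tier.

    Unknown tiers default to the strictest requirements."""
    required = REQUIRED_ARTEFACTS.get(risk_tier, REQUIRED_ARTEFACTS["high"])
    return [a for a in required if a not in submitted]

# A high-risk use case that has only submitted its one-pager:
gaps = missing_artefacts("high", {"one_pager"})
```

Encoding the mapping as data rather than prose means the "several things become automatic" step is enforced by the intake tool, not by someone remembering the checklist.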

Data governance must be operational, not theoretical. For South African institutions, POPIA principles (purpose limitation, minimality, security safeguards) should exist as automated tests in your pipelines. If a consent tag is missing, the build fails. If drift breaches thresholds, an alert fires. Turn policy into automated checks. Don't rely on someone's memory.
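A policy-as-code check of this kind can be as small as the sketch below. The field names, tags, and thresholds are assumptions for illustration; in practice they would come from your own data contracts and monitoring baselines.

```python
# Illustrative policy-as-code checks for a data pipeline. A CI step would run
# these and fail the build or raise an alert on breach. Schema is hypothetical.

def check_consent_tags(records: list) -> list:
    """Return indices of records missing a consent tag (a POPIA-style
    purpose-limitation check). Any hit should fail the build."""
    return [i for i, r in enumerate(records) if not r.get("consent_tag")]

def check_drift(current_rate: float, baseline_rate: float,
                tolerance: float = 0.05) -> bool:
    """True if a monitored rate has drifted beyond tolerance from baseline,
    in which case an alert should fire."""
    return abs(current_rate - baseline_rate) > tolerance

records = [
    {"customer_id": 1, "consent_tag": "marketing_opt_in"},
    {"customer_id": 2, "consent_tag": None},  # missing consent: build fails
]
failures = check_consent_tags(records)
drift_alert = check_drift(current_rate=0.31, baseline_rate=0.22)
```

The point is the shape, not the specifics: each policy clause becomes a test that runs on every build, so compliance evidence accumulates automatically.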

Here's the uncomfortable part: some vendors won't give you the logs or explainability you need. When that happens, you need contractual teeth: audit rights, incident timelines, termination clauses. Vendor pushback is valuable signal about their maturity and your leverage. This is already in the Prudential Authority and SARB's sights, particularly around operational resilience. DORA will amplify that pressure for EU-linked operations, and it will cascade into African supply chains whether we like it or not.

Managing ethical and operational risks

Bias surprises leaders because it hides in data, in proxies you didn't know were proxies, in business rules that felt harmless ten years ago. Start with parity checks that make sense for your market and product, monitor them in production, and brace for trade-offs. You will balance fairness with profitability in credit and pricing. That tension is real and unavoidable. Name it, decide it once with the right stakeholders around the table, then codify the decision so you're not relitigating it every quarter.
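One common parity check is a demographic parity gap: compare approval rates across groups and flag when the gap exceeds a threshold your stakeholders have agreed. A minimal sketch, with an illustrative threshold:

```python
# Hedged sketch of a demographic parity check. The 0.1 threshold is an
# assumption; the right tolerance depends on your market and product.

def approval_rate(decisions: list) -> float:
    """Share of approvals in a list of 1 (approved) / 0 (declined) outcomes."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

def breaches_parity(group_a: list, group_b: list,
                    threshold: float = 0.1) -> bool:
    """True when the gap exceeds the agreed tolerance and needs escalation."""
    return parity_gap(group_a, group_b) > threshold

# Example: 80% approval for one group vs 40% for another.
gap = parity_gap([1, 1, 1, 1, 0], [1, 0, 1, 0, 0])
```

Running this in production monitoring, not just pre-launch, is what turns the "decide it once, codify it" principle into something auditable.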

Transparency lands with two audiences. Customers need plain-language reasons and simple appeal routes. Auditors and regulators want evidence: model cards, feature attributions, stability analysis, the technical file the EU AI Act expects for high-risk systems. Build these artefacts into your delivery process rather than bolting them on before an audit.

In financial services, ethical risk isn't abstract. Algorithmic pricing in insurance or automated collections targeting vulnerable customers can breach Treating Customers Fairly principles and trigger conduct findings. These scenarios are already on the FSCA and SARB's radar, and scrutiny is tightening.

Human oversight only works if people have timely signals and clear authority to act. Define thresholds that trigger review, specify what information reviewers see, and document what an override does in the system. Then test it. If your human-in-the-loop is just someone watching a dashboard they don't understand, you haven't solved the problem.
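The threshold-and-authority pattern above can be made concrete as a routing rule that always records *why* a case went to a human. The score band and exposure limit below are hypothetical values, not recommendations:

```python
# Sketch of a threshold-based human-review trigger. Bands and limits are
# illustrative assumptions; in practice they would be set by risk appetite.

def needs_human_review(score: float, amount: float,
                       score_band: tuple = (0.4, 0.6),
                       amount_limit: float = 10_000.0) -> tuple:
    """Route borderline scores and large exposures to a human reviewer.

    Returns (review_needed, reason) so the trigger is documented in the
    system, not left to a dashboard someone may or may not be watching."""
    lo, hi = score_band
    if lo <= score <= hi:
        return True, "score in borderline band"
    if amount > amount_limit:
        return True, "exposure above limit"
    return False, "auto-decision within mandate"

decision = needs_human_review(score=0.55, amount=2_500)
```

Because every path returns a reason string, the override trail the regulator asks for is produced as a side effect of normal operation.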

Resilience matters too. Generative tools create new risks (prompt injection, data leakage, output manipulation) that traditional model risk frameworks weren't designed for. DORA raises the floor on operational resilience through incident management, testing, and oversight of critical ICT providers. Even if you're not directly in scope, your European counterparties will expect similar standards when they audit you.

Global rules, local impact

The EU AI Act and DORA are reshaping governance far beyond Europe.

Scope matters. You can build and host a credit model in Johannesburg, but if its decisions affect borrowers in Paris or Frankfurt, the Act applies to you. That's a global compliance footprint whether you planned for it or not.

Supply chains are where most firms underestimate the impact. DORA has applied since 17 January 2025, harmonizing ICT risk management, incident reporting, testing, and third-party oversight across EU financial institutions. This shows up in contracts immediately: registers of information, tighter subcontracting rules, and threat-led penetration testing requirements pushed down to vendors everywhere, including South African and Middle Eastern suppliers.

Regionally, POPIA and King IV remain the local anchor: POPIA for privacy, King IV for board-level technology governance. In the Gulf, DIFC and ADGM data protection regimes are GDPR-aligned and actively supervised. SDAIA in Saudi Arabia is signaling clear direction on responsible AI through its ethics principles, increasingly shaping procurement standards across the region.

The lingua franca across all of these? Traceability, transparency, and testability. If your AI governance can't deliver those three on demand, you're not ready.

Making governance work

The real challenge isn't technical, it's alignment. What does "high risk" actually mean in your environment? Who owns the model inventory and evidence behind it? What satisfies your board versus your supervisor, and when do those expectations diverge?

Once that shared language exists, governance becomes practical. Controls live where teams already work: intake in Jira or ServiceNow, automated checks in CI/CD, live telemetry feeding risk registers. Vendor diligence becomes a living register, not a static PDF. DORA is making this level of supplier clarity the new normal.

ISO/IEC 42001 gives you an organization-wide operating rhythm, so the same governance muscles get used across use cases rather than reinventing wheels every sprint. Most firms start by aligning to the standard without chasing formal certification. Get the bones right first, then decide if the badge adds value. That usually takes six to nine months with a small, dedicated team.

Three questions to ask this week

One. If a supervisor gave us 48 hours to explain our three highest-risk AI use cases, could we provide the technical file, human oversight design, and monitoring evidence without scrambling? Be honest. If the answer is "probably not," that's your starting point.

Two. Do our vendor contracts give us the audit rights, incident timelines, and sub-processing visibility that DORA would expect, even if we're not directly in scope today? Scope has a way of expanding, and renegotiating under time pressure is expensive.

Three. Is our board dashboard tied to actual control testing and live alerts, or is it green dots because nobody's checking underneath? Boards know the difference, even if they don't always say it out loud.

Looking ahead

Supervisors are shifting from periodic reviews to something closer to real-time assurance, and AI-driven regulatory reporting will become mainstream faster than most firms expect. That lowers manual compliance costs over time, but it also raises expectations for continuous monitoring and tighter feedback loops. Boards will start asking for dashboards showing live control effectiveness, not just policy adoption rates or training completion stats.

As SARB and the FSCA deepen their focus on operational resilience and digital conduct, governance stops being a compliance chore and starts becoming a genuine differentiator. The firms that get this right will move faster, take smarter risks, and build greater trust. The ones that don't will spend the next three years firefighting issues that better design would have caught early.

Governance isn't about slowing innovation. It's about proving you can innovate responsibly.

Author:

Shane Cooper, Head of Digital Advisory

Want to know more?