Boardroom checklist for responsible AI adoption

Authored by Manoj Ajgaonkar, Partner, Digital, Trust, Transformation

In August 2025, the Reserve Bank of India released its FREE-AI Framework—a comprehensive blueprint for responsible AI adoption in financial services. The framework, grounded in seven foundational principles and 26 actionable recommendations, emerged from extensive consultations with banks, NBFCs, and fintechs. Among its key mandates: board-approved AI policies, AI-specific risk assessments in product approvals, and enhanced governance mechanisms. The message to financial sector boards was unambiguous—AI governance is no longer optional or delegable. It's a board-level accountability that requires named individuals, clear processes, and the willingness to halt deployments that don't meet ethical standards.

While the RBI's framework addresses financial services specifically, the governance imperative it represents extends across sectors. Boards in healthcare, technology, manufacturing, and professional services are discovering the same uncomfortable truth: AI governance failures don't announce themselves politely. They arrive as regulatory notices, media inquiries, and customer complaints—often simultaneously.

The problem isn't that boards don't care about responsible AI. It's that they've treated it as a compliance exercise rather than a strategic capability, delegating accountability without the expertise to exercise it.

The governance theatre problem

Most organizations respond to AI risk by creating an ethics committee and drafting principles. In our work across BFSI and technology sectors, we've observed a common pattern: organizations lack a clear roadmap for AI governance implementation. Instead, they produce multiple policy documents—comprehensive in scope, well-intentioned in purpose—but struggle to translate them into operational practice. These frameworks often remain static references rather than living tools that guide daily decision-making.

During risk assessments, we routinely find marketing teams deploying customer segmentation models no one vetted for bias, HR departments using resume screening tools without understanding their training data, and operations groups implementing systems that bypass IT governance entirely. Frameworks that live in documents don't change behaviour. Responsibility without accountability is just aspiration.

What actually works: Distributed ownership

In designing AI governance for one of our clients, we paired the centralized committee model with accountability embedded across the AI lifecycle: product owners justify use cases against explicit risk criteria before development, data scientists document model limitations in business language, and compliance teams have the authority to trigger fail-safe mechanisms if concerns emerge post-deployment.

At its core is a decision matrix that forces explicit trade-offs. When a business unit proposes high-risk AI, such as algorithmic pricing that could disadvantage protected groups, the framework doesn't say "get ethics approval." It says "demonstrate how you'll detect harm, who monitors performance, and what triggers intervention." No clear answers? The project doesn't proceed.
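As an illustration only, the sketch below shows how such a gate might be expressed in code. The risk tiers, field names, and criteria are hypothetical assumptions for this article, not the client's actual framework.

```python
# Hypothetical sketch of a pre-development risk gate for AI use cases.
# Field names and criteria are illustrative, not a prescribed standard.
from dataclasses import dataclass


@dataclass
class AIUseCaseProposal:
    name: str
    risk_tier: str                      # e.g. "low", "medium", "high"
    harm_detection_method: str = ""     # how harm will be detected post-launch
    performance_owner: str = ""         # named individual accountable for monitoring
    intervention_trigger: str = ""      # threshold or event that forces review


def gate_decision(proposal: AIUseCaseProposal) -> tuple[bool, list[str]]:
    """Return (approved, unmet_criteria); high-risk proposals must answer all three questions."""
    unmet = []
    if proposal.risk_tier == "high":
        if not proposal.harm_detection_method:
            unmet.append("No method described for detecting harm")
        if not proposal.performance_owner:
            unmet.append("No named owner for post-deployment performance")
        if not proposal.intervention_trigger:
            unmet.append("No trigger defined for intervention or rollback")
    return (len(unmet) == 0, unmet)


# Example: an algorithmic-pricing proposal with no monitoring owner does not proceed.
proposal = AIUseCaseProposal(
    name="dynamic-pricing-v1",
    risk_tier="high",
    harm_detection_method="weekly disparate-impact test across customer segments",
)
approved, gaps = gate_decision(proposal)
print(approved, gaps)  # False, with two unmet criteria listed
```

The point of the sketch is not the code itself but the design choice it encodes: approval is the output of explicit, pre-agreed criteria, not a committee's discretion.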

The OECD's updated AI Principles emphasize that responsible AI requires accountability from all actors involved in AI systems—a shift from aspirational ethics to operational responsibility embedded throughout organizations.

Six questions boards must answer

  1. Who owns each AI system's performance post-deployment? Too many boards approve projects based on accuracy metrics without tracking drift, fairness, or edge cases after launch.
  2. Can you kill an AI project mid-flight? If the answer involves convening committees or "escalating through channels," you have a coordination problem, not governance.
  3. What's our exposure under India's DPDP Act, the EU AI Act, and SEC disclosure requirements? As MIT Sloan Management Review notes, companies treating compliance as a legal department problem systematically underestimate board-level liability.
  4. Do we know which AI systems we're running? In our assessments, we routinely find organizations using many more AI applications than they've formally approved. Shadow AI—tools adopted by departments without central visibility—represents one of the largest governance gaps we encounter.  
  5. Can our people actually challenge AI decisions? At a healthcare client, staff could flag algorithmic treatment recommendations—but the escalation process was so cumbersome most didn't bother. Governance requiring heroic effort fails when you need it most.
  6. What would trigger us to stop using a deployed AI system? Most organizations articulate go-to-market criteria. Few have clear off-ramps with specific thresholds for mandatory review or decommissioning, as sketched below.
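To make questions 1 and 6 concrete, here is a minimal, hypothetical sketch of what pre-approved off-ramp thresholds might look like in monitoring code. The metric names and threshold values are assumptions for illustration, not recommended figures.

```python
# Hypothetical post-deployment off-ramp check: metrics and thresholds are illustrative only.

# Thresholds a board or risk committee might pre-approve (assumed values).
OFF_RAMP_THRESHOLDS = {
    "population_stability_index": 0.25,   # input drift beyond this forces review
    "demographic_parity_gap": 0.10,       # fairness gap beyond this forces review
    "error_rate": 0.08,                   # accuracy degradation beyond this forces review
}


def off_ramp_check(metrics: dict[str, float]) -> list[str]:
    """Compare current monitoring metrics against pre-approved thresholds.

    Returns the list of breached criteria; under the governance policy sketched here,
    any breach triggers mandatory review, and repeated breaches trigger decommissioning.
    """
    return [
        f"{name} = {metrics[name]:.2f} exceeds limit {limit:.2f}"
        for name, limit in OFF_RAMP_THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]


# Example: a weekly monitoring run for a deployed scoring model (hypothetical numbers).
weekly_metrics = {
    "population_stability_index": 0.31,
    "demographic_parity_gap": 0.06,
    "error_rate": 0.05,
}
breaches = off_ramp_check(weekly_metrics)
if breaches:
    print("Mandatory review triggered:", breaches)
```

What matters is that the thresholds exist, are written down before launch, and name who acts when they are crossed; the specific numbers will differ by system and sector.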

Beyond checking boxes

The boards navigating AI successfully aren't those with the most comprehensive policies. They're the ones that embed accountability into decision-making, make governance simple enough to use under pressure, and recognize that responsible AI isn't about preventing innovation but about sustaining it.

We've operationalized this with clients who now treat AI governance as a strategic capability, not a compliance burden. Their competitive advantage isn't moving fastest on AI adoption; it's moving sustainably, building systems designed to earn trust over time.

The question for boards isn't whether to govern AI responsibly. It's whether you're governing it at all, or just documenting your intention to. Stakeholders are learning to tell the difference.


Citations:

https://oecd.ai/en/ai-principles

https://sloanreview.mit.edu/article/ai-related-risks-test-the-limits-of-organizational-risk-management/

https://www.researchgate.net/publication/388928423_The_Role_of_Corporate_Boards_in_Ensuring_Regulatory_Compliance_Best_Practices_and_Challenges

The article was published in IT Voice Media on 29 October 2025.
