AI adoption in the public sector: building governance and trust at scale

Artificial intelligence is rapidly becoming part of the public sector technology landscape. From automation and information discovery to service delivery and operational efficiency, organisations are increasingly exploring how AI can support transformation programmes and improve outcomes.

However, as adoption accelerates, governance frameworks are struggling to keep pace.

This was a central theme during a recent Forvis Mazars roundtable attended by public sector data protection professionals from across Ireland. Discussions focused on how organisations are balancing innovation with accountability, how governance structures are evolving and what practical challenges remain as AI adoption expands.

AI adoption is already underway

One of the clearest insights from the discussion was that AI deployment is already happening in many organisations, whether formally recognised or not.

Participants described a growing “shadow AI” environment where:

  • Vendors are embedding AI capabilities into existing products.
  • Staff are experimenting with publicly available AI tools.
  • Business units are piloting AI-enabled solutions outside established governance structures.

In many cases, these developments occur faster than organisations can update policies, governance frameworks or oversight processes.

Participants noted that privacy teams are frequently becoming aware of AI usage after tools have already been trialled or implemented, rather than during the planning stage.

Governance structures remain immature

While there is strong demand for AI adoption across the public sector, many organisations acknowledged that governance maturity is still developing.

Participants identified several common challenges:

  • Unclear ownership of AI systems.
  • Overlapping responsibilities between data protection officers (DPOs), IT, information security and operational teams.
  • Limited AI expertise across organisations.
  • Inconsistent governance processes.
  • Legacy systems and fragmented data environments.

There was strong agreement that organisations need clearer operational models to support responsible AI deployment. This includes defining accountability, establishing governance forums, clarifying escalation paths and ensuring appropriate oversight mechanisms are in place.

Participants also stressed that governance should not become a barrier to innovation. Instead, effective governance should enable organisations to adopt AI confidently while managing risks appropriately.

Regulation is expanding quickly

The regulatory environment surrounding AI is becoming significantly more complex.

In addition to existing GDPR obligations, public sector organisations must now prepare for requirements introduced through the EU AI Act and wider digital regulation. Participants noted that these overlapping frameworks are creating new compliance challenges, particularly around:

  • Transparency obligations.
  • Impact assessments.
  • Accountability requirements.
  • Record keeping.
  • Human oversight.
  • Fundamental rights considerations.

Many organisations are now managing multiple forms of assessment simultaneously, including data protection impact assessments (DPIAs), AI impact assessments and fundamental rights impact assessments.

Participants highlighted the importance of creating a coherent, integrated approach rather than treating these assessments as separate compliance exercises.

Data quality and legacy systems remain major barriers

Several participants noted that AI adoption is heavily dependent on data quality and governance maturity.

Poorly structured information, fragmented systems, and outdated records limit the effectiveness of AI tools and increase governance risks. Legacy technology environments also make implementation more complex, particularly where organisations lack clear visibility over existing data holdings.

As one participant summarised during the discussion: “poor data results in poor AI outcomes.”

This reinforces the growing importance of broader data governance initiatives alongside AI adoption programmes.

Building trust alongside innovation

A recurring theme throughout the roundtable was the importance of maintaining public trust.

Public sector organisations operate in highly scrutinised environments where transparency, fairness and accountability are critical. Participants agreed that AI governance must therefore extend beyond technical implementation to include ethical considerations, explainability, oversight and clear communication with the public.

The organisations most likely to succeed will be those that:

  • Involve governance functions early.
  • Establish clear accountability structures.
  • Invest in AI literacy and training.
  • Strengthen data governance foundations.
  • Maintain meaningful human oversight.

AI adoption across the public sector is likely to accelerate significantly over the coming years. The challenge now is ensuring governance and trust develop at the same pace.

Contact