The evolving role of the DPO in AI governance
As artificial intelligence becomes more deeply embedded across public sector organisations, the role of the Data Protection Officer (DPO) is evolving rapidly.
Once primarily focused on GDPR compliance, privacy notices and breach management, DPOs are now increasingly expected to contribute to broader discussions around AI governance, risk management, transparency and accountability.
This shift was a central theme during a recent Forvis Mazars roundtable with public sector data protection professionals in Dublin. Participants described how AI initiatives are accelerating across organisations, often driven by wider digital transformation programmes, vendor-enabled AI functionality and growing organisational pressure to improve efficiency and service delivery.
However, while adoption is accelerating, governance structures are not always keeping pace.
Participants consistently highlighted the growing demands being placed on privacy teams. DPOs are increasingly asked to review AI-enabled projects, contribute to governance forums, support impact assessments, advise on transparency obligations and help organisations interpret emerging regulatory requirements under the EU AI Act and related legislation.
In many cases, DPOs are becoming the default point of contact for AI-related concerns simply because AI systems are heavily data-driven and often involve personal data processing.
While participants recognised that DPOs possess many of the skills needed to support responsible AI deployment, there was also concern that some organisations are relying too heavily on privacy teams without establishing clear ownership structures for AI governance.
One message recurred throughout the discussion: DPOs should support and challenge AI deployment, but they should not become operational owners of AI systems.
Maintaining DPO independence emerged as one of the most important governance considerations.
Participants noted that as organisations establish AI steering groups and governance committees, DPOs are increasingly invited to participate. While this involvement is valuable, organisations must ensure that oversight responsibilities do not become blurred.
Implementation responsibility typically sits with IT, digital transformation teams or operational business functions, supported by information security teams for technical controls. The DPO’s role remains advisory and oversight-focused, helping organisations identify risks, assess compliance obligations and embed accountability into decision-making processes.
One of the strongest points of consensus from the roundtable was that DPOs are often involved too late in AI initiatives.
Participants described scenarios where technologies had already been selected, piloted or even deployed before privacy teams were consulted. This creates unnecessary risk and often results in delayed remediation work, additional costs or governance gaps.
Early engagement enables organisations to identify privacy and compliance risks before technologies are selected or deployed, complete impact assessments while design choices can still be influenced, embed accountability and transparency obligations from the outset, and avoid the delayed remediation work, additional costs and governance gaps that late consultation creates.
Participants emphasised that this challenge is not unique to AI projects, but the pace and complexity of AI adoption make early involvement even more critical.
While AI has existed for decades, many participants acknowledged that organisations are still developing the practical skills needed to govern it effectively.
DPOs often feel expected to assess complex AI-related risks without having the same technical understanding they may possess for more traditional technologies. Participants highlighted the need for practical, plain-language training that bridges legal, operational and technical perspectives.
There was also strong agreement that AI literacy cannot sit solely within privacy teams. IT, information security, procurement, legal and operational teams all need a shared understanding of how AI systems function, where risks arise and how governance responsibilities interact.
The discussions highlighted that the DPO role will continue to evolve as AI adoption accelerates across the public sector.
This evolution presents both challenges and opportunities. Organisations that establish clear governance structures, preserve DPO independence, invest in cross-functional skills and involve privacy expertise early in AI initiatives will be better positioned to deploy AI responsibly while maintaining public trust.
For many organisations, the immediate priority is not simply adopting AI tools, but building the governance foundations needed to support them safely and effectively.