Who is going to regulate AI in Ireland?

The AI Act allows for varying types of regulators, including market surveillance authorities, fundamental rights authorities and coordinating bodies.

This may result in a complex regulatory ecosystem, with a single organisation subject to oversight and scrutiny from multiple bodies.

Those appointed in Ireland include:

Fundamental rights authorities:

  • An Coimisiún Toghcháin
  • Coimisiún na Meán
  • Data Protection Commission
  • Environmental Protection Agency 
  • Financial Services and Pensions Ombudsman
  • Irish Human Rights and Equality Commission
  • Ombudsman
  • Ombudsman for Children’s Office
  • Ombudsman for the Defence Forces

Market surveillance authorities:

  • Central Bank of Ireland
  • Commission for Communications Regulation
  • Commission for Railway Regulation
  • Competition and Consumer Protection Commission
  • Data Protection Commission
  • Health and Safety Authority
  • Health Products Regulatory Authority 
  • Marine Survey Office

These will be supported by a coordinating body, the AI Office, which will act as a central point of contact in Ireland.

What is the difference between a market surveillance authority and a fundamental rights body?

Market surveillance authorities act as post‑market regulatory enforcers, policing the compliance of AI systems placed on the EU market (including high‑risk systems). They investigate, order corrective action or withdrawal, and sanction non‑compliance. They address the technical aspects of complying with the Act and identify the misuse of AI systems.

Fundamental rights bodies are public authorities tasked with protecting people’s rights affected by high‑risk AI. They can obtain documentation and ask market surveillance authorities to test systems where rights risks are suspected. They are the place to go for people who believe they have been, for example, discriminated against through the use of AI.

The differences can be summarised as follows:

Primary focus:
  • Market surveillance authority (MSA): compliance with the technical and regulatory requirements of the AI Act
  • Fundamental rights body (FRB): protection of fundamental rights affected by high‑risk AI

Scope:
  • MSA: all AI systems (especially high‑risk and prohibited systems)
  • FRB: high‑risk AI systems impacting rights (Annex III)

Investigative powers:
  • MSA: broad enforcement powers, including inspections, access to documentation and source code, and penalties
  • FRB: access to documentation; may request system testing via an MSA

Trigger for action:
  • MSA: compliance concerns, market surveillance findings, complaints
  • FRB: suspected violation of fundamental rights

Cooperation requirement:
  • MSA: coordinates with the EU AI Office and other MSAs
  • FRB: must inform MSAs of documentation requests; depends on MSAs for testing

Authority type:
  • MSA: technical/economic regulator
  • FRB: rights‑protection authority (e.g. an equality body or human rights institution)

What is happening now?

Although the major obligations under the AI Act are now not expected to be enforced until December 2027, some of the bodies listed above are already regulating the use of AI under existing legislation. For example, the DPC has launched investigations into some of the major AI providers, including xAI (creator of Grok) and OpenAI (ChatGPT), over how they process personal data. The HPRA is already carrying out its function as a market surveillance authority under the MDR and the IVDR where devices include AI as a component, and especially where products are designated as Software as a Medical Device.

What should organisations do to prepare for AI regulation?

Understand where you sit under the AI Act by identifying your operator role and the risk classifications of the systems you intend to use. Establish a governance structure (standalone or built on existing fora) that enables monitoring of AI use and the ongoing identification of the operator role and risk classification of every new AI system. Note, however, that compliance is only part of the AI story. An organisation may be in full compliance with all applicable laws yet still carry high risk through its use of AI. AI must be governed properly in order to protect the business from risk, and also to ensure the proper identification of opportunities and the allocation of resources to worthy AI projects aligned with the organisation's mission and values.

So who is going to regulate AI in Ireland?

While Ireland will have a broad network of regulators overseeing AI, every organisation remains responsible for how it develops, deploys and governs AI systems. Businesses must understand their operator roles, assess the risk level of the systems they use and maintain strong internal governance. Regulation provides the framework, but it is organisations themselves that must ensure AI is used safely, responsibly and in line with their mission and values.

AI regulation in Ireland will be shared across a network of market surveillance authorities, fundamental rights bodies and a central coordinating office. Each plays a different role, from enforcing technical compliance to protecting people’s rights. Together, they form Ireland’s oversight framework for the safe and responsible use of AI under the AI Act.

Contact