AI & GenAI

End of the era of technology self-regulation

The era of technology self-regulation is well and truly over in Europe; it is being replaced by regulation at EU and national level across the bloc.

Policymakers and senior executives are increasingly aware both of the enormous benefits and opportunities that technology brings and of the threats its use and misuse can pose to safety, democracy, freedom of expression, electoral processes, critical infrastructure and services, and to children and more vulnerable members of society.

Over recent years, technology regulation has been introduced in the areas of digital and online safety, financial services and markets, AI and cyber security.

Digital and online safety
  • The EU Digital Services Act (DSA) and the DSA Codes of Practice.
  • The EU Terrorist Content Online Regulation.
  • The EU Digital Markets Act (DMA).
  • The Online Safety and Media Regulation Act 2022.
Financial services/markets
  • EU PSD2 (soon PSD3) and Strong Customer Authentication (SCA).
  • The EU Digital Operational Resilience Act (DORA).
  • EIOPA guidelines on ICT.
  • EU AML regulations.
  • The EU Markets in Crypto-Assets Regulation (MiCAR).
  • Central Bank guidelines.
Artificial intelligence
  • The EU AI Act.
Cyber security
  • The NIS and NIS2 Directives.
  • Cyber security baseline standards.

Organisations and individuals have embraced the use of Artificial Intelligence (AI) and Generative AI (GenAI) and their application is changing the structure and dynamics of industries and companies around the world. The capabilities of machine learning and automation are introducing a new era of efficiency and empowerment in many businesses.

The transformational power of AI and GenAI is clearly established, but understanding and managing the risks these technologies bring is becoming an increasing business priority. A lack of transparency, questions over data usage, intellectual property and copyright breaches, weak AI and model governance, data privacy and integrity issues, model bias and hallucination can all undermine trust, impact user adoption and limit benefit realisation.

EU AI Act

The EU AI Act seeks to address some of these issues and provide a framework within which the risks can be managed.

The AI Act is a European regulation and the first comprehensive regulation of AI by a major regulator anywhere. The Act assigns applications of AI to risk categories, creating obligations for providers and users that depend on the level of risk (an illustrative sketch of how these tiers might be recorded follows the list below):

  • Unacceptable risk - These AI systems are prohibited, and the use of such a system will incur maximum penalties. They include systems that impair an individual's ability to make an informed decision through dark patterns or subliminal techniques, systems that exploit someone's vulnerabilities, and applications such as government-run social scoring of the type used in China.
  • High risk - These systems are listed in the Act; many relate to the public sector and law enforcement, but they also include AI systems used in employment, recruitment and credit analysis. High-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements, and parties engaging with these systems are subject to a wide range of obligations.
  • Limited risk - Limited risk includes AI systems with a risk of manipulation or deceit. These AI systems must be transparent, and humans must be informed about their interaction with the AI.
  • Minimal risk - Minimal risk includes all other AI systems not falling under the above categories.
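
As a purely illustrative sketch (not part of the Act, and no substitute for legal assessment), the following Python snippet shows how an organisation might record an internal AI system inventory and tag each entry with an assumed risk tier ahead of a gap analysis. The system names, the RiskTier enum and the AISystem record are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the four AI Act risk categories."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited, e.g. subliminal manipulation, social scoring
    HIGH = "high risk"                  # e.g. recruitment/CV screening, credit analysis
    LIMITED = "limited risk"            # transparency obligations: inform the user
    MINIMAL = "minimal risk"            # all other systems


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


# Hypothetical internal inventory, tagged ahead of an AI Act gap analysis.
inventory = [
    AISystem("cv-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answers customer queries", RiskTier.LIMITED),
    AISystem("spam-filter", "flags unwanted email", RiskTier.MINIMAL),
]

# Surface the systems that carry the heaviest obligations under the Act.
for system in inventory:
    if system.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"Review required: {system.name} ({system.tier.value}) - {system.purpose}")
```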

The AI Act establishes a common regulatory and legal framework for AI within the EU. It came into force on 1 August 2024, and its provisions will come into operation gradually over the 36 months after that date.

As industries and businesses adopt AI/GenAI, they need to be able to navigate their obligations under the AI Act whilst harnessing the transformative power that these technologies can bring.

AI at Forvis Mazars

AI presents opportunities for organisations to transform how they do business, enable economies of scale, reach new markets, reduce costs, and reap a variety of other benefits. Organisations wanting to unlock these benefits should do so in a manner that manages the associated risks.

Our team takes an ethical and responsible approach to the development of AI frameworks to ensure that the adoption of AI has a long-lasting positive impact on both business and society.

We take a principles-based approach that aligns with the principles agreed in the AI Act:

  1. Human agency and oversight.
  2. Technical robustness and safety.
  3. Privacy and data governance.
  4. Transparency.
  5. Diversity, non-discrimination and fairness.
  6. Social and environmental wellbeing.

Our services

We support our clients across a range of AI/GenAI-related assurance, risk and advisory services, as follows.

Assurance
  • Digital trust (DMA, DSA, Codes)
  • Systemic risk assessment
  • Algorithmic auditing
  • Cyber security AI audit
  • LLM pen testing (red teaming)
  • Third party assurance
Contact: Stephanie Dossou or Dera McLoughlin

Risk & compliance
  • AI Act gap analysis
  • AI Act implementation
  • AI operational risk assessment
  • AI risk training (3rd line)
  • Cyber security AI risk assessment
Contact: Liam McKenna or David O'Sullivan

Advisory
  • AI strategy
  • AI policy development
  • AI as part of business strategy/transformation/IT strategy
  • Cyber security AI advisory
Contact: Stephanie Dossou or Dera McLoughlin