As generative AI (GenAI) becomes more embedded in the modern workplace, we have seen a growing recognition of its potential – paired with the reality that many businesses are still navigating how to integrate it safely, ethically and effectively.
David O’Sullivan, director in our consulting team, observes that GenAI is already being used in many roles, across sectors including public services, healthcare and financial services. “Many are using GenAI for various aspects of their role and at varying levels. Some roles are more appropriate for GenAI than others, but it can have its place nearly everywhere.” He goes on to add: “The cat is out of the bag in many ways, so despite policy or lack thereof, many of the workforce are using GenAI anyway, known as ‘shadow AI’.”
That gap between technology usage and policy represents both a risk and an opportunity. O’Sullivan says successful implementation of GenAI hinges on strong governance, effective training and a clear strategy. “As GenAI as well as AI agents become more common, we’ll see changes in how workflows are designed to better leverage these tools. But this needs to be done with transparency and oversight.” A proper strategy that encompasses every aspect of AI, including the human one, is essential for success.
Forvis Mazars is seeing high-impact use cases where GenAI is deployed to solve specific problems, from data analysis and content creation to developing strategic frameworks. “Used properly, GenAI can free people up to focus on more value-adding, meaningful work,” he adds. Real thought and consideration need to be given to the use cases and the problem being solved; this allows for a better and quicker return on investment.
However, O’Sullivan warns that AI adoption must align with both legal requirements and corporate values. “A strong policy, governance framework and training are essential. Done right, this supports accountability, compliance and alignment with company goals.” He points to the EU AI Act’s new training requirements, which mandate AI literacy for organisations – a compliance burden, yes, but also an opportunity to enhance internal capability and reduce misuse.
Liam McKenna, a partner in the consulting practice at Forvis Mazars, echoes the need for realism. “Many organisations try to put in place a best-efforts policy without fully understanding its impact on them. We see organisations putting in place AI policies which are aspirational initially, but then a use case arises where they can make a business impact, and they start to question the policy. Everyone has a policy until a money-making opportunity comes along.” A policy is needed, but for it to be effective it should be married to a well-thought-through strategy, one that allows flexibility.
McKenna also places GenAI adoption in a historical context. “Previous tech shifts, like the PC or the internet, took years to become commercially valuable. The GenAI cycle is following a similar trajectory—but it’s happening faster. We expect to see major shifts in business models within three to five years.”
Forvis Mazars is already witnessing AI’s tangible impact across sectors. In healthcare, for example, McKenna notes use cases where large language models help medical professionals reduce administrative workload, allowing more time with patients, which is where the real-world impact of AI will be seen. However, he stresses the importance of distinguishing between tasks and roles: “Where the tasks are not fundamental to the primary role, there’s an obvious case for deploying AI to augment a person’s role, and that’s brilliant. Where the role is something like software development, however, you won’t need as many junior developers.”
In professional services, O’Sullivan sees a similar pattern. “With the right model, refined to the right level for your specific task, they can be used to help create strategies and they can provide really deep analysis. However, someone needs to be responsible for advising clients on those strategies, and AI isn’t going to have the level of experience that a senior or strategic-level accountant has.” It is akin to having another voice in the room to offer a broad perspective, but caution must be applied to the outputs; everything must be owned by a human.
Ultimately, both McKenna and O’Sullivan agree that responsible AI isn’t just about the tools—it’s about culture, people and purpose. Businesses must build trust at every stage of the AI journey. That means setting clear boundaries, upskilling staff and staying focused on the bigger picture: delivering value in ways that are ethical, transparent and human-centric.