Laying the right foundations for AI transformation

AI increasingly occupies a central role in digital transformation initiatives. For many companies, however, large-scale deployments are not yet a reality. Some are preparing to launch AI transformation projects. Others are adjusting their approach after initial experiments generated mixed results. How should organisations approach the opportunities created by a transformative technology?

AI is changing the way that businesses think about digital transformation. For example, half of C-Suite barometer 2026 respondents believe that AI has become a leading determinant of successful digital transformation. For the vast majority of businesses pursuing transformation, AI is either a priority investment (48%) or is receiving investment as part of their strategy (41%).

Numbers like these suggest that AI adoption is near-universal. In reality, however, progress is more variable. In late 2025, for example, industry analysts noticed signs of an apparent reset in corporate AI investment. Many projects were failing to progress beyond proof of concept. Poor-quality data was fouling up AI systems and ageing IT infrastructure was struggling under significantly increased workloads. In addition, demonstrating a return on AI investments became a challenge for many.


“Successful AI transformation depends on three key requirements. The first is data governance, which puts you in control of the raw material. Success also depends on organisation: how you combine centralised control of data and infrastructure with distributed understanding of pain points and business processes. Finally, value creation means scalability. You have to be able to press a button and scale the solution once you know it works.”

Jakob Haesler Group Consulting Leader, Forvis Mazars Group

The passage of time allows us to see this episode for what it was: part of a learning process that is enabling companies to adopt an increasingly mature approach to AI deployment. Corporate investment in AI has continued on its upward trend. The ‘reset’ was limited to companies who realised that their early enthusiasm needed to be tempered with a more systematic approach to preparing for AI deployment.

As successive waves of companies deploy AI, more will find themselves periodically pulling back to reassess their options and refocus on fundamentals. To avoid this fate, we recommend that organisations planning to deploy AI engage in sustained efforts to improve readiness in the following key areas: data governance, technology infrastructure, workforce preparation and disciplined use case identification. Doing so will minimise the opportunity costs of an AI reset and the associated risks of disillusion.

Responding to the challenge of AI-ready data

Multiple studies have confirmed that poor-quality data is one of the most frequent causes of AI project failure. Those who have witnessed this problem, like Michael Fried, Partner at Forvis Mazars US, often note how companies diagnose the challenge in similar ways: “We moved ahead rapidly to build a great AI solution, but then had to go backwards to correct the data before we could start training our solution.”

“Everyone wants to jump to the solution. Everyone wants to discuss how to leverage AI. No-one wants to do the boring thing, which involves taking a break and cleaning up your data. Yet in nine out of 10 of the conversations we have with companies, they will admit that they have a data problem and they need to fix it.”

Michael Fried Partner, Business and Technology Consulting, Forvis Mazars US

Lack of sufficient data to train AI models is a common challenge. So is ‘noisy’ or ‘dirty’ data that contains missing data points, typos and inconsistent date formats. In the absence of metadata (‘data about data’) attached to specific content and datasets, training Large Language Models (LLMs) can take longer and become more expensive.
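The kind of 'dirty' data described above can often be tackled with straightforward automated cleaning before any model training begins. The sketch below is purely illustrative: the records, field names and date formats are invented for the example, not drawn from any real system. It normalises inconsistent date formats and logs missing values rather than silently discarding them.

```python
from datetime import datetime

# Hypothetical raw records showing the issues described above:
# missing values and inconsistent date formats.
RAW_ROWS = [
    {"customer": "Acme Ltd", "order_date": "2026-01-15", "amount": "120.50"},
    {"customer": "Acme Ltd", "order_date": "15/01/2026", "amount": ""},
    {"customer": "", "order_date": "Jan 15 2026", "amount": "98.00"},
]

DATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%b %d %Y"]

def normalise_date(value):
    """Try each known format; return an ISO date, or None if unparseable."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    return None

def clean(rows):
    """Return (clean_rows, issues) so data problems are logged, not hidden."""
    clean_rows, issues = [], []
    for i, row in enumerate(rows):
        fixed = dict(row)
        fixed["order_date"] = normalise_date(row["order_date"])
        if fixed["order_date"] is None:
            issues.append((i, "unparseable date"))
        for field in ("customer", "amount"):
            if not row[field]:
                fixed[field] = None
                issues.append((i, f"missing {field}"))
        clean_rows.append(fixed)
    return clean_rows, issues

rows, issues = clean(RAW_ROWS)
print(rows[1]["order_date"])  # all three date styles normalise to "2026-01-15"
print(issues)
```

Logging issues alongside the cleaned output matters: the issue list is exactly the kind of metadata ('data about data') that makes later model training faster and cheaper to audit.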

Because AI systems are designed to be used continuously, data quality is not a one-off concern. The software that feeds data to AI systems through a company’s network also needs to be able to clean, label and transform that data on a continuous basis. Another key risk that requires continuous monitoring after deployment is model drift. The term describes how AI systems sometimes start to produce unexpected or unreliable results. Often, the causes involve data inputs that vary from those on which the system was trained. These variations can be caused by events in the real world, like seasonal changes in customer behaviour that an AI model has not witnessed previously. Alternatively, the cause can lie with a change in the data source, for example as a result of a sensor being replaced or re-calibrated on a factory floor.
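One common way to monitor for the drift described above is to compare the distribution of a live input feature against the distribution the model was trained on. The sketch below uses the Population Stability Index (PSI), a standard drift metric; the feature values and the 0.2 alert threshold are illustrative assumptions (0.2 is a widely used rule of thumb, not a universal standard).

```python
import math

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index between two samples of one feature.
    Values above ~0.2 are a common rule-of-thumb trigger for review."""
    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # A small floor avoids division by zero for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical feature: the model was trained on values around 0.3,
# but live traffic has shifted towards 0.7 (e.g. a seasonal change).
training     = [0.25 + 0.01 * (i % 10) for i in range(1000)]
live_ok      = [0.26 + 0.01 * (i % 10) for i in range(1000)]
live_drifted = [0.65 + 0.01 * (i % 10) for i in range(1000)]

print(psi(training, live_ok) < 0.2)       # stable: no alert
print(psi(training, live_drifted) > 0.2)  # drifted: trigger a review
```

In production, a check like this would run on a schedule against each model input, raising an alert for investigation rather than silently retraining.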

The ideal starting point for an AI project involves access to well-governed data. However, this is a relative rarity. On a project-by-project basis, data cleaning and automated metadata insertion can remedy challenges with specific datasets. In the long term, however, successful large-scale deployment requires organisations to improve their approach to data governance in a systematic fashion.

Investing in the technology required by new workloads

Like a supercar navigating a bumpy dirt road, AI systems struggle to cope with ageing technology infrastructure. An existing IT system that takes multiple seconds to respond to requests is a poor match for the much faster responses of an AI system. Access to data is another challenge. Legacy data systems, for example, frequently trap datasets within isolated silos that cannot communicate in real time with high-velocity AI systems. Rectifying this frequently involves not just cleaning the data, but extracting it and migrating it to a secure, modern data environment.

Most companies pursuing AI projects will adopt a mix of on-premises infrastructure (for data privacy and rapid processing) and public cloud (for model training and scalability). In addition, many will need to explore running AI algorithms in a decentralised way at the edge of big networks. Use cases in manufacturing, logistics and healthcare that involve real-time decision making or remote and disconnected locations are primary candidates for this kind of edge computing capacity.

“The experience of deploying advanced use cases like agentic AI is often an eye-opening experience for businesses. Many have found that their technology infrastructure is not well positioned for large-scale AI usage. So they have repurposed some of their investment, using it to modernise legacy systems. They are not pulling back. They are simply addressing the foundational requirements for AI.”

Jared Forman Principal, Forvis Mazars US

New technologies attract new security threats. AI is no exception. In particular, organisations need to guard against data leaks and adversarial threats designed to modify data inputs or extract sensitive data. Cybersecurity monitoring needs to occur in real time rather than periodically because cyber attacks powered by AI can cause damage in seconds, while traditional detection systems can take minutes or even hours to detect threats. In addition, organisations need to understand the risks created by each piece of AI software and define acceptable uses accordingly.

Building a culture of AI literacy

Among employees, AI typically evokes a mixture of optimism and curiosity alongside undercurrents of anxiety. According to a recent UK study, many would embrace third-party AI chatbots more enthusiastically if they received better training. Around half report an absence of AI guidelines from their employer, a finding that roughly aligns with other sources. It is worth noting that attitudes to AI often differ radically depending on age, gender, job role, industry and geographical location. In the context of in-house AI deployment, training and change management continue to receive less attention than they should. Only 16% of senior business leaders surveyed for C-suite barometer 2026 described these areas as priorities for investment within their digital transformation strategy.

“If your employer wants to move ahead with AI but employees are scared to even tell you they are using AI for fear of losing their job, you have a problem.”

Michael Fried Partner, Business and Technology Consulting, Forvis Mazars US

Employers need to address the human factor because employees with superior AI literacy work more productively and communicate more effectively. They are also more likely to adopt, rather than resist, digital transformation initiatives that introduce AI into their workflow. To encourage safe and productive adoption of AI, we offer companies the following recommendations.

  • Publish acceptable use guidelines for third-party AI tools. Without them, employees will either refuse to use AI or conceal their usage, potentially creating hidden security risks. In both scenarios, employers lose out.
  •  Communicate your compliance and risk procedures to employees. Be open about the datasets used to train AI models. Platforms surrounded by transparent explanations are more likely to be trusted and therefore used. 
  • Centralised approaches to transformation often miss vital details relating to workflow, creating the risk of sub-optimal business process redesign. For most organisations, the best approach is a hub-and-spoke model that combines centralised resources and governance (the hub) with decentralised execution (the spokes).
  •  Creating a workplace in which AI is regarded as a collaborative tool rather than a threat or irrelevance will pay dividends in the long term. Empower employees to become experts by offering training, recognise their skills and expand participation in transformation initiatives as widely as possible.

Mapping the right use cases

No industry-standard list of use cases exists to help you identify which AI projects to pursue. What your competitors are doing is not necessarily as relevant to your situation as you might think. Instead, the use case scenarios you select for AI deployment need to be squarely based on two considerations: the urgency of specific challenges your company faces and the improvements that AI can deliver.


“Calculating the potential ROI of large-scale AI-driven transformation is not straightforward, especially when it comes to capturing the second-order benefits of risk reduction and innovation velocity. A lot of companies are simply moving fast without a robust ROI framework. They lack the means to measure success when a pilot is complete or the project goes into production. Often, these companies end up questioning the benefits of their investment in retrospect.”

Jared Forman Principal, Forvis Mazars US

 For example, a company that has recently implemented new Accounts Payable (AP) technology may only generate a modest return by pursuing AI as an additional upgrade. By contrast, a rival company, similar in all other respects, that has not recently upgraded its AP system may benefit significantly by deploying AI. To identify your priorities, follow a disciplined process for identifying use cases that combines the following approaches:

  • Focus on high levels of alignment with strategic goals and clear value creation opportunities in key operational areas (e.g. revenue generation, operational efficiency, customer experience).
  • Break down departmental workflows to identify significant bottlenecks and pain points where AI is likely to deliver the largest productivity gains. Simplifying processes that have become convoluted over time typically results in better long-run outcomes when AI is deployed.
  • Prioritise use case scenarios by weighing business impact against technical feasibility (e.g. complexity, skill requirements, availability of data, technology upgrades required).
  •  Consider ROI calculations carefully. What will success look like? Assessing productivity gains in the case of knowledge workers equipped with AI systems is notoriously challenging, for example. Well-defined use case scenarios generally yield more reliable ROI calculations that rely upon hard metrics.
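The prioritisation step above can be sketched as a simple weighted scoring matrix. The candidate use cases, ratings and weights below are illustrative assumptions, not recommendations; in practice, each axis would be scored by the relevant business and technical stakeholders.

```python
# Hypothetical candidate use cases, scored 1-5 on the two axes above.
CANDIDATES = {
    "Invoice matching in AP":       {"impact": 4, "feasibility": 5},
    "Demand forecasting":           {"impact": 5, "feasibility": 2},
    "Customer service chat triage": {"impact": 3, "feasibility": 4},
}

# Illustrative weights: business impact counts slightly more than feasibility.
WEIGHTS = {"impact": 0.6, "feasibility": 0.4}

def score(use_case):
    """Weighted sum of the ratings for one use case."""
    return sum(WEIGHTS[axis] * rating for axis, rating in use_case.items())

# Rank the candidates from highest to lowest weighted score.
ranked = sorted(CANDIDATES, key=lambda name: score(CANDIDATES[name]), reverse=True)
for name in ranked:
    print(f"{score(CANDIDATES[name]):.1f}  {name}")
```

Note how the ranking rewards the quick win: the high-feasibility, solid-impact project outranks the higher-impact but harder one, which matches the advice that follows.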

In the early stages of AI development, target quick wins that involve low complexity and generate high levels of value. Once you have demonstrated success, trust will follow, alongside the ability to support more complex deployments.

Preparing for what’s next

AI is a transformative technology, but like most innovations, it relies on an ecosystem. Success requires specialist skills and new approaches to governance and security. AI places significant demands on supporting technologies and the data that it consumes. Business processes need to be painstakingly redesigned. In addition, companies that encourage AI fluency and data literacy ahead of deployment will find that the path to adoption unfolds more smoothly.

The ‘reset’ of late 2025 turned out to be a familiar story. Amid excitement in the equity markets, valuations moved up the curve with remarkable speed. Many companies started to feel a need to match this apparently rapid pace of change. However, some discovered that the process of innovation can be far from easy.

“Today, the number of solutions out there is remarkable. But consolidation is coming in the AI software market. It will take a few years, but it will happen. When it does, enterprise AI platforms will start to emerge. This will make it easier for organisations to say: ‘That is our platform’.”

Michael Fried Partner, Forvis Mazars US

Enterprise innovation has its own dynamic. It is typically a marathon rather than a sprint. Strategies will vary. If early mover advantage is important, particularly in sectors being disrupted by AI, companies will adopt the technology aggressively. Others will largely resist the need to design and build their own solutions for now. Some will wait for major technology vendors to incorporate AI into their offerings. Many will continue to prepare the ground for AI, waiting for the market to consolidate and proven solutions to emerge. An even more cautious ‘wait-and-see’ approach will benefit those facing less competitive pressure.

The key challenge for business leaders involves identifying the right strategy and creating the space to act at the right moment. For companies prepared to proceed, the time to pursue the AI opportunity is now. For others, the preparations required by enterprise-grade AI deployment should take precedence.

CES insight lens: AI transformation

Each year, the Consumer Electronics Show (CES) attracts 150,000 attendees to giant exhibition halls in Las Vegas to examine thousands of new products and services. CES is a vast shop window for technology and consumer electronics industries. What did this year’s show tell us about emerging trends in consumer AI adoption?

For several years now, CES has acted as a grand stage for the consumer-facing claims of AI. However, from the very first days in Las Vegas this year, a change in tone was apparent. Fewer spectacular announcements, fewer futuristic demos, fewer oversimplified narratives. In their place, vendors were more focused on winning CES Innovation Awards for AI-integrated products across a wide range of domains, from digital health to logistics, from mobility to retail. At this year’s event, AI was not presented as a standalone promise or a miracle product. It had become something to be judged on the basis of user experience, usefulness and robustness.

But this year’s edition of CES signalled the end of a fantasy involving fast, linear, frictionless technological progress. Increasingly, innovation is no longer envisaged as a sprint. It is becoming a balancing act in which deployment must contend with computing costs, energy availability, talent scarcity, industrial dependencies, regulatory frameworks and social acceptability. Differentiation no longer emerges from a single breakthrough technology, but from the ability to orchestrate complex systems, manage critical dependencies and make clear‑eyed trade‑offs between ambition and sustainability.

A system, by definition, is never built alone. As a result, vendors at CES 2026 positioned themselves as the orchestrators of AI ecosystems. The role of consumers is changing too: in purchasing AI products and services, they do not purchase a finished product. Instead, they buy into living ecosystems.

In many ways, CES 2026 was a coming-of-age event for the AI sector. In a changing world, the hyperbole of the past was visible in the rear-view mirror. The challenge is no longer to convince, but to deliver. For many enterprises implementing AI transformation, the feeling will be distinctly familiar.

Our expert