MIT recently published a study showing that 95% of AI initiatives fail to achieve ROI (State of AI in Business 2025, MIT NANDA).
The findings themselves were unsurprising. What stood out was how many technology leaders and executives had expected AI to act as a magic bullet, a shortcut to transformation rather than a discipline that still requires execution.
To bridge this disconnect, I am writing a short series of articles that connect three perspectives:
First, let’s agree on what “failure” actually means:
In my experience, a successful AI initiative must meet all three of the following criteria. This definition aligns closely with how MIT evaluates success in its study:
That’s it. Nothing more. Nothing less.
By this definition, 95% of AI initiatives fail, based on MIT’s analysis of roughly 300 enterprise programs.
Interestingly, MIT also reports that initiatives that do succeed generate, on average, $1.2M in annual benefit within 6 to 12 months of launch.
The prize is very real. The capture rate is not.
Let’s find out why.
Failure pattern
Organizations attempt to redesign and automate large portions of the enterprise in a single, sweeping AI-driven effort. The initiative is framed as a broad “AI transformation” that will modernize everything at once.
Why it fails
This approach fails because it misunderstands both AI capability and enterprise reality.
Most enterprise workflows are not clean, linear, or well-documented. They are the result of years of incremental changes, exceptions layered on top of exceptions, and human judgment filling gaps that no system ever addressed properly.
AI struggles in environments where:
When organizations try to apply AI across all of this complexity at once, scope explodes, timelines stretch, and nothing reaches production. Teams spend months debating architecture, models, workflows, and governance while the business sees no tangible results.
How to avoid it
AI, at its current level of maturity, works best when applied to narrow, clearly defined workflow segments where rules are reasonably simple and stable.
To achieve results with AI, teams should:
Failure pattern
CTOs rely exclusively on internal IT and engineering teams to design, build, and deploy AI systems.
Why it fails
Internal teams rarely have prior experience taking AI systems all the way to production, so efforts stall in pilots and proofs of concept. The result is prolonged experimentation without any production-ready outcome.
How to avoid it
· Engage an experienced AI integration partner to lead and own the initiative end-to-end, from roadmap planning through production delivery
· Hold the engagement accountable to business outcomes and ROI, not technology adoption or platform rollout
· Treat AI Infusion as an execution program with clear owners, milestones, and economic targets, not an internal research or experimentation effort
MIT’s data shows that initiatives involving external partners are roughly twice as likely to succeed.
Failure pattern
Organizations select an AI product or platform and expect it to optimize enterprise workflows.
Why it fails
The AI product ends up sitting on top of, or alongside, the workflow rather than being deeply embedded in it and driving it.
How to avoid it
AI tools are not business solutions. They are components.
An expert AI Infusion team will do the following:
· Identify specific workflows and domains within your business that are suitable for AI Infusion
· Prioritize them by ROI and define a delivery roadmap
· Propose and orchestrate the appropriate AI tools, APIs, and products (typically far more than one)
· Integrate them so they are seamlessly embedded into your systems and workflows
· Implement processes that ensure the AI keeps learning and improving
Failure pattern
AI investments concentrate on highly visible, client-facing use cases such as chatbots, personalization, and sales enablement.
Why it fails
In practice, 50-70% of these initiatives are attention-grabbers and money-losers, a pattern clearly reflected in the MIT study.
How to avoid it
MIT’s findings and real-world experience both point to back-office AI Infusion as the most reliable source of returns.
Failure pattern
Organizations encourage employees to use ChatGPT or Copilot and consider this their AI strategy.
Why it fails
Individual productivity tools may help employees draft emails and documents faster, but they do not change how work actually flows through the enterprise. Six months later, the organization is no more capable than it was on day one.
How to avoid it
AI Infusion is about systems that learn and execute, not tools that generate drafts for humans.
What’s next
In the next article, I’ll talk about why local LLM deployments often disappoint, why AI-generated code comes with a heavy price, why layering AI on top of broken processes amplifies failure, how misplaced trust in single models creates risk, and where AI should augment, not replace, human interaction.
Read about “AI Infusion” services provided by New Standards Inc – www.newstandardsAI.com