How to Fail with AI - Part 1
February 2, 2026 at 10:00 PM
by Andrey Odintsov

MIT recently published a study showing that 95% of AI initiatives fail to achieve ROI (State of AI in Business 2025, MIT NANDA).

The findings themselves were unsurprising. What stood out was how many technology leaders and executives had expected AI to act as a magic bullet, a shortcut to transformation rather than a discipline that still requires execution.

To bridge this disconnect, I am writing a short series of articles that connect three perspectives:

  • my own experience with AI implementations
  • patterns shared by my colleagues, mostly CTOs and COOs
  • how those real-world observations line up with MIT’s findings

First, let’s agree on what “failure” actually means:

In my experience, a successful AI initiative must meet all three of the following criteria. This definition aligns closely with how MIT evaluates success in its study:

  1. It reaches production, not just a pilot or proof of concept
  2. It pays for itself
  3. It delivers non-trivial, recurring annual benefits, either through cost reduction or incremental revenue

That’s it. Nothing more. Nothing less.

By this definition, 95% of AI initiatives fail, based on MIT’s analysis of roughly 300 enterprise programs.

Interestingly, MIT also reports that initiatives which do succeed generate, on average, $1.2M in annual benefit within 6 to 12 months of launch.

The prize is very real. The capture rate is not.

Let’s find out why.

Pattern #1: “Enterprise AI transformation”

Failure pattern

Organizations attempt to redesign and automate large portions of the enterprise in a single, sweeping AI-driven effort. The initiative is framed as a broad “AI transformation” that will modernize everything at once.

Why it fails

This approach fails because it misunderstands both AI capability and enterprise reality.

Most enterprise workflows are not clean, linear, or well-documented. They are the result of years of incremental changes, exceptions layered on top of exceptions, and human judgment filling gaps that no system ever addressed properly.

AI struggles in environments where:

  • decision logic is implicit rather than explicit
  • policies conflict or are inconsistently applied
  • exceptions are common rather than rare edge cases
  • decisions depend on undocumented tribal knowledge
  • inputs are unstructured, incomplete, or inconsistently labeled
  • outcomes are subjective or lack clear success criteria
  • rules change frequently based on context, customer, or escalation level

When organizations try to apply AI across all of this complexity at once, scope explodes, timelines stretch, and nothing reaches production. Teams spend months debating architecture, models, workflows, and governance while the business sees no tangible results.

How to avoid it

AI, at its current level of maturity, works best when applied to narrow, clearly defined workflow segments where rules are reasonably simple and stable.

To achieve results with AI, teams should:

  • identify a single workflow segment that is high-volume, high-cost, and rules-driven
  • define clear, measurable goals and success criteria
  • use AI to automate the happy path (75-85% of cases)
  • escalate exceptions to humans (a minimal routing sketch follows this list)
  • prove ROI in one constrained segment, then expand incrementally to other workflows
  • repeat iteratively
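
To make the happy-path split concrete, here is a minimal routing sketch in Python. The confidence threshold, the amount guardrail, and the invoice example are illustrative assumptions, not a prescription:

    # Minimal sketch of happy-path routing: the AI handles high-confidence,
    # in-policy cases; everything else escalates to a human queue.
    # The threshold and the amount guardrail are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        label: str         # e.g. "approve" or "reject"
        confidence: float  # model confidence, 0.0-1.0

    CONFIDENCE_THRESHOLD = 0.85  # tune per workflow; not a universal constant

    def route(case: dict, decision: Decision) -> str:
        # Escalate anything the model is unsure about, or anything that
        # falls outside the narrow, rules-driven segment being automated.
        if decision.confidence < CONFIDENCE_THRESHOLD:
            return "human_review"
        if case.get("amount", 0) > 10_000:  # example policy guardrail
            return "human_review"
        return f"auto_{decision.label}"     # the 75-85% happy path

    print(route({"amount": 420}, Decision("approve", 0.93)))  # auto_approve
    print(route({"amount": 420}, Decision("approve", 0.61)))  # human_review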

Pattern #2: Internal-only execution

Failure pattern

CTOs rely exclusively on internal IT and engineering teams to design, build, and deploy AI systems.

Why it fails

  • Internal teams deeply understand the specifics of their business and systems, but often lack real production experience with AI
  • They are forced to learn AI through trial and error in areas where experience matters most
  • Without prior experience, critical decisions around workflow selection, ROI prioritization, tool composition, system integration, and continuous AI learning are made the hard way rather than designed deliberately

The result is prolonged experimentation without a production-ready outcome.

How to avoid it

  • Engage an experienced AI integration partner to lead and own the initiative end-to-end, from roadmap planning through production delivery
  • Hold the engagement accountable to business outcomes and ROI, not technology adoption or platform rollout
  • Treat AI Infusion as an execution program with clear owners, milestones, and economic targets, not an internal research or experimentation effort

MIT’s data shows that initiatives involving external partners are roughly twice as likely to succeed.

Pattern #3: Treating AI as a buying decision

Failure pattern

Organizations select an AI product or platform and expect it to optimize enterprise workflows.

Why it fails

  • At today’s level of AI maturity, AI solutions are best at isolated tasks, not end-to-end workflows
  • No single off-the-shelf AI tool or LLM is perfect for every workflow in your business
  • Some workflows or pieces of workflows are not suitable for AI at all
  • The complexity of integrating an AI product with legacy systems is underestimated

The AI product ends up sitting on top of, or alongside, the workflow rather than being deeply embedded in it and driving it.

How to avoid it

  • Treat AI Infusion as a professional services engagement, not a procurement exercise
  • Start with AI-centric workflow redesign, and only then select tools
  • Engage a competent AI Infusion partner
  • Orchestrate multiple AI solutions as needed, rather than forcing everything through one product

AI tools are not business solutions. They are components.

An expert AI Infusion team will do the following:

  • identify specific workflows and domains within your business that are suitable for AI Infusion
  • prioritize them by ROI and define a delivery roadmap
  • propose and orchestrate appropriate AI tools, APIs, and products, typically far more than one (a minimal orchestration sketch follows this list)
  • integrate them so they are seamlessly embedded into your systems and workflows
  • implement processes that ensure the AI is continuously learning and improving
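
To illustrate the "far more than one" point, here is a minimal orchestration sketch. The step names and stub handlers are illustrative assumptions; in practice each entry would wrap a real vendor API or internal service:

    # Minimal sketch of orchestrating several specialized AI components
    # behind one workflow. Step names and stub handlers are illustrative
    # assumptions; each would wrap a real vendor API or internal service.
    def extract_fields(document: str) -> str:
        return f"[extraction model] fields from: {document[:40]}"

    def classify_request(text: str) -> str:
        return f"[classifier] category for: {text[:40]}"

    def draft_reply(context: str) -> str:
        return f"[LLM] draft grounded in: {context[:40]}"

    # One workflow, several components, no single "AI product"
    PIPELINE = {
        "extract": extract_fields,
        "classify": classify_request,
        "draft": draft_reply,
    }

    def run_step(step: str, payload: str) -> str:
        handler = PIPELINE.get(step)
        if handler is None:
            raise ValueError(f"no AI component registered for step '{step}'")
        return handler(payload)

    print(run_step("classify", "Customer asks to change billing address"))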

Pattern #4: Following hype instead of ROI

Failure pattern

AI investments concentrate on highly visible, client-facing use cases such as chatbots, personalization, and sales enablement.

Why it fails

  • These initiatives are easy to sell internally but hard to tie to durable ROI
  • Gains are often incremental and absorbed by existing cost structures
  • The promise of revenue upside is easier to pitch than cost removal, but cost removal delivers far greater ROI

As a result, 50-70% of these initiatives are attention-grabbers and money-losers, a pattern also clearly reflected in the MIT study.

How to avoid it

  • Prioritize Operations, Back Office, and BPO workflows where ROI is direct and measurable
  • Focus on cost per transaction, cycle time, error rate, and throughput, not on the “wow effect” (a back-of-the-envelope ROI check follows this list)
  • Use early operational wins to fund and justify expansion into other back-office areas
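
For instance, here is a back-of-the-envelope ROI check. Every number below is a hypothetical assumption, not a benchmark:

    # Back-of-the-envelope ROI check for a back-office candidate workflow.
    # All numbers are hypothetical assumptions for illustration.
    annual_volume = 200_000          # transactions per year
    cost_per_transaction = 4.50      # fully loaded human cost, USD
    automation_rate = 0.80           # happy-path share handled by AI
    ai_run_cost_per_transaction = 0.40
    implementation_cost = 250_000    # one-time build and integration

    gross_savings = annual_volume * automation_rate * cost_per_transaction
    run_cost = annual_volume * automation_rate * ai_run_cost_per_transaction
    annual_benefit = gross_savings - run_cost
    payback_months = implementation_cost / (annual_benefit / 12)

    print(f"annual benefit: ${annual_benefit:,.0f}")  # $656,000
    print(f"payback: {payback_months:.1f} months")    # ~4.6 months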

MIT’s findings and real-world experience both point to back-office AI Infusion as the most reliable source of returns.

Pattern #5: Mistaking ChatGPT usage for AI Infusion

Failure pattern

Organizations encourage employees to use ChatGPT or Copilot and consider this their AI strategy.

Why it fails

  • Prompt quality is inconsistent and uncontrolled, and poorly written prompts produce error-ridden results
  • The AI lacks access to enterprise context and systems, so its output can be highly inaccurate and problematic
  • There is no feedback loop and no self-improvement: ChatGPT generates drafts, humans edit them, and the corrections are never fed back into the model, so it never improves

Six months later, the organization is no more capable than it was on day one.

How to avoid it

  • Embed AI directly into operational workflows
  • Connect AI to enterprise systems, data, and decision context
  • Design feedback loops so AI gets better over time (a minimal correction-capture sketch appears below)
  • Let AI handle the common cases and limit human involvement to exceptions and escalation

AI Infusion is about systems that learn and execute, not tools that generate drafts for humans.
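
To picture that feedback loop: every human edit becomes a labeled example the system can later learn from, whether as few-shot prompt examples, eval cases, or fine-tuning data. The file path, function names, and JSONL format below are illustrative assumptions:

    # Minimal sketch of a correction-capture loop: the AI output and the
    # human's final version are stored as a labeled pair, giving the system
    # material to improve from. Path, names, and format are assumptions.
    import json
    from pathlib import Path

    FEEDBACK_LOG = Path("corrections.jsonl")

    def record_correction(case_id: str, ai_output: str, human_final: str) -> None:
        # Only log cases where a human actually changed something.
        if ai_output == human_final:
            return
        entry = {"case_id": case_id, "ai": ai_output, "human": human_final}
        with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def load_training_pairs() -> list[dict]:
        # Periodically pulled into prompt examples, evals, or fine-tuning.
        if not FEEDBACK_LOG.exists():
            return []
        with FEEDBACK_LOG.open(encoding="utf-8") as f:
            return [json.loads(line) for line in f]

    record_correction("INV-1042", "Refund denied.", "Refund approved per policy 4.2.")
    print(len(load_training_pairs()), "correction(s) available for improvement")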

What’s next

In the next article, I’ll talk about why local LLM deployments often disappoint, why AI-generated code comes with a heavy price, why layering AI on top of broken processes amplifies failure, how misplaced trust in single models creates risk, and where AI should augment, not replace, human interaction.

How NOT to Fail with AI

Read about “AI Infusion” services provided by New Standards Inc – www.newstandardsAI.com