FEB 09, 2026
Why Half of All Automation Projects Fail (And How to Be in the Other Half)
Summary: 42% of companies abandoned most of their AI initiatives in 2025, up from just 17% in 2024 [1]. The problem isn't the technology. Research shows that successful projects follow a clear pattern: they map processes before touching technology, get cross-functional agreement on the current state, and fix the connection points between systems, workflows, and customer experience first.
When you automate a broken process, you don’t fix the problem. You just move it faster.
There's a stat that should worry anyone about to spend money on AI or automation: 42% of companies abandoned most of their AI initiatives in 2025, up from just 17% in 2024 [1].
Not "underperformed" or "took longer than expected." Abandoned completely.
MIT's 2025 research found that 95% of generative AI pilots fail to deliver measurable impact [2]. RAND Corporation confirms that over 80% of AI projects fail - twice the failure rate of regular IT projects. And this isn't improving. It's getting worse [3].
Surprisingly, the problem is rarely the technology. So what is causing the failure?
Four Reasons Projects Fail (And How to Fix Them)
Working with companies in automotive and B2B services, I see AI automation mentioned as a solution more and more often. There are enthusiasts and there are sceptics.
As a service designer, I turned to the existing research in this field to understand the solution better. Here are some of the findings:
1. No one agrees on how the process actually works
My observation is that different teams describe the same process differently because they experience different parts of it. Sales says the bottleneck is the product. Onboarding says it's because Sales enters incomplete data. IT is working around a system limitation that no one documented.
Everyone's right, and everyone's wrong at the same time.
The research: 82% of IT decision-makers say miscommunication between teams leads to the wrong thing being built [4].
What fixes it: Proper end-to-end process mapping that shows where work actually moves across boundaries, not where the org chart says it should move. This requires someone who can see the full system: how customer-facing teams interact with operational workflows, and how both connect (or don't) to IT systems.
When IT improves a system assuming customer support follows certain steps, but Operations has been doing it differently for months, the process breaks. The fix isn't better technology. It's getting everyone in the room to look at the service from perspectives other than their own.
2. You can't quantify the waste, so you can't justify the spend
"This feels inefficient", or "we will save X amount of time" doesn't unlock the budget. Leadership needs numbers: how much time, how much cost, and when the invested money brings in more revenue. Without that breakdown, you're guessing at ROI.
The research: McKinsey's 2025 survey found that AI high performers are nearly three times as likely to have fundamentally redesigned individual workflows before selecting technology [5]. Meanwhile, 66% of companies struggle to establish ROI metrics for AI initiatives [6].
What fixes it: A cost, risk, and effort breakdown that shows where money and time concentrate across the entire workflow. It also means understanding how those workflows affect customer experience, acquisition and retention, so the impact on revenue is visible.
This requires understanding not just what IT sees in the system logs, but what Operations experiences in daily work, and what customers encounter at the front end. The waste is usually in the gaps between these worlds.
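As a back-of-the-envelope illustration, that breakdown can start as simple arithmetic: time lost per case, case volume, loaded hourly cost, and the resulting payback period. All figures below are invented for illustration; they are not benchmarks from the research cited here.

```python
# Hypothetical cost-of-waste and payback sketch. Every number here is an
# assumption for illustration, not a figure from the cited sources.

def annual_waste_cost(cases_per_month, minutes_lost_per_case, hourly_rate):
    """Annual cost of the time lost to a manual workaround or handoff gap."""
    hours_per_year = cases_per_month * 12 * minutes_lost_per_case / 60
    return hours_per_year * hourly_rate

def payback_months(project_cost, annual_saving):
    """Months until the automation investment pays for itself."""
    return project_cost / (annual_saving / 12)

# Example: 400 cases/month, 15 minutes of rework each, at a loaded €45/hour.
waste = annual_waste_cost(400, 15, 45)
months = payback_months(60_000, waste)
print(f"Annual waste: €{waste:,.0f}, payback: {months:.1f} months")
# → Annual waste: €54,000, payback: 13.3 months
```

Even a rough model like this turns "this feels inefficient" into a number leadership can compare against the project cost.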
3. The process has too many exceptions and variants
You automate the "happy path," then discover that 40% of cases don't fit it. Now you're maintaining the automation and the manual workarounds.
The research: Mapping complex processes remains a challenge for 54% of organisations implementing automation, and integration issues with legacy systems affect 39% of companies [7].
MetaSource framed it this way:
"RPA software implemented without a careful assessment of your processes is like hiring a bigger team of new employees without knowing why the last team failed. You'll be making the same errors, only faster." [8]
What fixes it: Mapping the exceptions before you build, and understanding why they exist. Sometimes the exception exists because Customer Success communicates directly with Operations, but Product and IT don't know about that channel. Sometimes it's because legacy systems force workarounds that three people know about but no one has documented.
The fix requires cross-functional discovery that surfaces these patterns and then a deliberate decision: simplify the process, optimise it, or automate the 80% that fits.
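One practical way to surface exceptions before building is to count process variants in whatever event data already exists: group cases by their activity sequence and see how much of the volume the "happy path" really covers. A minimal sketch, assuming the log is just (case_id, activity) pairs ordered in time; the field names and the tiny example log are illustrative, not from any specific tool:

```python
from collections import Counter, defaultdict

def variant_coverage(event_log):
    """Group cases by their activity sequence and return each variant with
    its share of total cases, most common first. The top variant is the
    de facto 'happy path'; everything else is an exception to map."""
    traces = defaultdict(list)
    for case_id, activity in event_log:
        traces[case_id].append(activity)
    variants = Counter(tuple(trace) for trace in traces.values())
    total = len(traces)
    return [(variant, count / total) for variant, count in variants.most_common()]

# Hypothetical three-case log: two clean cases, one with a manual workaround.
log = [
    ("A", "order"), ("A", "check"), ("A", "ship"),
    ("B", "order"), ("B", "check"), ("B", "ship"),
    ("C", "order"), ("C", "manual fix"), ("C", "check"), ("C", "ship"),
]
for variant, share in variant_coverage(log):
    print(f"{share:.0%}  {' -> '.join(variant)}")
```

If the top variant covers 60% of cases instead of the 95% you assumed, you have found the exceptions before the automation did.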
4. Teams are solving problems in silos
Each function manages its own forecast, its own assumptions, its own version of "the process." Product and IT talk. Customer Success and Operations talk. But there's a communication gap between those pairs, so Product doesn't know what changes Operations is implementing, and IT doesn't know how those changes will impact the systems they're building.
The research: 77% of organisations say the time it takes to design and agree on process changes is a bottleneck. 62% say business users and IT cannot easily collaborate on projects [4].
Research on RPA implementations shows that large organisations inadvertently run "islands of automation" - siloed teams designing and deploying automations independently, causing inefficiency and a damaging cycle of duplicating mistakes [9].
What fixes it: Cross-functional alignment from the start. Not a handoff meeting after IT has already built something, or after Operations has already changed the workflow.
This means creating shared visibility across customer experience, operational workflows, and IT systems - then keeping those three worlds connected throughout design and implementation. Cross-functional teams eliminate conflicting priorities and knowledge gaps, keeping everyone focused on successful delivery rather than departmental interests.
The Pattern That Separates Success from Failure
When we look at the research, a clear pattern emerges. The projects that fail aren't failing because of bad technology. They're failing because:
- Teams can't see the full system (82% build the wrong thing due to miscommunication),
- The process has too many exceptions and variants (54% due to complexity, 39% due to legacy systems),
- Functions work in silos (77% struggle to align on process changes),
- The business case is built on assumptions, not data (66% can't establish ROI metrics).
The projects that succeed do three things differently:
- They map the process before touching technology - not a theoretical flowchart, but a real map of how work moves today, including exceptions and handoffs.
- They get cross-functional agreement on the current state - if Sales, Ops, Finance, and IT don't agree on what's happening now, they won't agree on what should happen next.
- They understand where systems, workflows, and customer experience intersect - and they fix those connection points first.
This isn't work you can delegate to IT alone. It's not work you can hand to Operations alone. And it's definitely not work a vendor can do without understanding your specific environment.
It requires someone who works at the intersection of customer experience, operational workflows, and IT systems, and can translate between all three.
Sources & References
- [1] S&P Global Market Intelligence (2025): AI Adoption and Outcomes Report
- [2] MIT State of AI in Business (2025): The GenAI Divide
- [3] RAND Corporation: Why AI Projects Fail
- [4] Camunda: 2025 State of Process Orchestration & Automation Report
- [5] McKinsey: The State of AI 2025
- [6] Fullview.io industry research, 2024-2025
- [7] Salesforce Survey on Business Process Automation (October 2024)
- [8] MetaSource: Why RPA Projects Fail (2019)
- [9] Blueprint Systems: RPA Center of Excellence
If this kind of thinking resonates with you, subscribe to get more insights like this straight to your inbox.