The failure of AI in most companies is rooted in common issues: pilots that never scale, use cases that never integrate into workflows, and spending that never impacts the P&L. Data already shows the gap. An IBM study of CEOs revealed that only 25% of AI projects achieved their expected ROI, and only 16% scaled across the enterprise. According to a UBS survey of IT leaders, only 17% of companies are at scale, with 59% citing a lack of clear ROI as the biggest barrier. Gartner warns that by 2025, 30% of GenAI projects will be abandoned after proof of concept due to poor data quality, weak risk controls, rising costs, and unclear business value.
It is time for CXOs to stop asking, “What can AI do?” and start asking, “What should leadership fix so AI can be effective?”
Agentic AI Increases The Stakes
These failures will only become more severe as AI grows increasingly autonomous. According to Reuters, UK banks are experimenting with agentic AI, systems that plan and execute actions on their own, and regulators are emphasising accountability under current governance rules.
Gartner estimates that more than 40% of agentic AI projects will be abandoned by 2027, primarily because costs outweigh benefits and applications are misused. Failures shift from mere inconveniences to governance issues when AI moves from suggesting actions to actually performing them. This forces the C-suite to see AI as a change in the operating model rather than just an IT experiment.
Why Business‑Led AI Fails
Most AI programmes claim to be business-led but lack discipline. Teams rush to deploy tools while ignoring data readiness, underestimating integration challenges, and overestimating user adoption. Gartner’s data on abandonment highlights the root causes: poor data quality, weak risk controls, rising costs, and unclear definitions of value.
According to McKinsey’s “State of AI 2025,” widespread adoption is happening, but scaled impact remains the key differentiator. High performers assign senior leaders to critical roles like AI governance and require human validation of model outputs. ROI expectations are misaligned, even among C-suite executives: a Teneo survey cited by The Wall Street Journal found that 68% of CEOs plan to increase AI spending by 2026 despite modest returns, and only 50% of ongoing projects deliver returns exceeding their investments.
The pattern of failure is predictable: first excitement, then disillusionment, and finally the purgatory of pilots.
How CEOs Will Rescue AI: Ownership, Not Sponsorship
The CEO’s role is to eliminate disjointed AI projects. Essentially, rescue starts with creating a concise set of must-win bets based on strategy and cutting down the rest. McKinsey notes that value is captured when senior leaders own and commit to AI initiatives, especially when they redesign workflows rather than merely adding AI to existing processes. In the coming years, CEOs will treat AI as a transformation effort: assigning responsibility, demanding measurable adoption results (not just superficial use), and making tradeoffs in budgets and teams to enable scaling of AI programmes.
How CFOs Will Rescue AI: From Demos to Decision Economics
When AI is not delivering for the business, the stabiliser is the CFO. The ROI shortfall in IBM’s CEO study and the scaling gap in the UBS survey are, at bottom, finance problems. CFO-led rescue will look like strict value gates: define what ‘return’ means in each context, set short-term proof points of value, and separate productivity claims from measurable financial outcomes. CFOs will also enforce the unit economics of automation: leaders must justify the cost to run (computing, vendors, oversight), the cost to change (process redesign, training), and the cost of risk (compliance, remediation). This approach turns AI spending into an investment rather than an experiment.
How COOs Will Rescue AI: Workflow Redesign and Reliability
Most AI efforts disappoint because they automate flawed processes. COOs will do the heaviest lifting: redesigning processes so AI can actually run them, and reorganising jobs, controls, and handoffs to support it. McKinsey cites workflow redesign as the practice that most enhances the bottom-line impact of GenAI. COO-led rescue also involves reliability engineering: checks, exception handling, and escalation paths so front-line teams trust the system. In short, operational maturity, not just another model, will be the key differentiator.
The Components of the “AI Safety Net”
The foundational elements Gartner highlights, such as data quality and risk controls, will fall to CIOs and CDOs, since weak infrastructure hinders scaling. CHROs will tackle adoption debt through employee reskilling, role definition, and aligning performance incentives with the responsible use of AI rather than shortcuts. Risk leaders (CRO/GC) will formalise the boundaries of autonomy; regulators have already signalled that accountability under existing governance frameworks extends to agentic systems.
The Rescue Playbook Will Be Boring, and That Is the Point
The only way to save AI is to adopt leadership practices that seem almost old-fashioned: fewer projects, named owners, cleaner data, redesigned workflows, measured adoption, and strict ROI discipline. The winning companies won’t be the ones that talk the most about being AI-first. They will be the ones whose CXOs make AI a repeatable management system, so the second round of AI not only launches but also endures.