February 20, 2025 | 15 min read

The ROI Reckoning: Why 2026 Demands Proof

The grace period is ending.

[[divider]]

For three years, organizations have invested in AI on faith. The logic was simple: AI is transformative, competitors are moving, and the cost of missing the wave exceeds the cost of experimentation. Boards approved budgets. Executives launched initiatives. The question was not whether to invest, but how fast.

That era is closing.

S&P Global data shows that in 2025, 42 percent of companies abandoned most of their AI initiatives. That is up from 17 percent in 2024. The abandonment rate more than doubled in a single year. The reasons cited are consistent: unclear value, escalating costs, and inability to demonstrate returns.

The 2025 narrative was invest in AI or fall behind. The 2026 narrative will be prove it works or shut it down. Organizations without rigorous frameworks for measuring AI value will face budget cuts, project cancellations, and strategic reversals. The reckoning is coming.

[[divider]]

The Accountability Gap

Here is the core problem: 49 percent of organizations struggle to estimate and demonstrate the value of their AI projects.

This is not a minor measurement challenge. It is an existential issue. Nearly half of organizations investing in AI cannot articulate what they are getting for their money. They have initiatives, pilots, deployments. They do not have proof.

The inability to demonstrate value ranks ahead of other challenges such as talent shortages, technical issues, data quality, and overall trust in AI. You can hire around talent gaps. You can fix technical issues. You can clean data. But if you cannot show ROI, the investment stops.

According to Deloitte's analysis, while 64 percent of organizations report use-case-level benefits from AI, only 39 percent report enterprise EBIT impact. Put differently: most organizations can point to individual AI applications that seem to work. Far fewer can trace those applications to bottom-line results. The connection between AI activity and business outcomes remains murky.

This is the accountability gap. Activity without outcomes. Investment without returns. Motion without progress.

[[divider]]

The Pilot Trap

The statistics on pilot failure are brutal.

The average organization scraps 46 percent of AI proofs of concept before they reach production. RAND research indicates that 80 to 90 percent of AI projects never leave the pilot phase. These are not failed experiments in the scientific sense. They are investments that never yielded value.

Why do pilots fail at such high rates?

Some fail for technical reasons. The model does not perform well enough on real data. The integration is harder than expected. The latency is too high for the use case. These are legitimate failures that provide learning.

But many pilots fail for reasons that have nothing to do with technology. They fail because success criteria were never defined. They fail because the business case was assumed rather than validated. They fail because no one owned the transition from proof-of-concept to production. They fail because the organization declared victory at demo and never did the work required for deployment.

The pilot trap is this: organizations keep launching pilots because pilots are easy. Pilots require limited investment, limited commitment, limited accountability. You can always launch another pilot. Production is hard. Production requires integration, training, change management, ongoing support. Production requires ownership.

The result is pilot purgatory. Organizations with dozens of successful pilots and zero production deployments. Activity that looks like progress but is actually stagnation.

[[divider]]

The Timeline Problem

Traditional ROI frameworks assume investments generate returns on predictable timelines. AI does not work this way.

Deloitte's survey found that organizations need at least 12 months on average to resolve adoption challenges and start realizing major value from generative AI. Twelve months is a long time in corporate planning cycles. Executives who approved investments in early 2024 are now being asked to justify continued spending without clear returns.

The timeline mismatch creates organizational dysfunction. Sponsors who championed AI initiatives face pressure to show results before results are possible. Teams rush to claim premature victories. Metrics are selected for what they can show rather than what matters. The organization optimizes for internal politics rather than actual value creation.

McKinsey research points to a typical pattern: 31 percent of leaders expect to measure ROI within six months, yet most recognize that the realistic early-stage returns are productivity and operational efficiency, not immediate profitability. The expectation gap between what stakeholders want and what AI can deliver creates friction that undermines adoption.

When leadership expects six-month payback and delivery teams know twelve months is realistic, the result is disappointment, blame, and project cancellation. The AI was not the problem. The timeline expectations were.

[[divider]]

The Measurement Problem

Even organizations that want to measure AI ROI often do not know how.

The Larridin State of Enterprise AI 2025 report identifies three critical components of effective AI ROI measurement: usage analytics tracking who uses AI tools, how frequently, and for which tasks; outcome metrics connecting usage to business results; and comparative analysis measuring the performance delta between AI-enabled and traditional workflows.

Most organizations have none of these capabilities.

Without usage data, you cannot calculate ROI because you do not know the inputs. Without outcome metrics, you cannot connect activity to results. Without comparative analysis, you cannot isolate AI's contribution from other factors. The measurement infrastructure does not exist.
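
For concreteness, here is a minimal sketch of what that infrastructure could look like, wiring together the three components: usage logged per work item, an outcome metric attached to each item, and a comparison against the same work done without AI. The field names and numbers are hypothetical.

```python
# Illustrative sketch only: hypothetical field names and figures.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkItem:
    used_ai: bool            # usage analytics: was the AI tool used on this task?
    cycle_time_hours: float  # outcome metric: how long the task took

items = [
    WorkItem(True, 3.2), WorkItem(True, 2.8), WorkItem(True, 3.5),
    WorkItem(False, 5.1), WorkItem(False, 4.7), WorkItem(False, 5.6),
]

ai_items = [i for i in items if i.used_ai]
manual_items = [i for i in items if not i.used_ai]

# Usage analytics: how much of the work actually runs through the AI tool.
adoption = len(ai_items) / len(items)

# Comparative analysis: the performance delta between AI-enabled and traditional workflows.
saved = mean(i.cycle_time_hours for i in manual_items) - mean(i.cycle_time_hours for i in ai_items)

print(f"Adoption: {adoption:.0%}, cycle time saved per item: {saved:.1f} hours")
```

Even a toy pipeline like this forces the questions most organizations skip: what counts as usage, which outcome is being claimed, and what the non-AI comparison group is.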

Organizations that implement all three components report 5.2x higher confidence in their AI investments and 3.8x higher continued investment rates. The organizations that can measure are the organizations that continue investing. The organizations that cannot measure eventually stop.

The measurement gap is also a governance gap. Sixty-seven percent of enterprises admit they do not have complete visibility into which AI tools their employees are using. Shadow AI has proliferated across organizations, with employees adopting tools that IT never sanctioned and finance never budgeted. You cannot measure ROI on investments you do not know exist.

[[divider]]

The Hidden Costs

AI ROI calculations often fail because they ignore real costs.

Data preparation and platform upgrades typically consume 60 to 80 percent of any AI project timeline and budget, yet most business cases completely ignore this reality. Organizations budget for the AI model but forget about the data engineering required to make the model useful. The result is cost overruns and delayed timelines that tank ROI calculations.

Change management and training often account for 20 to 30 percent of total costs. AI systems that users do not adopt generate zero value regardless of their technical performance. The training investment is not optional. Yet business cases routinely omit it.

Ongoing maintenance and optimization require continuous investment. Models drift. Data distributions change. New edge cases emerge. The system that worked at launch may degrade over months without attention. The ROI calculation that assumed set-and-forget economics will prove wrong.

Compliance and security requirements add costs that vary by industry and jurisdiction. Healthcare and financial services organizations face regulatory constraints that consumer applications do not. The ROI math for a compliant deployment differs substantially from the ROI math for an unconstrained one.

When organizations calculate ROI using only the visible costs and ignore the hidden ones, they systematically overestimate returns. The projects look attractive on paper. They disappoint in practice.
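
A rough, purely illustrative calculation shows how much the hidden line items move the number. Every figure below is hypothetical; the point is the distance between the naive ROI and the full-cost ROI.

```python
# Illustrative arithmetic only; all figures are hypothetical.
annual_benefit = 1_200_000          # estimated value delivered per year

visible_costs = {
    "model_and_licenses": 300_000,
}
hidden_costs = {
    "data_preparation": 450_000,    # often 60-80% of project effort
    "change_management": 150_000,   # often 20-30% of total cost
    "ongoing_maintenance": 120_000, # drift monitoring, retraining, support
    "compliance_security": 80_000,  # varies by industry and jurisdiction
}

def roi(benefit, costs):
    total = sum(costs.values())
    return (benefit - total) / total

print(f"Naive ROI (visible costs only): {roi(annual_benefit, visible_costs):.0%}")
print(f"Full-cost ROI: {roi(annual_benefit, {**visible_costs, **hidden_costs}):.0%}")
```

With these placeholder numbers, the project looks like a 300 percent return when only the visible costs are counted, and roughly a 9 percent return when everything is counted. Same project, different spreadsheet.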

[[divider]]

The Attribution Problem

Even when AI works, attributing value to it is hard.

Consider a sales team that adopts an AI assistant for prospect research and email drafting. Revenue increases by 15 percent in the following quarter. How much of that increase came from the AI? How much came from a favorable market? How much came from a new sales manager hired the same quarter? How much came from a competitor's stumble?

Attribution in complex systems is notoriously difficult. AI initiatives do not happen in isolation. They happen alongside other changes, in environments affected by external factors, with outcomes influenced by human behavior. Isolating AI's contribution requires experimental designs that most organizations do not implement.

Some organizations respond by claiming all improvement as AI value. This works until skeptics ask hard questions. Others respond by claiming nothing, erring toward conservatism. This works until sponsors ask why they should keep funding something with no demonstrated value. Neither approach is sustainable.

The sophisticated response is impact chaining: mapping each AI intervention to its downstream business effects and creating pre-AI expectations against which post-AI results can be compared. This requires baseline measurement before deployment, clear hypotheses about how AI will create value, and rigorous tracking of outcomes over time. Most organizations do not do this work.
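
A hedged sketch of that comparison logic, using hypothetical figures: measure the AI-enabled group and a comparable control group before and after deployment, and attribute to AI only the difference between their changes.

```python
# Hypothetical quarterly revenue per rep, before and after the AI assistant rollout.
ai_team      = {"pre": 100_000, "post": 118_000}    # team that got the assistant
control_team = {"pre": 100_000, "post": 108_000}    # comparable team that did not

ai_lift      = ai_team["post"] / ai_team["pre"] - 1            # 18% observed lift
control_lift = control_team["post"] / control_team["pre"] - 1  # 8% from market, new manager, etc.

# Difference-in-differences: the change not explained by factors both teams shared.
attributable_to_ai = ai_lift - control_lift

print(f"Observed lift: {ai_lift:.0%}, attributable to AI: {attributable_to_ai:.0%}")
```

The design matters more than the arithmetic: without a baseline captured before deployment and a comparison group, there is no principled way to separate AI's contribution from everything else that changed in the same quarter.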

[[divider]]

The Soft ROI Challenge

Some of AI's most important benefits resist quantification.

Better decision-making is valuable but hard to measure. How do you put a number on decisions that were slightly better informed, slightly faster, slightly more consistent? The value is real. It does not fit in a spreadsheet.

Employee satisfaction and retention linked to AI initiatives matter for long-term organizational health. If AI tools reduce tedium and increase engagement, that creates value. But the value shows up in reduced turnover costs and productivity gains that accumulate over years, not quarters.

Improved customer satisfaction creates lifetime value that extends far beyond the measurement period. AI-driven personalization may reduce churn by 5 percent, but that 5 percent compounds over years of customer relationships. Quarterly ROI calculations understate the impact.
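
A back-of-the-envelope illustration, with hypothetical churn and margin figures, shows why. Under a constant annual churn rate, expected customer lifetime is roughly one divided by churn, so even a small retention improvement shifts lifetime value in a way a single quarter never reveals.

```python
# Illustrative only: hypothetical margin and churn figures.
annual_margin_per_customer = 400
baseline_churn = 0.20   # 20% of customers lost per year
improved_churn = 0.19   # a 5% relative reduction from AI-driven personalization

def lifetime_value(churn, margin):
    # Expected lifetime in years is ~1/churn for a constant annual churn rate.
    return margin / churn

gain = (lifetime_value(improved_churn, annual_margin_per_customer)
        - lifetime_value(baseline_churn, annual_margin_per_customer))

print(f"Lifetime value gain per customer: ${gain:,.0f}")  # roughly $105, realized over years
```

Almost none of that gain shows up in the quarter the churn number moved.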

Strategic optionality is perhaps the hardest to value. An organization that builds AI capabilities today has options tomorrow that a laggard does not. The value of those options is real but speculative. How do you measure the value of something you might do in the future?

Organizations that evaluate AI only on hard ROI systematically undervalue it. Organizations that rely only on soft ROI cannot justify continued investment. The solution is frameworks that capture both dimensions, but those frameworks are rare.

[[divider]]

The Scrutiny Intensifies

CFOs and boards are asking harder questions.

The era of FOMO-driven investment is ending. Some business leaders jumped on the AI bandwagon on impulse, making short-term moves to stay ahead of competitors. Others envisioned AI as the solution to every problem. Both approaches are now facing skepticism.

Finance teams want to see the same rigor applied to AI investments as to any other capital allocation. What is the expected return? What is the payback period? What are the risks? What happens if the project fails? AI initiatives that cannot answer these questions will struggle to secure continued funding.

Boards are reading the headlines about failed initiatives and asking whether their organization is different. They want evidence, not assurances. They want data, not narratives. The executives who championed AI now need to defend their choices with numbers.

The scrutiny is appropriate. Tens of billions of dollars have flowed into enterprise AI. That level of investment deserves accountability. The organizations that can demonstrate value will continue investing. The organizations that cannot will face pressure to redirect resources elsewhere.

[[divider]]

The Path Forward

What separates organizations that can demonstrate AI ROI from those that cannot?

The answer is process, not technology.

Start with clear objectives. What specific business outcome is this AI initiative supposed to produce? Revenue increase? Cost reduction? Cycle time improvement? Customer satisfaction gain? If you cannot articulate the objective, you cannot measure progress toward it.

Establish baselines before deployment. What is current performance on the metrics that matter? You cannot claim AI improved something if you do not know where you started. Baseline measurement is not optional. It is the foundation of any credible ROI analysis.

Design for measurement. Build tracking into the system from day one. Instrument usage. Capture outcomes. Create the data infrastructure required to analyze results. If measurement is an afterthought, it will not happen.

Set realistic timelines. AI initiatives typically require 12 or more months to demonstrate substantial value. Set stakeholder expectations accordingly. Communicate what early indicators of success look like, and distinguish them from full value realization.

Implement comparative analysis. Where possible, run controlled experiments. Compare AI-enabled processes to traditional ones. Compare users with AI access to users without. Isolate AI's contribution from other factors.

Document continuously. Create a paper trail of decisions, implementations, and results. When leadership asks what the AI investment accomplished, have the evidence ready. The organizations that can answer the question will keep their budgets. The organizations that cannot will lose them.
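
Pulled together, the process above can be as simple as one written record per initiative: objective, metric, baseline, target, and a review date that matches a realistic timeline. A minimal sketch, with hypothetical names and thresholds:

```python
# A minimal success-criteria record; names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class SuccessCriteria:
    objective: str     # the specific business outcome
    metric: str        # how it is measured (higher is better in this sketch)
    baseline: float    # pre-deployment measurement
    target: float      # what counts as success
    review_date: date  # when full value realization is assessed

    def met(self, observed: float) -> bool:
        return observed >= self.target

criteria = SuccessCriteria(
    objective="Increase first-contact resolution in support",
    metric="percent of tickets resolved on first contact",
    baseline=62.0,
    target=70.0,
    review_date=date(2026, 6, 30),
)

print(criteria.met(observed=71.5))  # True once the target is reached
```

The value is not in the code. It is in forcing the objective, the baseline, and the timeline to be written down before the first dollar is spent.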

[[divider]]

The Compounding Stakes

The ROI reckoning is not just about individual projects. It is about organizational trajectory.

Organizations that build robust measurement frameworks gain confidence in their AI investments. Confidence enables continued investment. Continued investment enables capability accumulation. Capability accumulation enables more ambitious applications. The flywheel spins.

Deloitte's analysis shows that strategy innovation leaders who excel at measuring AI value outperform laggards across multiple dimensions: higher tech investment ROI, higher returns on equity, and greater realization of enterprise value. They attribute over 40 percent of their enterprise value to digital initiatives and recognize more than 40 percent in latent potential for future growth.

Organizations without measurement frameworks cannot distinguish successful investments from failed ones. They cannot learn from experience because they do not know what happened. They make decisions based on anecdote rather than evidence. Their AI strategy is a random walk, not directed progress.

The gap compounds over time. Organizations that measure and learn improve each quarter. Organizations that do not remain stuck. By 2027, the gap between leaders and laggards will be unclosable because it will represent years of accumulated learning versus years of accumulated confusion.

[[divider]]

What This Means

The grace period is ending. The accountability demands are rising. The organizations that survive the ROI reckoning will be those that took measurement seriously from the start.

This is not about creating dashboards that make executives feel good. It is about building the institutional capability to understand what AI investments actually accomplish. It is about honest assessment of what works, what does not, and why.

The ROI reckoning will be painful for organizations that invested without accountability. They will face difficult conversations with boards and investors. They will cancel initiatives that never should have been approved. They will write off investments that never should have been made.

For organizations that built measurement infrastructure, the ROI reckoning will be an opportunity. They will have the evidence to justify continued investment. They will know which initiatives deserve more resources and which should be wound down. They will navigate the scrutiny with confidence because they have the answers.

The choice is not whether to face the reckoning. The reckoning is coming regardless. The choice is whether to face it with evidence or excuses.

[[divider]]

RLTX builds systems with measurable outcomes.

We define success criteria before we start.

We establish baselines, instrument for measurement, and document results. When your board asks what the AI investment accomplished, we provide the evidence.

That accountability is not overhead. It is the foundation of every mission we run.
