Two studies dropped four days ago. Same day. November 25th, 2025.
Anthropic analyzed 100,000 real conversations with Claude and found that AI cuts task time by 80%. Tasks that would take 90 minutes without AI? Done in 18 minutes with it. McKinsey mapped skills across 800 occupations and reached a starker conclusion: 57% of all US work hours can be automated with technology that exists right now. Not in five years. Not "when the models get better." Today.
The numbers are extraordinary. If this scales across the economy over the next decade, labor productivity could grow 1.8% annually—roughly double the recent US growth rate. McKinsey sees $2.9 trillion in annual value by 2030.
So here's the puzzle: If the technology works—and the data says it does—why aren't 60% of companies seeing measurable results from their AI investments?
Ninety percent have deployed AI. Only 40% report measurable gains. And at the highest end, just 6% attribute 5% or more of their earnings to AI use—the true high performers who've fundamentally reimagined their business.
Something's killing AI transformation before it starts. And it's not the technology.
When you look at companies that succeed versus those that fail at AI transformation, you see five patterns that keep repeating. Not academic theories—actual patterns visible in the data.
The failing companies pattern-match from previous technology rollouts: "Cloud adoption took 18 months. This should be similar. Run pilots, measure ROI, scale gradually."
That's expert intuition—applying familiar patterns to new situations. It works for tactical decisions but fails for strategic ones.
When McKinsey says 57% of work hours are automatable, they're not suggesting you do existing work 57% faster. They're pointing at something more fundamental: you need to decide what work humans should do at all.
The successful companies understood this. They didn't ask "how can AI help us draft reports faster?" They asked, "What if we completely redesign how reports get created?"
The pharmaceutical company that deployed AI for clinical study reports didn't optimize the existing workflow. They reimagined it entirely: AI generates the initial draft in minutes, medical writers validate clinical accuracy, one senior review for regulatory compliance. Reports that took 3-4 weeks now take days. Time dropped 60%, errors dropped 50%.
That's not incremental improvement. That's transformation.
The pattern: Companies that succeed use strategic insight to reimagine entire workflows. Companies that fail use expert intuition to optimize existing ones.
The failing companies use direct force: mandate AI usage, track adoption metrics, tie it to performance reviews, create accountability.
It sounds reasonable. It fails consistently.
Successful companies redesigned systems so that AI became the path of least resistance. The utility company didn't train customer service reps on AI tools. They redesigned the entire call routing system. When a customer calls, AI handles initial contact—authentication, intent identification, and basic issue resolution. Only when AI can't resolve the issue does a human rep get the call, with full conversation history and verified account details.
What happened? Reps loved it. Customer satisfaction went up. Not because anyone was forced to use AI, but because the workflow made AI the natural path. Reps stopped answering "What's my account balance?" for the thousandth time. They started solving actual problems—complex billing issues, service interruptions, and account disputes.
The job became more interesting. Skills became more valuable. Compensation increased.
The pattern: Companies that succeed redesign systems for natural flow. Companies that fail mandate adoption and track compliance.
The failing companies hit the same wall: "AI finishes analysis in 10 minutes, but approval takes 5 days."
Their response? "Train approvers to move faster. Create fast-track approval processes. Set 24-hour SLAs."
They're treating a structural problem as a people problem.
When a human medical writer spends three weeks compiling a clinical report, a multi-day approval process makes sense. When AI generates a draft in 10 minutes, that same approval process doesn't just become inefficient—it becomes absurd.
3 weeks of drafting + 5 days of approval = a tolerable delay
10 minutes of drafting + 5 days of approval = a workflow that's ~99% waiting
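The arithmetic behind that claim is simple enough to check by hand. A minimal sketch, assuming 24-hour calendar days (the durations are the illustrative figures from the text; `waiting_fraction` is just a name used here):

```python
# Sketch of the waiting-time arithmetic: what share of the total cycle
# time is spent waiting on approval rather than doing the work?

def waiting_fraction(work_minutes, approval_minutes):
    """Fraction of total cycle time spent waiting on approval."""
    return approval_minutes / (work_minutes + approval_minutes)

DAY = 24 * 60  # minutes in a calendar day (an assumption for this sketch)

# Human writer: ~3 weeks of drafting, then 5 days of approval.
human = waiting_fraction(3 * 7 * DAY, 5 * DAY)

# AI draft: ~10 minutes of drafting, then the same 5 days of approval.
ai = waiting_fraction(10, 5 * DAY)

print(f"human workflow: {human:.0%} waiting")  # roughly 19% waiting
print(f"AI workflow:    {ai:.1%} waiting")     # roughly 99.9% waiting
```

The same 5-day approval goes from a fifth of the timeline to virtually all of it—the process didn't get worse, the denominator collapsed.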
The successful companies didn't speed up approvals. They eliminated approval layers.
The pharmaceutical company went from five approval steps to three. The regional bank gave engineers direct deployment authority with AI-powered testing replacing manual review. Code that used to take 1-2 weeks to deploy now ships the same day.
The pattern: Companies that succeed remove obstacles entirely. Companies that fail optimize around them.
Here's where most transformations die.
Those approval hierarchies aren't just a process. They're authority structures.
When you eliminate two approval layers, you're not "streamlining workflow." You're redistributing decision-making power. The medical writers who used to wait for approval? They now make final calls on clinical accuracy. The engineers who used to need a manager's sign-off? They now ship code directly.
That's not a technology change. That's an authority change.
And here's the invisible part—middle managers whose value came from "being the approval layer" are now... what exactly?
The failing companies pretend this isn't happening. They talk about "AI fluency training" and "empowering employees" without acknowledging the real question: Who makes decisions in the new system?
So what happens? Those middle managers quietly resist. Not through open opposition—that would be career-limiting—but through passive friction.
"We need more testing before we can eliminate that approval step."
"I'm not sure we can trust AI-generated outputs for something this critical."
"Let's run a longer pilot to gather more data."
Every objection sounds reasonable. Every delay seems prudent. And the transformation dies in committee.
The successful companies face it directly. The utility company redefined what customer service reps do—not "answer calls" but "solve complex problems that AI can't handle." That required higher skills, more autonomy, and more judgment. So they increased compensation 15%, provided problem-solving training, and created clear criteria: If you can solve problems AI can't, you're more valuable. If you can't, you need to upskill or transition out.
The pharmaceutical company's medical writers transitioned from "drafters waiting for approval" to "clinical validators making final calls." That's not a lateral move. That's a promotion—higher skill requirement, higher value, higher compensation.
The pattern: Companies that succeed acknowledge authority redistribution and redesign roles. Companies that fail pretend it's just process improvement.
Traditional work puts distance between the moment value is needed and the moment it's delivered.
A business unit needs a legal memo on contract structure. They submit a request, wait for lawyer availability (3 days), the lawyer drafts (5 hours), review cycle with revisions (2 days). Total: 1 week. By the time the legal analysis arrives, the business context may have shifted. Decisions were made based on best guesses instead of informed analysis.
AI collapses that distance. The business unit prompts AI for a legal memo. AI drafts in 10 minutes using company templates. The business unit validates and edits. Total: 30 minutes. Value created at the moment of need.
This isn't just "faster." It's fundamentally different. When legal analysis happens at the moment you need it, different decisions become possible. You can explore three contract structures instead of picking one and hoping. You can iterate in real-time during negotiation instead of preparing for every contingency in advance.
The work changes from "make the best decision with what we have" to "iterate until we find the right answer."
The pharmaceutical company reorganized clinical trial workflows so regulatory documentation is generated at the exact moment data becomes available—not weeks later when someone has time to compile it. When trial data shows an adverse event, the regulatory-formatted incident report generates instantly. The medical team can make protocol adjustments during the trial instead of discovering issues weeks later.
The pattern: Companies that succeed create proximity—value at the moment of need. Companies that fail maintain distance—value delivered later.
Here's what I see when I look at the 60% who fail:
They use pattern-matching when they need strategic insight. They apply direct force when they need to redesign systems. They try to optimize obstacles when they need to remove them. They pretend authority isn't shifting when they need to acknowledge and redesign. They maintain distance when they need to create proximity.
And here's the thing—these five patterns reinforce each other.
When you pattern-match ("this is like cloud adoption"), you default to direct action (mandate usage, track metrics). When you use direct action, you hit obstacles (slow approvals). When you try to optimize obstacles instead of removing them, you avoid confronting power dynamics (can't eliminate approval layers—too political). When you avoid power dynamics, you can't redesign workflows for proximity (can't give employees direct authority). And when you can't create proximity, the AI value proposition evaporates.
Because "10% faster" isn't transformative. "From 1 week to 30 minutes" is.
The companies avoid the hard thing—redistributing authority—so they can't remove obstacles. Because they can't remove obstacles, they can't create proximity. Because they can't create proximity, they resort to direct force. Because direct force creates resistance, they fall back on pattern-matching.
And the loop reinforces itself until the transformation dies.
So what's actually killing AI transformation at 60% of companies?
It's not the technology—both studies confirm AI works. It's not a lack of investment—90% have deployed AI. It's not even resistance to change—people will change when the new way is genuinely better.
The bottleneck is coordination. But not coordination in the sense of "getting everyone aligned" or "change management." Coordination in a deeper sense: how work gets organized, who makes decisions, where authority lives.
Those approval hierarchies that slow everything down? They're coordination structures. Middle managers coordinate work by being the approval layer. When AI enables proximity—when work can happen at the moment of demand—those coordination structures become obsolete.
But coordination structures are also power structures. The person who coordinates has authority. When you eliminate the coordination layer, you redistribute the authority.
That's what the failing companies can't face.
They want the productivity gains—57% automation potential, 80% time savings, $2.9 trillion in value—without redistributing authority.
It doesn't work.
You can't collapse the time from request to delivery from 1 week to 30 minutes while keeping the same approval hierarchy. The math doesn't work. The old coordination structure becomes pure overhead.
The coordination structure has to change. And coordination structures are power structures. So, power has to be redistributed.
The 40% who succeed understand this. They face it directly. They redesign around it.
The 60% who fail avoid it. They try to "add AI" without changing who makes decisions.
And that's why their transformations die in pilot purgatory.
Two studies. Same day. Both confirm: AI works. 80% time savings. 57% automation potential. $2.9 trillion in value.
Ninety percent of companies have invested. Only 40% are getting results. Only 6% are getting real value.
The technology isn't the bottleneck. Coordination is the bottleneck. And coordination is a power problem, not a technology problem.
The companies that understand this—that face the authority redistribution directly, redesign roles around it, and create proximity by removing coordination layers—are transforming.
The companies that avoid it stay stuck in pilot purgatory.
Now you know why.
The question is: Which conversation are you willing to have?
Anthropic
Tamkin, Alex, and Tyna Eloundou McCrory. "Estimating AI Productivity Gains from Claude Conversations." Anthropic, November 25, 2025. https://www.anthropic.com/research/estimating-productivity-gains.
Analyzed 100,000 Claude.ai conversations
Found 80% average time savings (tasks taking 90 minutes → 18 minutes with AI)
Projected 1.8% annual labor productivity growth potential
McKinsey Global Institute
Yee, Lareina, et al. "Agents, Robots, and Us: Skill Partnerships in the Age of AI." McKinsey Global Institute, November 25, 2025. https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai.
Mapped skills across 800 occupations
Concluded 57% of US work hours automatable with current technology
Estimated $2.9 trillion in annual value by 2030
Case studies: pharmaceutical (clinical reports), banking (code migration), utilities (customer service), sales (account management)
Lukes, Steven. Power: A Radical View. 3rd ed., Palgrave Macmillan, 2021.
Three-dimensional model of power
Applied to coordination structures as power structures
Kotter, John P. Leading Change. Harvard Business Review Press, 1996.
Eight-step transformation model
Applied to AI transformation obstacles
Duggan, William. Strategic Intuition: The Creative Spark in Human Achievement. Columbia University Press, 2007.
Expert intuition vs. strategic intuition
Applied to pattern-matching vs. workflow reimagination
Krippendorff, Kaihan. The Art of the Advantage: 36 Strategies to Seize the Competitive Edge. Evolve Publishing, 2011.
Wu wei (indirect action), proximity theory
Applied to system redesign and value creation timing
Research Period: November 25-29, 2025
Methodology: Synthesis of primary studies, strategic frameworks, case study analysis, and pattern recognition across organizational transformation examples.



