
Two important studies dropped four days ago. On the same day.
Anthropic analyzed 100,000 real conversations with Claude and found that AI cuts task time by 80%. Tasks that would take 90 minutes without AI? Done in 18 minutes with it.
McKinsey mapped skills across 800 occupations and reached a starker conclusion: 57% of all US work hours can be automated with technology that exists right now. Not in five years. Not "when the models get better." Today.
The numbers are extraordinary. If this scales across the economy over the next decade, labor productivity could grow 1.8% annually—roughly double the recent US growth rate. McKinsey sees $2.9 trillion in annual value by 2030. So here's the puzzle: If the technology works—and the data says it does—why aren't 60% of companies seeing measurable results from their AI investments?
Ninety percent have deployed AI. Only 40% report measurable gains. And at the highest end, just 6% attribute 5% or more of their earnings to AI use—the true high performers who've fundamentally reimagined their business. Something's killing AI transformation before it starts. And it's not the technology.
Here's what we know. A pharmaceutical company deployed AI to draft clinical study reports—the dense regulatory documents required for drug approval. These reports typically take medical writers 3-4 weeks to compile from study data. Results: Touch time for first human-reviewed drafts dropped by nearly 60%. Errors declined by roughly 50%. Go-to-market efforts accelerated by weeks.
A regional bank used AI agents for code migration, modernizing legacy systems that would have required months of manual work. Engineers who used to spend their time rewriting code line-by-line shifted to planning, orchestration, and testing. Results: 70% code accuracy with an estimated 50% reduction in required human hours.
A utility company handles seven million customer service calls per year. Their old interactive voice response system resolved only 10% of inquiries, so they deployed conversational AI agents across the entire customer base. Results: AI now handles 40% of all calls, resolving more than 80% without human involvement. Average cost per call dropped 50%. Customer satisfaction up 6 points.
The sales team at a global tech company used AI agents to prioritize accounts, manage outreach, handle customer responses, and schedule calls. When cases required human judgment, the system handed off to specialists with full context. Results: 30-50% time savings across sales roles with a projected annual revenue increase of 7-12% from new sales, cross-selling, and retention. Business development specialists redirected saved time to strategic engagement—proposals, negotiations, and relationship building.
These aren't pilot programs. They're production deployments at scale, documented by McKinsey's research team.
Now here's the crime scene. Company X runs 15 AI pilots across different departments. Town halls feature the CEO declaring, "AI is the future." Two years later, they're still in pilots with no measurable productivity gain. The innovation team produces a 73-slide deck on "AI readiness."
Company Y buys enterprise AI licenses for the entire organization. Employees attend mandatory "AI fluency" training. Usage metrics spike for two weeks, then crater. Three months later, 80% of licenses sit unused. When asked why, employees say, "It doesn't fit our workflow."
Company Z creates an AI task force, hires a Chief AI Officer, and produces a 47-page AI strategy document with a phased 18-month rollout plan. Six months in, the task force is still "assessing use cases" and "building the business case."
Same technology. Opposite outcomes. What's the difference?
The failing companies sound like this: "We've rolled out new technologies before. Cloud adoption took 18 months. This should be similar. Run pilots, measure ROI, scale gradually. We know how to do technology adoption."
That's pattern-matching—taking what worked in a familiar situation and applying it to a new one. Psychologists call this expert intuition: rapid decision-making based on recognizing familiar patterns. It works brilliantly when you're playing tennis and need to predict where your opponent's serve will land. Your brain has seen thousands of serves, recognized the pattern, and can react in milliseconds.
But AI transformation isn't a familiar situation. It's not "cloud but smarter." When McKinsey says 57% of work hours are automatable, they're not saying "do existing work 57% faster." They're saying something more fundamental: you need to decide which work humans should do at all. That's not a tactical question about how to do work better—it's a strategic question about which work to do in the first place.
Here's the distinction: Expert intuition helps you fight a fire more effectively, but it doesn't help you decide which fire to fight. It doesn't help you reorganize your firehouse or decide where to put firehouses in the first place. Expert intuition makes you better at tactics. Strategy requires something else.
The successful companies understood this. They didn't pattern-match from previous technology rollouts. Instead, they studied examples from completely different contexts—how other industries handled regulatory workflows, how manufacturing redesigned around automation, how healthcare reorganized clinical processes. Then they cleared their minds of "how we've always done it" and asked: "What if we designed this workflow from scratch around AI capabilities?"
The pharmaceutical company didn't ask "how can AI help us draft reports faster?"—the expert intuition question that optimizes the existing task. They asked, "What if we completely redesign how clinical reports get created?"—the strategic insight question that reimagines the entire workflow. The difference seems subtle. It's not.
The optimization approach gives you AI helping medical writers draft faster, maybe a 15% productivity improvement, reports still taking weeks to produce—marginal value at best. The reimagining approach looks completely different: AI generates the initial draft in minutes, medical writers validate clinical accuracy, one senior review for regulatory compliance, reports done in days instead of weeks. It's a fundamentally different workflow that enables decisions during trials rather than months later.
The first approach uses expert intuition: "We know how to write clinical reports. Let's do it faster." The second uses strategic insight: "We've studied how aerospace handles safety documentation, how automotive manages regulatory compliance, how other pharma companies are experimenting with AI. What if we combine AI drafting, compliance templates, and expert validation in a completely new sequence?"
One gives you incremental improvement. The other gives you transformation. That's the pattern: companies that succeed use strategic insight—slow, unfamiliar, reimagining the whole system. Companies that fail use expert intuition—fast, familiar, optimizing what already exists.
Now watch how the failing companies try to implement. "We'll mandate AI usage. Track adoption metrics in our dashboard. Tie it to performance reviews. Create accountability. If people aren't using AI, we'll know who they are."
That's direct action—attacking the problem head-on, forcing compliance, creating consequences for non-adoption. It sounds reasonable. It fails consistently.
The successful companies did the opposite. They made indirect moves. The utility company didn't "train customer service reps on AI tools." They didn't mandate that reps "try using AI for three calls per day." They didn't track "AI adoption rates." Instead, they redesigned the entire call routing system. When a customer calls, AI handles initial contact—authentication, intent identification, and basic issue resolution. Only when the AI can't resolve the issue does a human rep get the call, and when they do, the rep receives the full conversation history and verified account details.
What happened? Reps loved it. Customer satisfaction went up. Not because anyone was forced to use AI, but because the workflow made AI the natural path. Reps stopped answering "What's my account balance?" for the thousandth time. They started solving actual problems—complex billing issues, service interruptions, and account disputes. The job became more interesting. Skills became more valuable. Compensation increased.
Nobody needed "AI adoption training." The system made AI-assisted work the path of least resistance. The regional bank did something similar with code migration. They didn't mandate that engineers "use AI tools for 20% of coding tasks." They deployed AI agents for the entire migration project and let engineers choose how to work. When engineers saw that AI could handle the tedious rewriting while they focused on architecture and testing, they didn't need encouragement. They chose to shift their role from code monkey to orchestrator.
Ancient Chinese strategists had a term for this: wu wei—going with the grain rather than against it, yielding to natural flow instead of forcing against resistance. You don't force AI into workflows. You redesign workflows around what AI does naturally.
Think about water. Water doesn't force its way through rock. It finds the path of least resistance and flows. Eventually, it can carve canyons. The failing companies are trying to force water uphill—"Use this AI tool. We mandate it. Track adoption. Create accountability." The successful companies are building channels where water flows naturally, redesigning the system so that AI-assisted work is easier, faster, and more satisfying than the old way.
When you do that, adoption isn't a change management problem. It's inevitable. That's the pattern: companies that succeed use indirect action, redesigning for natural flow. Companies that fail use direct force, mandating and measuring compliance.
Here's where it gets interesting. The failing companies hit the same wall every time: "AI finishes analysis in 10 minutes, but approval takes 5 days."
Their response? "We need to train approvers to move faster. Let's create a fast-track approval process for AI-generated work. We'll set SLAs—24-hour turnaround for AI outputs." That's treating a structural problem as a people problem.
The blog post that analyzed these studies put it perfectly: "AI finishes in minutes, but sign-offs take days, productivity collapses." But here's what they miss: Those approval processes aren't there by accident. They weren't designed to be slow. They were designed for a different reality.
When a human medical writer spends three weeks compiling a clinical report, a multi-day approval process makes sense. The work is substantial. Multiple reviewers add value. The delay is negligible compared to the creation time. But when AI generates a draft in 10 minutes, that same approval process doesn't just become inefficient—it becomes absurd. Against three weeks of drafting, five days of approval is a rounding error. Against ten minutes of drafting, five days of approval means the workflow is now more than 99% waiting.
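The arithmetic is easy to verify. Here's a minimal sketch (the eight-hour working day and the helper name are my own assumptions for illustration):

```python
def waiting_fraction(work_minutes: float, wait_minutes: float) -> float:
    """Fraction of total elapsed time spent waiting, not working."""
    return wait_minutes / (work_minutes + wait_minutes)

DAY = 8 * 60  # minutes in one working day (assumed for illustration)

# Old reality: a human drafts for three weeks (15 working days),
# then approval takes five working days.
old = waiting_fraction(work_minutes=15 * DAY, wait_minutes=5 * DAY)

# New reality: AI drafts in ten minutes; approval still takes five days.
new = waiting_fraction(work_minutes=10, wait_minutes=5 * DAY)

print(f"old workflow: {old:.1%} waiting")  # 25.0% waiting
print(f"new workflow: {new:.1%} waiting")  # 99.6% waiting
```

The approval process didn't change at all; the ratio around it did. That's why the same five days flip from negligible to absurd.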
The successful companies saw this differently: "The approval hierarchy was designed for human speed. It's now the bottleneck. Remove it." The pharmaceutical company didn't speed up medical writer approvals. They eliminated two entire approval layers.
The old workflow looked like this: Medical writer drafts for three weeks, department review takes two days, compliance review takes two days, senior medical review takes three days, final sign-off takes one day—total time around four weeks. The new workflow: AI generates a draft in ten minutes, the medical writer validates clinical accuracy in four hours, and senior review for regulatory compliance takes one day—total time of two days. They didn't optimize the obstacle. They eliminated it.
The bank did the same thing. They didn't train managers to approve code changes faster. They gave engineers direct deployment authority with AI-powered testing replacing manual review. The old workflow had engineers writing code, then waiting for peer review for one to two days, manager review for another one to two days, deploy to staging, QA testing for two to three days, manager approval, and finally production deploy—total time of one to two weeks. The new workflow: Engineer works with AI agents, automated testing suite runs, code ships to production—same day.
This is transformation mechanics. You don't train people to work around the obstacle faster. You remove the obstacle. But here's the thing that makes this hard—those obstacles exist for a reason. And that reason is always the same.
The pattern: companies that succeed remove obstacles entirely. Companies that fail optimize around them, treating symptoms instead of eliminating root causes.
Those approval hierarchies? They're not just a process. They're authority structures. When you eliminate two approval layers, you're not "streamlining workflow." You're redistributing decision-making power.
The medical writers who used to wait for approval? They now make final calls on clinical accuracy with AI validation. The engineers who used to need a manager's sign-off? They now ship code directly. That's not a technology change. That's an authority change.
And here's the invisible part—the thing that kills most AI transformations before they start: Middle managers whose value came from "being the approval layer" are now... what exactly? This is where most companies lose their nerve. Because the conversation suddenly isn't about "AI adoption" or "workflow efficiency." It's about power. Who makes decisions in the new system?
The failing companies pretend this isn't happening. They talk about "AI fluency training" and "empowering employees" without acknowledging the real question lurking underneath. They produce org charts that look the same as before, with "AI tools" added to everyone's tech stack. They avoid the conversation about what happens to the people whose jobs were fundamentally about coordination and approval.
So what happens? Those middle managers quietly resist. Not through open opposition—that would be career-limiting—but through passive friction. "We need more testing before we can eliminate that approval step." "I'm not sure we can trust AI-generated outputs for something this critical." "Let's run a longer pilot to gather more data." "We should probably keep the approval layer but make it faster."
Every objection sounds reasonable. Every delay seems prudent. And the transformation dies in committee.
The successful companies face it directly. The utility company didn't just deploy AI for customer service. They redefined what customer service reps do—not "answer calls" but "solve complex problems that AI can't handle." That required a different skill set: higher cognitive demands, more autonomy, more judgment.
So they did three things. First, they increased compensation for the new role with a 15% average bump. Second, they provided training for problem-solving skills, not just "AI tools." Third, they created clear criteria: If you can solve problems AI can't, you're more valuable. If you can't, you need to upskill or transition out.
Was it uncomfortable? Yes. Did some people leave? Yes. But the people who stayed? They were energized. The job got better. Customer satisfaction proved it.
The pharmaceutical company did something similar. Medical writers transitioned from "drafters waiting for approval" to "clinical validators making final calls." That's not a lateral move. That's a promotion—higher skill requirement, higher cognitive load, higher value to the organization, higher compensation. They faced the power shift and redesigned roles around it.
Here's what they understood that the failing companies don't: You're not just automating tasks. You're redistributing authority. And if you try to do that invisibly—if you pretend the org chart stays the same while fundamentally changing who makes decisions—the antibodies activate. The system defends itself.
The pattern: companies that succeed acknowledge authority redistribution and redesign roles accordingly. Companies that fail pretend it's just process improvement and wonder why nothing changes.
Last pattern. This one's subtle but huge. Traditional work has a distance between decision and execution.
Take a legal example. A business unit needs a legal memo on contract structure. They submit a request to the legal department and wait three days for the lawyer's availability. The lawyer drafts the memo—five hours. Review cycle with revisions takes two days. Total time: one week. And critically, value is created far from when it was needed.
By the time the legal analysis arrives, the business context may have shifted. The deal moved forward without it. Decisions were made based on best guesses instead of informed analysis. That's the traditional model, designed around scarcity and batch processing.
AI collapses that distance. The business unit prompts AI for a legal memo on contract structure. AI drafts in ten minutes using company templates and precedents. The business unit validates and edits. Total time: thirty minutes. Value created at the moment of need.
This isn't just "faster." It's fundamentally different. When legal analysis happens at the moment you need it, different decisions become possible. You can explore three contract structures instead of picking one and hoping. You can test five messaging approaches before committing. You can model scenarios in real-time during the negotiation instead of preparing for every contingency in advance.
The work changes from "make the best decision with what we have" to "iterate until we find the right answer." The pharmaceutical company didn't just "write reports faster." They reorganized clinical trial workflows so regulatory documentation is generated at the exact moment data becomes available—not weeks later when someone has time to compile it.
When trial data shows an adverse event, the regulatory-formatted incident report generates instantly. The medical team can make protocol adjustments during the trial instead of discovering issues weeks later when documentation surfaces. That's the proximity revolution: value creation moves to the moment of demand.
And when that happens, the entire workflow has to change. You can't just "add AI to the old process." The old process was designed around delay. When you eliminate delay, the process stops making sense.
Think about what happens when you go from "one week for legal analysis" to "thirty minutes for legal analysis." The old workflow, designed for a one-week delay, batches all legal questions into a monthly request for legal review, makes decisions while waiting, adjusts course when legal finally weighs in, and hopes you didn't create too much work in the wrong direction. The new workflow, designed for a thirty-minute turnaround, gets legal analysis when you need it, makes informed decisions in real-time, iterates on contract structure during negotiation, and doesn't batch—it flows.
The entire rhythm changes. Batching made sense when coordination was expensive. When coordination becomes cheap, batching creates artificial delay. This is what McKinsey means when they say "applying AI to individual tasks within legacy processes is unlikely to deliver the productivity gains now possible."
You can't paste AI into a workflow designed for delay and expect transformation. You have to redesign for proximity.
The pattern: companies that succeed create proximity, moving value creation to the moment of need. Companies that fail maintain distance, delivering value later when it's less useful.
Here's what I see when I look at the 60% who fail. They use pattern-matching when they need strategic insight. They apply direct force when they need indirect flow. They try to optimize obstacles when they need to remove them. They pretend authority isn't shifting when they need to acknowledge and redesign. They maintain distance when they need to create proximity.
And here's the thing—these five patterns reinforce each other in a vicious cycle. When you pattern-match and say "this is like cloud adoption," you default to direct action with mandates and metrics. When you use direct action, you hit obstacles like slow approvals. When you try to optimize obstacles instead of removing them—"let's make approvals faster"—you avoid confronting power dynamics because eliminating approval layers feels too political. When you avoid power dynamics, you can't redesign workflows for proximity because you can't give employees direct authority to ship AI-generated work. And when you can't create proximity, the AI value proposition evaporates.
Because "10% faster" isn't transformative. "From one week to thirty minutes" is.
That's why 60% fail. It's not five separate mistakes. It's one systemic pattern. The companies avoid the hard thing (redistributing authority), so they can't remove obstacles. Because they can't remove obstacles, they can't create proximity. Because they can't create proximity, they resort to direct force with mandates to use the AI tools they bought. Because direct force creates resistance, they fall back on pattern-matching with more pilots, more data gathering, more business case building.
And the loop reinforces itself until the transformation dies.
Now flip it. Look at what the pharmaceutical company actually did.
Strategic insight: They studied how other industries handled regulatory documentation—aerospace safety reports, automotive compliance filings, banking regulatory submissions. They absorbed examples from completely different contexts and cleared their minds of "clinical reports have always been written this way." They saw a new combination: AI drafting plus compliance templates plus expert validation equals an entirely new workflow. Then they committed to the redesign despite uncertainty about FDA acceptance.
Indirect action: They didn't force medical writers to use AI. They didn't mandate adoption or track metrics. They redesigned the workflow so that AI-generated drafts became the starting point. Medical writers who resisted didn't need "training"—they saw colleagues shipping reports in 60% less time with 50% fewer errors and asked, "How are you doing that?" Adoption happened through observation, not mandate.
Obstacle removal: They eliminated two approval layers. Not "streamlined"—eliminated. The validation process became: AI generates draft, medical writer validates clinical accuracy, one senior review for regulatory compliance. From five steps to three. From weeks to days. They removed the bottleneck instead of optimizing around it.
Authority redesign: Medical writers went from "drafters waiting for approval" to "clinical validators making final calls." They faced the power shift directly with higher skill requirements, higher value contribution, and higher compensation. They didn't pretend the roles stayed the same. They redesigned them.
Proximity creation: Reports that used to take three to four weeks now take three to four days. Clinical trial teams can make protocol adjustments during the trial based on emerging data, not months later when reports finally surface. Value moved to the moment of demand.
Each pattern reinforced the next. Strategic insight enabled indirect action—"If we redesign the workflow, we don't need to force adoption." Indirect action revealed obstacles to remove—"The approval layers are now pure bottleneck." Removing obstacles forced authority redesign—"If we eliminate approvals, someone needs to make those calls." Authority redesign enabled proximity—"If medical writers validate in real-time, we get reports when we need them."
That's how transformation actually works. Not as five separate initiatives. As one integrated pattern.
So what's actually killing AI transformation at 60% of companies? It's not the technology—both studies confirm AI works. It's not a lack of investment—ninety percent have deployed AI. It's not even resistance to change—people will change when the new way is genuinely better.
The bottleneck is coordination. But not coordination in the sense of "getting everyone aligned" or "change management" or "communication plans." Coordination in a deeper sense: how work gets organized, who makes decisions, where authority lives.
Those approval hierarchies that slow everything down? They're coordination structures. Middle managers coordinate work by being the approval layer. When AI enables proximity—when work can happen at the moment of demand—those coordination structures become obsolete.
But coordination structures are also power structures. The person who coordinates has authority. When you eliminate the coordination layer, you redistribute the authority. That's what the failing companies can't face.
They want the productivity gains—57% automation potential, 80% time savings, $2.9 trillion in value—without the authority redistribution. It doesn't work.
You can't collapse the time from request to delivery from one week to thirty minutes while keeping the same approval hierarchy. The math doesn't work. The old coordination structure becomes pure overhead. You can't give AI agents the ability to handle 40% of customer service calls while keeping the same management layer. What are those managers managing? You can't let engineers ship code directly with AI-powered testing while maintaining the same review structure. What value does the review add?
The coordination structure has to change. And coordination structures are power structures. So, power has to be redistributed.
The 40% who succeed understand this. They face it. They redesign around it. The 60% who fail avoid it. They try to "add AI" without changing who makes decisions. And that's why their transformations die in pilot purgatory.
If you're the one who has to make this happen at your company, here's where to start. This isn't theory. It's a four-week roadmap to find out whether you're in the 40% or the 60%.
Pick one workflow. Something substantial but not mission-critical—customer onboarding, expense approval, contract review, sales forecasting, marketing campaign approval, whatever. Map it completely.
Time from request to delivery. Number of approval steps. List every person whose signature is required. Where is value actually created versus where is it just shuffled? What percentage of time is work versus waiting?
Now answer one question: If you designed this workflow from scratch around AI capabilities, what would it look like?
Don't optimize the existing process. Reimagine it. What if AI could generate the first draft in ten minutes instead of a person taking three days? What if validation happened in real-time instead of through three approval layers? What if the output was available at the moment of need instead of two weeks later?
Sketch the new workflow. Be specific about what AI does, what humans do, what approvals get eliminated—not streamlined, eliminated—whose authority changes, and how time collapses.
If you can't sketch a radically different workflow, you're still pattern-matching. Go study examples from other industries. How does manufacturing handle quality control with automation? How does healthcare manage patient handoffs? How do financial services process transactions? Find examples where the work structure is fundamentally different. Not "10% better"—completely reorganized.
You need three people in one room: the person who controls the workflow (usually middle management, the approval layer), the person who owns the system (IT and technology), and the person who uses the output (the business unit, the end user).
This is the moment of truth. Show them the current state map with time from request to delivery, approval steps, and where value is created versus shuffled. Then show them the reimagined workflow.
Ask one question: "What if we could collapse the time from request to delivery from weeks to hours?"
Now watch what happens. If they start solving for obstacles, you have a coalition. They see the value. They want to figure out how to make it real. They're asking "how do we handle this regulatory requirement?" or "what about this edge case?" or "who validates quality in the new model?" Those are good questions. They're designing, not resisting.
If they explain why it's impossible, you have resistance. And now you know where it is. "We can't eliminate those approvals—compliance requires them." "Users won't trust AI-generated outputs." "We need more data before we can make this kind of change." "This would require sign-off from three levels up."
Every objection will sound reasonable. Most will be false. Compliance doesn't require five approval layers. It requires that the output meet standards. If AI plus expert validation meets standards faster, compliance is satisfied. Users don't need to "trust AI." They need to trust the output. If quality improves, they'll trust it. You don't need more data. McKinsey and Anthropic just gave you the data. The question is whether you're willing to act on it.
The real objection underneath all of this: "This changes my role. I'm not sure I want that." That's the power dynamic surfacing.
If you get resistance, you have two options. Option A: Go around them. Find a different workflow with a different coalition. Not every part of your organization will resist. Option B: Make the authority redesign explicit. "You're right, this changes your role. Let's talk about what that looks like. In the new workflow, you're not the approval layer—you're the exception handler. When AI and expert validation can't resolve something, you make the call. That's higher judgment, higher value. Let's design that role."
Some people will lean in. Some won't. You'll know by the end of Week 2 whether you have a coalition or not.
Don't pilot "AI usage." Pilot the redesigned workflow.
Take one team—could be five people, could be fifty, doesn't matter. Implement the new workflow completely. AI handles what AI does best. Humans validate and handle exceptions. Approval layers are eliminated, not "streamlined"—gone. Authority is redistributed to the people closest to the work. Output happens at the moment of demand.
Run it for two to three weeks and measure four things.
First, the time from request to delivery. Before and after. Second, quality of output—error rate before and after, user satisfaction before and after. Third, what people do with the time saved. This is critical. If people just have "more free time," you've failed. The time should be redirected to higher-value work. What is that work? Be specific. Fourth, role clarity. Can people articulate what they do in the new workflow that AI can't? If not, the role hasn't been redesigned—it's just been automated. That creates anxiety, not energy.
You're not testing if AI works. You're testing if workflow redesign around AI creates proximity. If you can't collapse time by at least 50%, something's wrong. Either the workflow wasn't actually redesigned—you optimized instead of reimagined—or you didn't remove the real obstacles. Approval layers still exist, just faster.
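The 50% bar is easy to score. A back-of-the-envelope sketch with hypothetical pilot numbers (the figures and the helper name are illustrative, not from either study):

```python
def time_collapse(before_hours: float, after_hours: float) -> float:
    """Fractional reduction in request-to-delivery time."""
    return (before_hours - after_hours) / before_hours

# Hypothetical pilot: contract review went from five working days
# (40 hours) to four hours after the redesign.
collapse = time_collapse(before_hours=40, after_hours=4)
print(f"time collapsed by {collapse:.0%}")  # 90%

# Under 50% means you optimized the old workflow instead of
# redesigning it, or the real obstacles were never removed.
assert collapse >= 0.5
```

Note what the metric rewards: a 10% speedup on the old workflow fails this test no matter how good the AI is, which is exactly the point.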
End of Week 3, you have data. Either it worked, or it didn't.
If it failed, you learned something valuable. Either the workflow wasn't fundamentally redesigned (back to Week 1), or the coalition wasn't real (back to Week 2), or the obstacles weren't actually removed (check whether the approval layers are gone or just "streamlined"), or the authority didn't actually redistribute (check who makes final decisions). Fix the real issue and run another two-week cycle.
If it worked, you now have proof. Time collapsed. Quality improved or stayed constant. Users are satisfied. People redirected time to higher-value work. Now you have a choice.
Option A is what 60% do. Declare success. Write a case study. Present to leadership. Create a "Center of Excellence" to "scale best practices." Form a committee to "govern AI rollout." Develop a "change management plan." Watch it die.
Because scaling requires confronting the power dynamics you've been avoiding. Middle managers in other departments see what happened to approval authority in your pilot. They see roles changing. They see coordination structures dissolving. They block, politely, with reasonable-sounding objections. "Our department is different." "We have unique regulatory requirements." "We need more time to assess impact." And two years later, you're still in pilots.
Option B is what 40% do. Use the pilot as proof to have the hard conversation.
"We just cut contract review time from five days to four hours. Quality improved. Users love it. We did it by eliminating two approval layers and giving analysts direct authority with AI validation. If we do this across all legal workflows, we save $2.3M annually and redeploy twelve people from document review to strategic counseling—higher-value work that pays more. But here's what that means for our management structure: We need fewer approval layers and more expert validators. Some roles change significantly. Some roles go away. Some new roles get created. We can design this transition thoughtfully—upskilling people for new roles, creating clear criteria for what adds value, and ensuring people land somewhere better. Or we can avoid it and watch competitors who make this change eat our market share. Which conversation are we having?"
That's the conversation that separates transformation from theater. Most companies won't have it. They'll choose Option A—pilot purgatory, "AI readiness initiatives," three-year rollout plans. And in three years, they'll be asking why they're not seeing the $2.9 trillion in value that McKinsey promised.
If you can go from "AI pilot" to "Here's how authority needs to redistribute" in four weeks, you're in the 40%. If you're still talking about "AI fluency training," "change management," and "cultural readiness" six months later, you're in the 60%.
The difference is whether you're willing to name what's actually happening. You're not implementing AI. You're redesigning who makes decisions.
AI enables proximity—work happening at the moment of demand instead of days or weeks later. That collapses coordination time. When coordination time collapses, coordination structures become overhead. Those coordination structures are also power structures. The people who coordinate have authority. So, proximity enabled by AI forces authority redistribution.
You can face that directly and design for it. That's what the pharmaceutical company, the bank, the utility, and the sales team did. They acknowledged the power shift. They redesigned roles. They increased compensation for higher-value work. They created clear criteria for what adds value in the new model.
Or you can pretend it's just about "adopting AI tools" and "improving efficiency." That's what Company X, Y, and Z did. They're still in pilots.
The companies that say it out loud and design for it are getting 60% time reductions, 50% error reductions, customer satisfaction increases, and revenue growth. The companies that pretend it's just a technology rollout are wondering why their pilots never scale.
Two studies. Same day. Both confirm: AI works. 80% time savings. 57% automation potential. $2.9 trillion in value. Ninety percent of companies have invested. Only 40% are getting results.
The technology isn't the bottleneck. Coordination is the bottleneck. And coordination is a power problem, not a technology problem.
The companies that understand this—that face the authority redistribution directly, redesign roles around it, and create proximity by removing coordination layers—are transforming. The companies that avoid it are stuck in pilot purgatory.
Now you know why. And you know what to do about it.
The question is: Which conversation are you willing to have?
Jonathan Colton