Why "Wait and See" is the Most Expensive Strategy in the AI Era
By Luke Maslow
I started my career in tech when there was still a comfortable middle ground between “First Mover” and “Fast Follower.” You could let somebody else take the bruises, learn from their mistakes, then move in with a safer, cheaper version of the same idea.
With AI, that middle ground is disappearing.
The gap between early adopters and everyone else is no longer a manageable distance. It’s turning into a chasm where every quarter of delay makes the climb steeper.
The “right time” to integrate AI into your core business, then, is now. Acting today prevents the explosive talent and data debt that becomes harder (and eventually impossible) to pay down.
As the chart below illustrates, we are witnessing The Divergence, a phenomenon that breaks the traditional rules of technology cycles.

The gap between early adopters and late adopters is no longer a linear distance; it is an accelerating, exponential chasm. If you are waiting for the "perfect time" to integrate AI into your core business processes, you aren't just falling behind. You are likely incurring a "talent and data debt" that may eventually become too costly to overcome.
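The linear-versus-exponential distinction can be made concrete with a toy model. The growth rates below are illustrative assumptions, not measurements from any survey; the point is only the shape of the curves, not the specific numbers.

```python
# Toy model of "The Divergence": a late adopter improving by a fixed
# amount each quarter versus an early adopter whose gains compound
# (the data-flywheel effect described in this article).
# All numbers are illustrative assumptions, not measurements.

def late_adopter(quarters, gain_per_quarter=1.0):
    """Capability grows by a fixed amount each quarter (linear)."""
    return 100 + gain_per_quarter * quarters

def early_adopter(quarters, growth_rate=0.05):
    """Each quarter's gains build on the last (compounding flywheel)."""
    return 100 * (1 + growth_rate) ** quarters

for q in (4, 8, 16):  # 1, 2, and 4 years out
    gap = early_adopter(q) - late_adopter(q)
    print(f"after {q} quarters, gap = {gap:.1f} capability points")
```

With these made-up rates, the gap after four years is more than double the gap after two: the chasm widens faster the longer the delay, which is exactly why a "wait and see" quarter is never free.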
The Early Adopter Engine: Data Gravity, Muscle Memory, and Refined Models
The top curve of our chart represents the early adopters. Their advantage doesn’t come from having shinier tools; it comes from how they’ve reorganized their businesses around those tools.
Data Gravity: Over a decade ago, Dave McCrory coined “Data Gravity” to describe how data behaves like mass: as it grows in one place, it pulls in more applications, services, and even more data. Early adopters started building serious data foundations years ago. Today that foundation behaves like a flywheel: better models create better experiences, better experiences drive more usage, and more usage produces richer data that makes the next generation of models even stronger. Sound familiar? It should: this is exactly how AI is driving rapid progress, regardless of industry. The more AI knows, the more it can assist.
Operational Muscle Memory: You cannot buy organizational learning off the shelf. McKinsey’s 2025 State of AI survey found that only a small fraction of companies are getting significant value from AI, and those “high performers” are nearly three times as likely as others to say they have fundamentally redesigned individual workflows around AI. They are not just sprinkling AI on top of existing processes; they are rebuilding how work actually gets done.
That doesn’t happen in a single project. It comes from years of failing in low‑risk areas, tightening data governance, experimenting with new roles, and teaching people how to collaborate with AI instead of working around it. That accumulated “muscle memory” is an intangible asset you can’t swipe a credit card to acquire.
Refined Models: A late adopter can subscribe to a generic large language model today. An early adopter has already spent the last 18 to 24 months fine‑tuning models on proprietary, cleaner datasets and wiring them into their domain‑specific workflows. Combined with their data gravity and operational learning, they’re not using the same model; they’re running an entirely different playbook.
The Late Adopter Penalty: A Debt That Compounds
On the bottom curve of the chart is the late adopter. They may feel like they’re “holding steady,” but they’re actually regressing.
Every quarter they delay, they add to their talent debt. McKinsey’s 2025 workplace report shows that AI‑specific skills gaps are now the single most cited barrier to adoption, with 46% of leaders naming talent as the primary drag on progress. At the same time, a Harvard Business School analysis of AI’s talent shift warns that the most attractive roles and experiences are clustering in organizations that are already experimenting aggressively with AI.
In other words, the people you want (the curious operators, the builders, the pragmatic innovators) are pulled toward the companies where the real AI work is happening. You can still hire, but you’re fishing from a shallower talent pool, and you’ll pay more to lure people away from the environments where they’re learning the fastest.
There’s also a structural dependency problem. A 2026 Harvard Business Review article points out that many senior leaders have rolled out AI tools and pilots but have not redesigned workflows, roles, or governance so AI is actually embedded in everyday decision‑making. A companion HBR/Slalom study underscores the disconnect: 68% of leaders and employees say they can keep pace with AI, yet 93% report that underdeveloped skills and inadequate training are holding them back. That is the late‑adopter pattern in a nutshell: heavy investment in vendors and licenses, light investment in internal capability. You end up with a stack of “black box” tools, limited ability to shape or extend them, and a roadmap that moves at your vendor’s pace instead of your own.
The "Catch-Up" Trap
The middle of the chart is the most dangerous place: the zone where you’ve decided you “need to catch up,” but the cost of catching up keeps rising.
With older technology waves, moving from paper to Excel or from on‑prem servers to the cloud, the cost of catching up was painful but fairly predictable. You could budget for licenses, projects, and training, and close the gap over a few cycles.
AI doesn’t behave that way. Here’s why:
High performers are using AI to redesign workflows, not just automate tasks. McKinsey’s 2025 survey shows that half of AI high performers intend to use AI to transform their businesses, and they are 2.8–3x more likely than others to report fundamental workflow redesign. While you are still scoping “Phase 1,” they are already reinvesting gains from Phase 1 into Phases 2, 3, and 4.
At the top end of the market, the barrier to entry is rising fast. Technical analyses of frontier models estimate that the amortized cost of training leading systems has been growing roughly 2 to 3x per year since 2016, putting models like GPT‑4 and Gemini Ultra in the tens to hundreds of millions of dollars to train, with billion‑dollar runs projected before the end of the decade. You do not need to train a frontier model yourself, but the fact that a small group of players can afford to do so tells you how quickly the leaders are pulling away. The Stanford Human-Centered AI (HAI) 2024 Index Report confirms the growing barrier to entry that many leaders are not yet seeing.
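A quick back-of-envelope check shows why a 2 to 3x annual growth rate is so punishing. The base cost below is an illustrative assumption, not a figure from the cited analyses; only the compounding arithmetic matters here.

```python
# Back-of-envelope for the claim that frontier training costs have
# grown roughly 2-3x per year since 2016. The $100k base cost is an
# illustrative assumption, not a figure from the cited analyses.

def projected_cost(base_cost_usd, annual_multiplier, years):
    """Cost after `years` of compounding growth at `annual_multiplier`."""
    return base_cost_usd * annual_multiplier ** years

base = 1e5  # assume ~$100k for a leading 2016-era training run
for mult in (2.0, 2.5, 3.0):
    cost_2024 = projected_cost(base, mult, 8)  # 2016 -> 2024
    print(f"at {mult}x/year: ~${cost_2024:,.0f} by 2024")
```

Even from a modest starting point, eight years of 2 to 3x growth lands in the tens to hundreds of millions of dollars, which is consistent with the published estimates for GPT‑4‑class runs.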
Put simply: the longer you wait, the more you have to do just to get back to the starting line, all while the front of the pack is already running the race. There is a point at which “catching up” becomes a story you tell yourself rather than a realistic, strategic plan.
What to Do Instead of Waiting
If I had to share a single piece of advice, it would be this: do not panic, and do not try to launch a moonshot. Just move forward one step at a time.
If you want to stay on the upper curve over the next decade, here are moves you can make now that don’t require a seven-figure-plus budget:
Clean and connect your data. You don’t need a perfect data lake, but you do need to know where your critical data lives, who owns it, and how AI systems will access it. This is the fuel for every meaningful use case, and it’s where early adopters quietly built their edge.
Pick a few real workflows, not “toy” use cases. High performers aren’t just rolling out chatbots; they are redesigning underwriting, claims, customer onboarding, planning, and other core processes with AI embedded in the middle. Start with one or two workflows where you can clearly measure cycle time, error rate, or revenue impact.
Invest in people, not just tools. The HBR/Slalom research is blunt: tools are arriving faster than skills and training, and that gap is already limiting outcomes. Build structured enablement into your roadmap: training, sandbox environments, communities of practice, and clear expectations for how roles will evolve as AI becomes part of daily work. Have you heard the term “prompt engineering” but aren’t sure what it means? Find resources to help you target the skills you need to build, borrow, or buy. You may be surprised to hear this, but unlike in a typical Copilot implementation, production prompts are often hundreds of lines long, or more. It takes expertise to know how to do this.
Design to own your playbook. Use vendors where it makes sense, but be deliberate about what you want to own and how you’ll avoid being boxed in by a single provider’s roadmap. HBR argues that leaders need to assess data, architecture, and governance first, then decide what to build versus buy based on long‑term value, not just short‑term speed. While I’d agree, I must refer back to point 3 above, which is people: they are just as important to where you start.
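To make the prompt-engineering point above less abstract, here is a minimal sketch of how a long production prompt is often assembled from named sections rather than typed as one ad-hoc request. The section names, ordering, and content are hypothetical examples, not any vendor's required format.

```python
# Minimal sketch: a "production" prompt assembled from named sections.
# Section names and content are hypothetical examples; real prompts
# often run to hundreds of lines across sections like these.

SECTIONS = {
    "role": "You are a claims-triage assistant for an insurance team.",
    "context": "Policy data and claim history are provided below.",
    "rules": "\n".join([
        "- Cite the policy clause for every recommendation.",
        "- Escalate to a human reviewer if coverage is ambiguous.",
        "- Never invent policy terms that are not in the context.",
    ]),
    "output_format": "Respond as JSON with keys: decision, rationale, citations.",
}

def build_prompt(sections):
    """Join named sections into one system prompt, in a fixed order."""
    order = ["role", "context", "rules", "output_format"]
    return "\n\n".join(f"## {name}\n{sections[name]}" for name in order)

print(build_prompt(SECTIONS))
```

Treating the prompt as versioned, reviewable structure like this, rather than a one-off sentence, is one concrete example of the “own your playbook” discipline described above.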
None of these moves will win the race in a week, a month, or a quarter. What they do is change your trajectory: they keep you on the upper curve where each experiment, each trained team, and each cleaned dataset makes the next move easier instead of harder.
If you read this and thought, “This makes sense, but I don’t know what the next step is beyond giving my team access to ChatGPT or Copilot (pick your LLM),” you are not alone. Your next step is not to copy someone else’s AI roadmap; it’s to design one that fits your data, your workflows, your team’s capabilities, and your risk tolerance.