Joe Fuqua
Intelligent Automation Architecture Strategy & Governance
Algorithm & Blues · Weekly
Charlotte, NC · Est. 1988
Governance & Control

AI FOMO: Balancing Ambition and Restraint

Lately, some version of the same question shows up in the first ten minutes of almost every executive conversation about AI. Sometimes it’s polite. Sometimes it’s not.

“Are we behind?” “Are competitors ahead?” “Is the platform team moving fast enough?” “Why are we still piloting while everyone else appears to be transforming?”

The wording shifts, but the anxiety underneath is pretty consistent.

“Are we,” in some way nobody can quite name, “losing?”

It’s a fair question, and an old one in new clothes. Every wave of general-purpose technology has produced some version of this anxiety: railroads, electrification, the internet, cloud computing, and now AI. The technology changes, but human behavior around it stays remarkably consistent.

AI matters; that much is settled. It’s already reshaping software development, customer operations, analytics, research, marketing, risk, and much of what knowledge work has been for the last thirty years.

The critical question now is how much to bet, where to bet it, and how to avoid confusing urgency with wisdom.

Here’s the uncomfortable part: we’ve watched this movie before, and the early movers don’t always make it. Each of these technologies was real in its potential, each first wave overshot, and each produced both winners who built the future and losers who funded it.

Being right about the future, it turns out, doesn’t guarantee you make money from it.

That’s the part boardroom slides leave out.

Right Thesis, Wrong Bets

The British railway boom of the 1840s was built on a solid premise: rail transport really was transformative. It changed commerce, geography, supply chains, labor mobility, and eventually the shape of the modern economy.

What broke was the discipline.

Parliament authorized hundreds of railway schemes, many poorly conceived. Some never ran a single train. Speculative capital chased potential routes until the music stopped, and what remained was a sparser network than the prospectuses had promised, owned mostly by survivors who entered later or bought the wreckage on the cheap.

A technology can be both legitimate and surrounded by bad investment. In fact, the more obviously important it seems, the easier it becomes to justify almost anything attached to it.

The dot-com and telecom bubbles repeated the lesson in different ways. Fiber was laid, data centers were built, and business models were funded on the assumption that demand would arrive fast enough, broadly enough, and profitably enough to justify the buildout. Some of that infrastructure eventually became valuable. Many of the companies that financed it did not.

The infrastructure itself was often fine. The timing, capital structure, assumptions, and business models around it were not.

AI has its own version of this story, and we’re already partway through it. The capability is real and so is the opportunity, but that doesn’t mean every investment attached to it will be justified.

The Redesign Lesson

The railway and internet examples provide a good warning about overbuilding. Electrification teaches something simpler, and probably more relevant.

When factories first replaced steam engines with electric motors, productivity barely moved. The factories were laid out for steam: one central engine, leather belts running everywhere, every machine placed wherever the geometry allowed. Bolting electric motors into that arrangement gave you the same factory, slightly less smoky.

The gains came later, after the factory itself was rethought. Once power no longer had to radiate from a single central engine, machines could be placed where the work actually happened. The belts no longer dictated the floor plan, production lines could run more independently, and workflows could be redesigned around the logic of the work rather than the mechanics of power. The motor mattered, but it was the trigger, not the transformation.

That feels close to where we are with AI.

The analogy is imperfect, as all analogies are. AI is more dynamic, less predictable, and more deeply entangled with judgment, data, and authority than electric power ever was. Still, the basic lesson holds: the larger gains come when the operating model changes around the technology, not when the technology is simply dropped into the current one.

That doesn’t mean incremental use is trivial. AI can already make existing work faster by helping people write, summarize, generate code, draft communications, classify tickets, and accelerate analysis. Those benefits are real. They are just not the full measure of the opportunity.

The larger value probably comes when work itself is redesigned around what AI can do. That’s a harder effort because it doesn’t stay neatly inside the architecture. It reaches into roles, controls, handoffs, accountability, measurement, data quality, operating models, and management habits.

Buying access is the easy part. The actual work is changing how decisions move, how exceptions get handled, how work is reviewed, and how the organization learns from what the system is doing.

This is where many AI programs get ahead of themselves. They fund licenses, copilots, model endpoints, pilots, and vendor platforms faster than they build the capability to use them well. The capability side is less visible but much more important: finding the right opportunities, redesigning workflows, measuring outcomes, managing risk, governing data, controlling model behavior, training employees, retiring weak experiments, and scaling the few things that actually work.

Most companies are buying access while still learning what real capability requires. That’s where FOMO gets expensive.

The Messy Middle

Early AI investment often looks messy before it looks productive. That’s typical of general-purpose technology cycles. The visible expense is the technology, but the real investment is usually everything around it: process redesign, new skills, better data, stronger controls, different operating models, and sometimes entirely new ways of managing the work.

Those costs show up early, in the quarter, in the budget, in delivery friction, and often on the wrong line of the P&L. The benefits arrive later, after the organization has learned enough to use the technology differently from how it first bought it.

That creates a management trap that’s hard to dodge.

Push for immediate ROI on every AI investment and you will inevitably kill some important work too early. Treat AI’s long-term potential as permission to fund anything with a model attached to it and you will keep weak programs alive long after the evidence has turned against them.

Both mistakes can look responsible from the inside. One presents as fiscal discipline, the other presents as vision. Neither gives the organization what it actually needs, which is a clearer way to judge different kinds of AI investments at different stages of maturity.

Some investments should produce near-term value and should be managed that way. Code assistance, document processing, service desk support, internal knowledge retrieval, meeting summarization, test generation, and workflow triage do not need mystical accounting. They should be judged with ordinary business discipline: whether cycle time improved, quality held, rework changed, employees adopted the tool, cost moved, and risk stayed within bounds.

If the answers are mixed, the answers are mixed. The pilot doesn’t need a story arc.
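That ordinary-discipline test can even be written down. Here is a minimal, purely illustrative sketch of a harvest-stage pilot scorecard; every name, field, and threshold (`PilotResult`, the 50% adoption bar, and so on) is a hypothetical placeholder, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class PilotResult:
    """Ordinary business metrics for a harvest-stage AI pilot (all fields illustrative)."""
    cycle_time_change_pct: float   # negative = faster
    defect_rate_change_pct: float  # negative = quality improved
    adoption_rate: float           # share of eligible users actively using the tool
    cost_change_pct: float         # negative = cheaper
    risk_incidents: int            # control breaches during the pilot

def decide(result: PilotResult) -> str:
    """Return a plain decision, not a story arc: scale, iterate, or stop."""
    if result.risk_incidents > 0:
        return "stop: risk out of bounds"
    improved = (result.cycle_time_change_pct < 0
                and result.defect_rate_change_pct <= 0
                and result.adoption_rate >= 0.5)   # illustrative adoption bar
    if improved:
        return "scale"
    # Mixed answers stay mixed: iterate toward a new decision point, or stop.
    return "iterate" if result.adoption_rate >= 0.25 else "stop: no adoption"

print(decide(PilotResult(-18.0, -2.0, 0.62, -5.0, 0)))  # → scale
```

The point of writing it this way is that a pilot either clears explicit bars or it doesn’t; there is no field in the record for a narrative.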

Other investments are about building capability rather than harvesting immediate productivity. AI engineering practices, model evaluation, observability, data readiness, access control, agent identity, policy enforcement, training, and governance usually will not produce a clean ROI number by themselves. They are the operating substrate that keeps scaled AI from becoming brittle.

The mistake is throwing those investments into the same bucket as a summarization pilot or a coding assistant rollout and asking all of it to answer one ROI question. Some spending is meant to produce value now. Some is meant to make future value repeatable, governable, and less fragile.

A Portfolio Mindset

The more reasoned path through AI is portfolio management. Blanket enthusiasm misses it because not every use case deserves funding. Defensive skepticism misses it because some use cases deserve funding now.

Boring, I know. But boring is underrated when the space is full of people using the word “transformational.”

A useful AI portfolio has a few different kinds of bets in it. The first is harvest: near-term productivity opportunities where the work is already understood and the downside is manageable. Code assistance, summarization, search, drafting, classification, service desk triage, and basic analytics acceleration belong here. These should be pursued aggressively and measured honestly. The goal is to find what repeats. A demo is not a result.

The second is redesign, where AI changes the shape of a workflow rather than speeding up one step inside it. Claims intake, credit analysis, regulatory change management, financial crime investigation, software delivery, risk control testing, customer onboarding, and procurement are closer to this category. These efforts take more patience because the unit of value is not the model or the task. It is the workflow itself.

The third is infrastructure, meaning the machinery that lets AI scale safely: data products, evaluation frameworks, model gateways, orchestration, monitoring, access management, audit logging, prompt and tool registries, and reusable deployment patterns. This should be funded as enterprise capability, but not as a monument. Let real demand pull it into existence. Infrastructure without demand becomes architecture theater.

The fourth is options: small, deliberate bets on emerging capabilities such as autonomous agents, synthetic data, simulation, multimodal interfaces, and domain-specific model tuning. These areas are moving quickly, but the operating model is still immature. Time-box the work, keep the spend bounded, and design the effort to produce learning rather than just another internal showcase.

A failed AI pilot can be valuable if it teaches the organization something reusable. It is waste when it produces a demo, a slide deck, and no institutional memory.

The AI Portfolio — four categories: harvest, redesign, infrastructure, and options

Governance as Infrastructure

Control systems usually arrive after the infrastructure, and the gap between the two is where the early failures tend to happen.

Railways needed signaling, scheduling, inspection, standards, and regulation. The internet needed routing protocols, security models, identity layers, monitoring, and operational discipline. Cloud needed cost management, access control, configuration standards, resilience patterns, and shared responsibility models. None of that appeared fully formed at the start. Most of it arrived after the failure modes had already become visible, expensive, or both.

AI is in that phase now.

The current generation of AI systems introduces risk across content, data, decisions, workflow, identity, authority, and memory. Agentic systems widen the problem because they can plan, call tools, retain context, and act across systems. The control question shifts from “did the model say something wrong?” to “did the system do something wrong, and can we tell how, why, and under whose authority?”

That means governance cannot sit outside the delivery system as a late-stage review. By the time a committee sees a use case, many of the important decisions have already been made. The architecture has already shaped where data flows, where decisions happen, where humans intervene, what gets logged, what can be overridden, and what disappears into the machinery.

Governance has to move closer to the work. Policies cannot live only in documents, review forums, and good intentions. They need to show up in the system itself: in access rules, human review thresholds, agent permissions, logging, exception handling, and the way outcomes are measured.

The point is not to build a bigger approval machine. It is to make the important decisions visible, repeatable, and enforceable before the organization has to reconstruct them after something goes wrong.
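What “policies in the system itself” can look like is easier to see in code. This is a deliberately small sketch of policy-as-code: governance expressed as a check in the execution path rather than a late-stage review. Every name here (`Action`, `POLICY`, `is_permitted`, the agent and tool identifiers) is hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    agent_id: str
    tool: str          # e.g. "crm.update", "payments.initiate" (illustrative names)
    risk_tier: str     # "low" or "high"

# Policy lives as data the system evaluates, not prose in a review forum.
POLICY = {
    "allowed_tools": {"agent-7": {"crm.update", "kb.search"}},
    "human_review_tiers": {"high"},
}

def is_permitted(action: Action, human_approved: bool = False) -> tuple[bool, str]:
    """Gate every agent action: permission set, review threshold, audit reason."""
    if action.tool not in POLICY["allowed_tools"].get(action.agent_id, set()):
        return False, "tool not in agent's permission set"
    if action.risk_tier in POLICY["human_review_tiers"] and not human_approved:
        return False, "human review required before execution"
    return True, "permitted"
```

Every decision returns a reason string precisely so that the log can answer “how, why, and under whose authority” without anyone having to reconstruct it afterward.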

Most companies will underinvest here because governance feels like friction. But it only feels like friction when it arrives late. Done well, governance gives teams clearer patterns, reusable controls, approved paths, and known escalation points. The firms that build those capabilities will not have to restart the risk conversation every time a new model, vendor, or agent pattern shows up.

They will move faster because the rules of movement are clearer.

Wrong Both Ways

The danger is that most organizations will be pulled toward one of two bad answers.

One is maximalism: fund everything, push AI into every workflow, replace before understanding, and assume the technology curve will eventually rescue weak execution. When the results are thin, the answer is always more urgency. Move faster. Spend more. Expand the mandate.

That’s how organizations overextend.

The other is defensive skepticism, which is easier to defend in a meeting. Wait for the market to settle. Demand perfect use cases. Treat every hallucination, failed pilot, vendor exaggeration, or weak demo as evidence that the whole category is immature.

That’s how organizations fall behind.

The better path is to move deliberately without standing still. Move quickly where the downside is limited and the learning is high. Move carefully where AI touches regulated decisions, customer trust, financial exposure, cyber risk, or employee displacement. Build shared infrastructure when repeated demand becomes visible. Kill pilots that cannot explain their path to value. Keep funding the capabilities that make future AI work cheaper, safer, and more repeatable.

Think of it as building a bridge while the traffic patterns are still changing. You can’t wait for perfect forecasts because demand is already arriving, but you also can’t pour concrete everywhere someone draws a line on a map. You build where traffic is real, design foundations that can carry future load, and avoid permanent commitments where the ground is still moving.

Practical Moves

The most important work over the next twelve months is not especially glamorous, which is probably a good sign. Most of it is already known. The hard part is doing it consistently while the market is noisy.

Start by making the AI investment portfolio visible. Productivity tools, workflow redesign, enabling infrastructure, and exploratory options are different kinds of bets. They should not be governed by the same expectations, funded under the same story, or judged by the same evidence.

The evidence should match the stage of the work. A 90-day experiment doesn’t need a five-year business case, but it does need a clear learning objective, some signal of adoption, a basic risk assessment, and a decision point at the end. A scaled deployment needs a different level of discipline: business ownership, operating metrics, controls, a support model, and some way to retire the thing when it stops being useful.

The boring middle also needs funding. Training, workflow redesign, evaluation, observability, access control, and governance are not decorations around the AI program. They are the difference between scattered tool usage and real operating capability. If leaders treat them as overhead, they will be the first things cut and the first things missed.

Reversibility matters too. In an immature market, optionality has real financial value. Organizations should be careful about architectural and vendor choices that make it expensive to change models, policies, workflows, or control points later.

Measurement needs the same realism. AI often moves effort before it removes effort. Generated code still gets reviewed. Summaries still get checked. Recommendations still need accountability. If the measurement system only captures first-pass speed, it will overstate the value, sometimes by a lot.

And finally, human capability has to be treated as part of the investment rather than a soft benefit sitting off to the side. AI doesn’t reduce the organization’s need to learn. In most cases, it raises the cost of not learning.

The Reasonable Bet

AI is probably foundational. It is also going to produce a great deal of bad spending. Both things can be true at the same time, and pretending otherwise is how the worst capital decisions get justified.

That is the through-line from every wave that came before. Railways mattered, even though many railway companies did not survive. The internet mattered, even though most dot-coms did not. Fiber mattered, even though many of the firms that financed the buildout never made it to the harvest. Electrification mattered, but the real gains came from rebuilding factories around the new technology, and that took much longer than the first wave of enthusiasm suggested.

The companies that navigate AI well won’t be the ones that avoid risk. They will be the ones that take the right risks in the right order. They will move early enough to learn, but not so broadly that learning becomes indistinguishable from waste. They will invest in use cases, but also in the machinery that lets use cases scale. They will treat governance as infrastructure rather than paperwork. They will understand that some AI investments should pay back this year, while others should make next year’s investments cheaper, safer, and more effective.

FOMO isn’t irrational. Missing a general-purpose technology shift is a real strategic risk, and often a reputational one. But overextension is real too. History is full of companies that saw the future clearly and still lost money getting there too early, too rigidly, or with too much confidence in the first version of the map.

The goal is not to be fearless.

The goal is to remain hard to knock off balance.

References

McKinsey & Company. The State of AI: How Organizations Are Rewiring to Capture Value. March 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

Gartner. Gartner Hype Cycle Identifies Top AI Innovations in 2025. August 5, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-08-05-gartner-hype-cycle-identifies-top-ai-innovations-in-2025

Boston Consulting Group. The Widening AI Value Gap: Build for the Future 2025. September 2025. https://media-publications.bcg.com/The-Widening-AI-Value-Gap-Sept-2025.pdf

Paul A. David. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2, May 1990. http://digamo.free.fr/david90.pdf

Erik Brynjolfsson, Daniel Rock, and Chad Syverson. “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies.” American Economic Journal: Macroeconomics 13, no. 1, January 2021. https://www.aeaweb.org/articles?id=10.1257/mac.20180386

National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1, January 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

KKR. Beyond the Bubble: Why AI Infrastructure Will Compound Long after the Hype. 2025. https://www.kkr.com/insights/ai-infrastructure

Andrew Odlyzko. “Collective Hallucinations and Inefficient Markets: The British Railway Mania of the 1840s.” University of Minnesota working paper, 2010. https://www.dtc.umn.edu/~odlyzko/doc/mania01.pdf
