Lately, conversations about AI have begun to change tone. Will this AI bubble burst, or will it sustAIn?
After the initial euphoria, awe, and fear of missing out, reality is starting to set in. Enterprises are realizing that while large language models are powerful, they are also expensive. Training costs are high, but inference costs are often the real surprise. Every prompt, every response, and every token carries a compute cost. GPU time, energy consumption, and cloud bills scale roughly linearly with usage, while latency adds friction of its own. These costs accumulate quietly in the background.
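To make that linear scaling concrete, here is a back-of-the-envelope sketch. The per-token prices, function name, and traffic figures are hypothetical placeholders, not any vendor's actual rates:

```python
# Back-of-the-envelope inference cost model. The per-token prices below
# are hypothetical placeholders, not any vendor's actual rates.

PRICE_PER_1K_INPUT_TOKENS = 0.002   # assumed: USD per 1,000 prompt tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.006  # assumed: USD per 1,000 completion tokens

def monthly_inference_cost(requests_per_day: int,
                           avg_input_tokens: int,
                           avg_output_tokens: int) -> float:
    """Estimate a monthly bill; note that it scales linearly with usage."""
    daily = requests_per_day * (
        avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return daily * 30

# 100,000 modest requests a day already lands in the thousands per month.
print(f"${monthly_inference_cost(100_000, 500, 300):,.0f} per month")  # $8,400 per month
```

Double the traffic and the bill doubles with it, which is exactly why the invoice, not the model, is what surprises enterprises first.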
As a result, a growing chorus of experts is predicting that the AI bubble is about to burst.
The volume of articles and analyst reports warning of an imminent collapse continues to rise. Many analysts appear to be betting on AI’s failure. This may sound dramatic, but it also sounds familiar.
We have seen this pattern before, not once or twice, but countless times.
For thousands of years, humanity has greeted every meaningful technological shift with the same mix of fascination, anxiety, and a firm conviction that this time, the bubble will burst. Yet that conclusion feels premature.
Expensive today does not mean economically unviable tomorrow. History does not support the idea that cost shock reliably signals a bubble burst. Technology bubbles tend to collapse when there is no real demand or when the value created is illusory. AI satisfies neither condition.
Demand for AI is not speculative. It is tangible and already embedded in analytics, productivity, compliance, software development, and more. The value may be uneven and occasionally overstated, but it is undeniably real.
What we are witnessing is not a collapse in demand or value. It is reality catching up with expectations. Every major technology goes through a phase where excitement outpaces engineering. AI is firmly in that phase today.
Those who remember the early days of cloud computing will recognize this pattern. Cloud was initially marketed as cheaper than on-premises infrastructure. Security and reliability concerns followed. Then enterprises realized their cloud bills were higher than expected. Soon after, commentators declared that cloud economics were fundamentally broken.
“Cloud computing is a trap.” – Richard Stallman
What followed was not abandonment, but refinement.
Hybrid architectures emerged. Edge computing addressed latency and cost. FinOps introduced discipline and governance. Cloud did not burst. It settled into its role as core infrastructure.
We have seen this cycle many times. In every successful case, the technology did not disappear; it matured.
Electricity shocked society before it rewired the world.
“The use of alternating current is a foolish fad.” – Thomas Edison
The internet did not collapse as predicted. It quietly permeated everything.
“The internet will catastrophically collapse in 1996.” – Robert Metcalfe, 1995
Personal computers did not stay out of our homes, and mobile data did not overwhelm networks. Together they made work portable.
“There is no reason anyone would want a computer in their home.” – Ken Olsen, 1977
Each of these technologies faced similar anxieties around cost, security, and unintended consequences. AI is following the same trajectory.
Yes, large language models are expensive today. But cost shock is not a bubble-burst signal. It is an early-adoption signal.
The industry has already begun making corrections. Smaller language models are emerging as cost-effective solutions for focused use cases. They are efficient and capable, but they are not replacements for large models. They are complements. Engineering will catch up with excitement. And as history suggests, it will eventually surpass it.
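One rough illustration of that complementary pattern is query routing: cheap, focused requests go to a small model, and only the harder ones escalate to a large one. The endpoint names and the complexity heuristic below are hypothetical, a sketch rather than any particular product's API:

```python
# Illustrative router between a small and a large model. The endpoint
# names and the complexity heuristic are hypothetical, not a real API.

def looks_simple(prompt: str) -> bool:
    """Naive heuristic: short prompts without obvious reasoning cues."""
    reasoning_cues = ("why", "prove", "analyze", "step by step")
    text = prompt.lower()
    return len(prompt.split()) < 40 and not any(cue in text for cue in reasoning_cues)

def route(prompt: str) -> str:
    """Send focused queries to the cheap model; escalate the rest."""
    return "small-model-endpoint" if looks_simple(prompt) else "large-model-endpoint"

print(route("Summarize this ticket in one line."))                   # small-model-endpoint
print(route("Analyze why churn rose last quarter, step by step."))   # large-model-endpoint
```

The point is not the heuristic itself but the division of labor: the large model stays in the loop for what only it can do, while the small model absorbs the bulk of the volume and the bulk of the cost.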
Legal challenges around training data will also find resolution. Data will be licensed, regulated, and governed more carefully. AI will become more structured and responsible, not extinct.
We no longer need to be Aryabhata, Isaac Newton, or Albert Einstein to access centuries of accumulated knowledge. A few well-crafted prompts can surface insights built on thousands of years of human thought. The playing field is being permanently leveled.
For a species that has long romanticized effort, struggle, and persistence as prerequisites for insight, this shift will be deeply disruptive. As much as we like to think we love disruption, humans crave the boring. Predictability comforts us. AI is still in its infancy, so feeling anxious about it is only human.
I remain optimistic about AI’s future and its impact on humanity. We are experts at turning difficulty into opportunity, and we have been doing so since the end of the last ice age.
But one practical concern persists. If AI multiplies productivity, and it will, what happens to jobs? More importantly, if large segments of society become unemployed or underemployed, who will purchase the efficiently mass-produced goods AI helps create?
Who knows?
Perhaps AI itself will help answer that question.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” – Roy Amara