- Noah Jacobs Blog
- Posts
- On Post Bubble Hope
On Post Bubble Hope
One possible positive outcome after the bubble pops
2026.03.15
XCLIII
[AI Doomer Porn, Post Bubble Hope, The Point of Future Histories]
Thesis: Ungrounded predictions of the future are easy to make; you might as well make it optimistic.
[AI Doomer Porn]
Citrini Research released a piece a few weeks ago, called “The 2028 Global Intelligence Crisis,” describing a possible future in which, by 2028, white collar labor has been nearly completely eradicated by AI and a huge swath of the US population slips into a permanent underclass.
They claim it's not Bear Porn or AI Doomer fan-fiction at the start, but that doesn't change the fact that it's both of those things. Although, I prefer the term AI Doomer Porn.*
My issue with this is that it's really, really hard to predict the future. And sure, they claim they're not predicting the future, and act like they're exploring some under-explored scenario, but they're really throwing gas on a panic fire without bringing any new facts or real analysis to the table.
The thing is, it's really not that hard to write a compelling narrative that makes it sound like anything could happen.

This was actually one of my biggest gripes when I was in finance: you have all of these people proselytizing crazy predictions in eloquent-sounding chains of events that completely reduce the dimensionality of the situation, cherry pick the variables that help them, and ignore every variable that could serve as a counterpoint.
This piece by Citrini is more or less just that.
The good news, though, is that since it's so easy to write an intelligent sounding ‘scenario’ of the future, I decided I'd write one, too!
Mine's a bit more pessimistic about the short to mid term capabilities of AI, and more real about the current financial situation. But, hey, I’m more optimistic about the human spirit.
Of course, I think a future with nuclear powered spaceships and actually intelligent humanoid robots could be interesting and possibly good**, so all of those things casually make appearances in my prediction.
If I wanted, I could've plausibly included genetically modified horses that become unicorns, but I decided to keep it on the somewhat tame side.
Finally, I'd warn you that this isn't "Libertarian Science Maxxing Futurist Porn," but it basically is, so buckle up.
*If you are above the age of 30 and reading this: we young folk will sometimes put the word "porn" after non-sexual things that are self-gratifying. So "AI Doomer Porn" means content that is self-gratifying in a (hopefully) non-sexual way to "AI Doomers," people who think AI will rapidly create a very bleak society or otherwise imminently end the world.
**In spite of the myriad mythological warnings of Golems, the grim terminal predictions of Asimov around robots, and Frank Herbert’s ban on the Thinking Machines, I do think that any useful societal taboo around humanoid automatons will be derived from experience rather than prophetic warning.
[Post Bubble Hope]
Below is a future history, recounting the fictional events and trends between 2026 and the bottom of the AI Bubble in mid 2028. Note: I cite appropriate sources when I refer to real events that happened before March 15th, 2026.
[Pop!; Post Listing Chaos; The Gang Learns About Efficient Markets; The Future of Work; Scientific Renaissance; A Bright Future]
[Pop!]
It's mid 2028. The analysts are saying we're on the way up from the bottom. It was a rocky year and a half, but consensus is finally coming around.
All the talk of AI this, AI that--it makes it a bit ironic that the thing that popped the bubble was the most notable AI company going public.
At the end of 2026, the Street went berserk when it saw the numbers on OpenAI's S-1, and not in a good way. Everyone knew the company was losing money… but it's one thing to hear about it, and another thing altogether to see an official report of a $30B loss in 2025 and another $35B burnt before 2026 was even over.
Anthropic was no better; the canary in the coal mine should've been when they revealed only $5B in total revenue along with $10B in training and inference costs alone in a court filing in March 2026. But, for some reason, spending at least $2 for every $1 made was overshadowed by the seemingly contradictory shouts that they had hit a $19B run rate sometime in February.
For whatever reason, it took a couple of S-1s for the market to realize that the two companies it had been pinning its hopes on weren't exactly being clear about their finances.
[Post Listing Chaos]
Still, it didn't deter either of the firms from completing the listings—as is so often the case, only death can extinguish hubris with such momentum. There was a big retail pump at first, then a pullback when the institutions didn't pile in. By the time the 6 month lock-ups ended in mid 2027, Anthropic and OpenAI were trading more like penny stocks than tech companies.
The capital in their bank accounts was drying up--revenue was still coming in, but not faster than expenses poured out. The layoffs and price hikes helped, but not enough. Private investors wouldn't chip in; everyone who would've had already been burnt by private 12 figure valuations that quickly evaporated when the shares went public.
There was always talk of a government bailout, and now eyes turned towards Washington. Relief came from a different direction, though: Microsoft tied the knot and gobbled OpenAI up off the open market. In a frenzy, Amazon stole Anthropic from Google, who everyone thought was the shoo-in buyer.
By the time the acquisitions were made, it was already becoming apparent that the 'premium' OpenAI and Anthropic commanded for 'superior' models was at constant risk of erosion by cool heads and mature products that treated LLM inference as a commodity. The acquisitions were closer to very expensive acquihires, an attempt by the lucky big tech companies that made the purchases to command some slight, 6 month edge on a product that was, for the first time, learning what an efficient market can do.
[The Gang Learns About Efficient Markets]
The old model of "throw money at the AI Lab and watch the model get asymmetrically better" was sliding into big tech saying "look at our AI CapEx, of course our models are better."
As the commodity pricing came to light, it turned out none of the inference providers could make more than a 25% premium over the capital and operational costs of owning and renting GPUs, or even providing token inference. But since every one of them had been burning piles of money to keep prices artificially low, even charging a 25% premium on cost represented at least a 2x increase over what the market had gotten used to spending on GPUs and APIs. Demand dropped as entire classes of nice-to-have "AI Wrapper" products that already had thin margins in 2026 became too expensive to justify buying in 2027.
This, of course, came along with quite aggressive consolidation in the space, with the most debt-laden of the inference providers, like CoreWeave, getting acquired in face-saving "mergers of equals" or outright bought and gutted by the few PE and growth firms that still had enough dry powder to syndicate a purchase.
[The Future of Work]
In early 2027, as AI companies could no longer afford to burn piles of money acquiring customers, the productivity suite that was supposed to represent the future of work became unsustainable. Rumors that tools like Claude Code cost as much as $5,000/mo to serve against $200 subscriptions hardened into irrefutable fact. As a result, the cost of the subscription was going up, and the value was going down.
For corporate software teams, the calculus was looking a lot less like "Claude Code replaces a human worker for 1/100th of the cost." It was still undeniably valuable: a team with 5 juniors might be able to do more with 3 juniors + AI Coding Agents for less investment. But it turned out that the bottlenecks were far less often individual developer productivity, and more often bloated organizations incapable of moving at lightning speed, even if the machine could replace the man.
As for the other knowledge worker roles that were supposedly certain to be disrupted by AI? The public nature of the explosions of OpenAI and Anthropic, along with the dramatic increase in cost, conspired to slow sticky enterprise adoption. To be sure, it was clear that over time, many firms with the budget to do so would query all of their documents via a RAG database, and some would even have proactive "agents" flagging missed risks and recommending next steps. It turned out these systems were a bit less "LLM" based than originally anticipated: traditional AI & NLP techniques were being improved upon and used in conjunction with LLMs to lower costs and improve accuracy, to great effect. Regardless, it was now clear that the promised land of AI writing every single one of your emails and making all of your decisions for you was much further away than anyone had thought.
On the other hand, those notorious teams at startups that outmaneuvered incumbents with superior, faster code? Well, they'd keep doing what they always did: ship faster & smarter and stay leaner, running circles around incumbents for specific use cases that continued to spiral out of some original, niche wedge, until they either were acquired or became the incumbent themselves.
Now, the gap was admittedly widening in their favor. It wasn't anything like ServiceNow losing half of its revenue over 2 years... it was more like their market share slowly getting eroded by niche, vertical players that could more rapidly build enterprise grade software that replaced 3 or 4 vendors, charging less than all 4 of them combined, but more than any one of them on its own.
[Scientific Renaissance]
All in all, the world realized that the models, as-is, were good enough for most reasonable use cases--the engineering focus shifted away from the vague notion of "AGI" that had fueled 12 figure investments, and towards efficiency. Techniques like QLoRA became commonplace, and even more aggressive optimization directions were being taken. Models with low latency requirements were being served on CPU.
And it seemed material science was on the way back in, too. The crunch on energy supply chains caused by the electricity demands of the mega data centers, along with the added pressure of the War in Iran and the too-little-too-late realization of the promise of Venezuelan oil, woke the West to the importance of nuclear power.
Even in 2025, reactors were coming back online, but this served more than anything to show how much inefficient red tape was strangling the industry. Partly out of perceived material necessity, partly in an attempt to boost the economy when it started taking a turn for the worse, the government eased the burden on nuclear investment and research. If there was ever any hope of an economical superintelligence, it seemed it might be contingent on harnessing the power of the sun in miniature on Earth. And, as it would turn out many years down the road, this same technology, taken to its natural conclusion, would finally give man the physical energy needed to make ours a truly space faring, interstellar species.
Speaking of material science, the humanoid robots that came into vogue in early 2026 had shipped to mixed reviews. The ancient American promise of a utopia with a faithful and loyal robot in every home was far from being realized by the end of the decade, but the experiment ultimately shed light on issues with our current conceptions of Artificial Intelligence. Namely, statefulness and learning rate: there was something unsettling about a robot 'forgetting' what you told it, or making the same errors repeatedly without learning. And there was something decidedly unintelligent about the energy required to make a large neural network adapt its pathways to new information delivered on the fly.
Since the problems came to light in the context of humanoid robots, interesting and promising research directions opened up around designing AI systems that were even more human-like, aware of a slew of sensory inputs concurrently at all times. It was by no means an AI winter like those of the past; rather, it was a sober renaissance in which new and novel methods that learned from, and often included, the beauty of the transformer were frequently explored, rather than being blotted out with religious zeal by the bright sun of the LLM.
To be sure, these new methods meant to improve the humanoids were already showing promise to one day overcome the shortcomings of LLM use in all fields, not just robotics.
[A Bright Future]
Indeed, 2028 is a different time than ever before, and there are myriad problems unsolved.
But the so-called great knowledge crisis never materialized in the way that was proposed, and the future looks bright in new ways. Yes, the market was particularly challenged, with some sharp retractions through 2027, but the world is coming out of the stressful times with eyes on an unusually bright future. And the successful removal of a number of financial aberrations from the market, without irreversible harm, served to alleviate concern that the market had been damaged beyond repair by crony capitalism.
Now, it’s not uncommon for hope to dominate fear.
If you enjoyed my essay, it’d be cool if you subscribed. I’m here with my writing every Sunday.
[The Point of Future Histories]
While the future I just painted references specific facts and world events and takes them to one possible conclusion, it's also incredibly naive and perhaps even arrogant.
The 2 hours I spent writing and editing it while downing black tea are completely disproportionate to the implications of the claims. I did literally zero incremental research beyond what was in my head. I made it end optimistically, even though I completely ignored any number of systemic issues, including the existence of the Federal Reserve. It would be disingenuous, and maybe unethical, for me to seriously act like this was a well thought out, likely scenario grounded in fact.
Still, I wrote it and published it for two key reasons:
1. To make it crystal clear that it's not very hard to make reasonable sounding predictions of the future. You take present day facts and trends and extrapolate them to some desired conclusion, either briefly addressing or outright ignoring any counter evidence.
2. To show that it's as easy to be optimistic as it is to be pessimistic. Sure, I threw in a recession, but I ended on what to me is a very hopeful note.
I sometimes wonder if one of the factors contributing to a tone of seemingly national pessimism is the lack of a vision worth fighting for that is plausible enough for people to buy into. Yes, there are reasons to have pessimistic future histories, dystopian futures to scare us into our senses, etc. But I don’t really know what the point of the Citrini one is, other than to get you to buy their research, maybe.
So, if I can counterbalance it with one relatively optimistic, long term hopeful future history, even if it isn't comprehensive enough to address every issue, it's a Sunday well spent.
Live Deeply,
