Sam Altman says the Singularity is Here, and it's Disappointingly Boring

Sam Altman just published an essay on his personal blog where he declared that we're past the event horizon—the takeoff has started. But unlike the dramatic sci-fi scenarios we've imagined, the singularity is arriving gently, almost imperceptibly, as AI systems become smarter than humans in many ways while most of us still go about our daily lives largely unchanged. This paradox—revolutionary technology meeting evolutionary adoption—might be the most important story of our time.

There's something almost anticlimactic about Sam's essay. Here's the CEO of OpenAI, the company that sparked the current AI revolution, casually mentioning that we've passed the event horizon of the singularity. No fanfare. No dramatic warnings. Just a matter-of-fact observation in the opening sentence.

We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.

Robots are not yet walking the streets, nor are most of us talking to AI all day. People still die of disease, we still can’t easily go to space, and there is a lot about the universe we don’t understand.

And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.

This understated tone captures something profound about how technological revolutions actually unfold. Not with a bang, but with a ChatGPT prompt about dinner recipes.

The essay's central insight is this paradox: we're living through one of the most transformative moments in human history, yet daily life continues with remarkable normalcy.

If you've been paying attention, this tracks closely with what Sam has shared in the past. Earlier this year, in his essay "Three Observations", he noted how "Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024." But make no mistake, the progress in AI innovation has been extraordinary. Models like o3-pro, Veo 3, and Claude Opus 4 have made significant leaps in just two generations. And now, we're talking about systems that discover net new knowledge—not just recombine existing data.

And to be clear, this isn't just Sam's opinion; a similar sentiment about the trajectory of today's AI technology is now broadly shared by leading researchers across the frontier labs.

But perhaps even more intriguing than his musings on the technology itself were the concrete numbers he shared about what these models actually consume per query:

As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)
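Those equivalences check out with back-of-the-envelope arithmetic. Here's a quick sketch; the appliance wattages (a roughly 1 kW oven element and a 10 W LED bulb) are my own ballpark assumptions, not figures from the essay:

```python
# Sanity-check Altman's per-query figures (0.34 Wh of energy, 0.000085 gallons of water).
# The appliance wattages are assumed ballpark values, not from the essay.
QUERY_WH = 0.34          # watt-hours per average ChatGPT query (from the essay)
OVEN_W = 1_000           # ~1 kW oven element (assumption)
BULB_W = 10              # ~10 W high-efficiency LED bulb (assumption)

query_joules = QUERY_WH * 3600                 # 0.34 Wh is about 1,224 joules
oven_seconds = query_joules / OVEN_W           # about 1.2 s of oven time
bulb_minutes = query_joules / BULB_W / 60      # about 2.0 min of bulb time
teaspoons = 0.000085 * 768                     # 768 teaspoons per US gallon -> ~0.065 tsp (~1/15)

print(f"{oven_seconds:.1f} s of oven, {bulb_minutes:.1f} min of bulb, {teaspoons:.3f} tsp of water")
```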

Sam had previously mentioned that AI costs fall "about 10x every 12 months" and that the cost of intelligence converges toward the cost of electricity. That's progress way beyond Moore's Law territory, which doubled computing power only roughly every two years; a tenfold annual drop in cost is something fundamentally different.
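To make the comparison concrete, here's a rough sketch over a three-year horizon; the 10x-per-year figure is Altman's, while the doubling-every-two-years pace for Moore's Law is the conventional rule of thumb:

```python
# Compare a ~10x/year drop in the cost of intelligence (Altman's figure)
# with a Moore's-Law-style doubling of performance per dollar every ~2 years.
years = 3
ai_cost_drop = 10 ** years            # 10x per year -> 1,000x cheaper after 3 years
moore_gain = 2 ** (years / 2)         # 2x every 2 years -> ~2.8x after 3 years

print(f"AI: ~{ai_cost_drop:,}x cheaper vs. Moore's Law pace: ~{moore_gain:.1f}x")
```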

It's also far cheaper than the prevailing narrative about the cost of running AI models would suggest. And we continue to see the impact play out in real time. Just yesterday, OpenAI made o3-pro (their flagship reasoning model) available via API at 87% less than what o1-pro cost when it launched just months ago. They also dropped the price of the base o3 model by 80%.

And the recursive and compounding nature of this progress is important:

We already hear from scientists that they are two or three times more productive than they were before AI. Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research. We may be able to discover new computing substrates, better algorithms, and who knows what else. If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.

From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. Of course this isn’t the same thing as an AI system completely autonomously updating its own code, but nevertheless this is a larval version of recursive self-improvement.

[...]

The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.

Looking forward, this sounds hard to wrap our heads around. But probably living through it will feel impressive but manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve. (Think back to 2020, and what it would have sounded like to have something close to AGI by 2025, versus what the last 5 years have actually been like.)

Think about it. A decade of research in a year, then a month, then a week. The math gets wild fast.
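As a purely illustrative toy model (my own construction, not anything from the essay), suppose each yearly generation of AI multiplies research throughput by some factor k. The effective research-years accumulated over a decade diverge quickly:

```python
# Toy model of compounding research speedups. Assumes a new AI generation
# every calendar year, each multiplying research throughput by a factor k.
def effective_research_years(k: float, calendar_years: int) -> float:
    """Total research-years accomplished across the given calendar years."""
    return sum(k ** t for t in range(1, calendar_years + 1))

for k in (1.0, 2.0, 3.0):
    print(f"k={k}: ~{effective_research_years(k, 10):,.0f} research-years in a decade")
# k=1 -> 10 (no acceleration); k=2 -> ~2,046; k=3 -> ~88,572
```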

But here's what's really interesting: Altman thinks the hardest technical problems are behind us. The scientific insights that created GPT-4 and o3 will carry us much further. This suggests we're not betting on unknown breakthroughs anymore—we're in execution mode on a known path.

The self-reinforcing loops he describes paint a picture of inevitable acceleration. AI systems improving AI research. Economic value driving infrastructure buildout. Robots eventually building robots. Each loop feeds the others. Once this flywheel gets spinning fast enough, human timelines become irrelevant.

Altman's honesty about the challenges is refreshing. He doesn't hand-wave away the alignment or distribution problems. His two-step solution—solve alignment first, then democratize access—sounds simple but represents the defining challenge of our time. Social media feeds are his example of misaligned AI: incredibly good at understanding your preferences, but optimizing for engagement over wellbeing.

The prescribed path forward is deceptively simple:

"1. Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term..."

"2. Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country."

Simple to state. Monumentally difficult to execute.

But Altman's confidence that "most of the path in front of us is now lit" suggests OpenAI believes they have line of sight to both goals.

Still, what makes this essay remarkable isn't its predictions—plenty of people predict dramatic AI progress. It's the tone of lived experience. When Altman writes about scientists being "two or three times more productive," he's not speculating. When he mentions the cost per ChatGPT query (0.34 watt-hours), he's sharing operational data. This isn't futurism; it's field notes from the frontier.

The "gentle" nature of this singularity might be its most dangerous aspect. Dramatic discontinuities can force adaptation. Gradual-then-sudden transformations can catch societies flat-footed. When Altman notes that "from a relativistic perspective, the singularity happens bit by bit," he's identifying both why it feels manageable and why it might not be.

His closing—"May we scale smoothly, exponentially and uneventfully through superintelligence"—reads like a prayer. Not for the technology to work (he's confident it will) but for humanity to navigate the transition wisely. Given the stakes, it's a prayer worth sharing.

The most telling line might be this: "Intelligence too cheap to meter is well within grasp." For those who understand the reference to nuclear power's broken promise of electricity "too cheap to meter," it's both ambitious and sobering. This time, Altman insists, the physics actually works. The question is whether our institutions, our wisdom, and our humanity can keep pace with our tools.

We're past the event horizon. The comfortable illusion that we can stop or significantly slow this process is gone. What remains is the work of shaping it—ensuring the gentle singularity remains gentle, that the abundance gets distributed, that human agency increases rather than diminishes.

As Altman learned during his brief firing and return, building the technology might be the easy part. Building the human systems to govern it wisely? That's where the real challenge lies.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
