
Sam Altman didn’t mince words at Snowflake Summit: “Just do it.” His advice to enterprise leaders navigating the fast-evolving AI landscape in 2025 was less about chasing the next big model and more about moving—now. In a fireside chat with Snowflake CEO Sridhar Ramaswamy and Conviction’s Sarah Guo, the conversation was part therapy session, part rallying cry for companies still circling the AI waters. The window for sitting back and observing, they warned, is already closing.
Key Points
- Waiting for AI to “settle” is a losing bet—fast iterators are already winning.
- Agents are evolving from interns to full-on collaborators across workflows.
- More compute isn’t a theoretical lever—businesses should start using it tactically now.
This wasn’t your usual enterprise keynote fluff. When OpenAI’s Sam Altman and Snowflake’s Sridhar Ramaswamy sat down for a chat at Snowflake Summit, they got unusually real about what it takes to survive—let alone thrive—in a post-GPT world.
Altman’s blunt advice? Stop hesitating. “The companies that have the quickest iteration speed and make the cost of making mistakes low—those are the ones that win.” Ramaswamy agreed, adding that curiosity, not caution, is the more valuable trait right now. “A lot of what we assumed about how things work just doesn’t hold anymore,” he said.
In other words, enterprises still treating AI like a futuristic moonshot are already behind.
What changed? According to Altman, it's not just that companies finally figured out how to use the technology — though that's part of it. The models themselves "just work so much more reliably" than they did a year ago. OpenAI's enterprise business has grown dramatically as big companies report the technology can handle tasks they never thought would be possible.
A big driver of that reliability? Context and compute. Retrieval and memory aren’t buzzwords—they’re infrastructure. “The more context you have, the better these systems get,” Ramaswamy noted, especially for agentic applications. Altman pointed to their new coding agent, Codex, as a breakthrough that made him “feel the AGI.” You give it tasks, it disappears for hours, then returns with real work done. Right now, it’s like managing an intern that can work for a couple of hours. Soon, he suggested, it’ll feel like an experienced software engineer that can work for days.
So how close are we to AGI, really? Altman sidestepped hard definitions. Instead, he reframed the whole premise: “It’s not about when we declare victory. It’s about the shockingly smooth exponential we’re on.” He pointed out that if you showed ChatGPT to someone from 2020, "most people would say that's AGI, for sure." The goalposts keep moving as capabilities advance. Ramaswamy compared it to asking whether a submarine swims — technically absurd, but functionally obvious. By the time we call something AGI, we’ll have already moved the goalposts. Again.
Both Altman and Ramaswamy have obvious incentives to encourage rapid AI adoption — OpenAI profits from usage, while Snowflake provides the data infrastructure these AI systems need. Their advice to "just do it" and iterate quickly benefits their bottom lines as much as their customers' operations.
Ramaswamy, who previously built search at Google and the AI search engine Neeva, offered a particularly interesting framework for understanding AI's role. He thinks of search as "setting attention for a model" — a way to help AI systems focus on relevant context rather than getting lost in infinite possibilities. It's search not as information retrieval, but as a cognitive focusing mechanism.
One of the conversation's most intriguing moments came when Guo asked what they'd do with 1,000x more compute. Altman's first answer was refreshingly meta: "I would ask it to work super hard on AI research, figure out how to build like much better models and then ask that much better model what we should do with all the compute." Ramaswamy offered a more poetic answer: use it to decode RNA expression at scale and revolutionize disease treatment.
Those answers reveal how seriously they take the possibility of AI-driven scientific discovery. Both leaders expect next year will mark another inflection point, when companies can assign their most critical problems to AI systems backed by massive computational resources.
Big swings, sure. But the bigger message was simpler: the future is already usable. Not perfect. Not complete. But ready enough for real work.
The companies that get that—and get moving—will have a massive head start by the time everyone else catches up.