Odyssey, a generative AI startup, just secured an $18 million Series A to build digital replicas of our physical world. The round, led by EQT Ventures, brings Odyssey’s total funding to $27 million since it launched a year ago.
Why it matters: Odyssey's approach could transform how virtual worlds are created for films and games by combining large-scale real-world data capture with generative models.
The challenge: While digital twins and world models are advancing, significant gaps remain. Existing methods like Neural Radiance Fields (NeRFs) and Gaussian Splatting excel at photorealism but offer limited editability and consistency when generating environments from scratch. Odyssey’s goal is to bridge these gaps by creating a unified model that combines real-world grounding, photorealism, and easy editability.
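To make the editability gap concrete: in a NeRF, the scene is stored implicitly in neural network weights rather than as discrete, named objects. The toy Python sketch below is an illustration, not Odyssey's code or a real NeRF implementation; it shows the query interface such a model exposes, and why there is no "chair" or "wall" to select and move, only weights to retrain.

```python
import numpy as np

# Illustrative stand-in for a trained NeRF: the whole scene is encoded
# in opaque network weights, not in named, editable objects.
class TinyNeRF:
    def __init__(self, rng: np.random.Generator):
        # Toy two-layer MLP; a real NeRF adds positional encodings
        # and has millions of parameters.
        self.w1 = rng.normal(size=(6, 64))   # input: xyz position + view direction
        self.w2 = rng.normal(size=(64, 4))   # output: RGB color + volume density

    def query(self, position: np.ndarray, view_dir: np.ndarray) -> tuple[np.ndarray, float]:
        """Map one 3D point (plus view direction) to (rgb, density)."""
        x = np.concatenate([position, view_dir])
        h = np.maximum(x @ self.w1, 0.0)         # ReLU hidden layer
        out = h @ self.w2
        rgb = 1.0 / (1.0 + np.exp(-out[:3]))     # sigmoid -> colors in [0, 1]
        density = np.log1p(np.exp(out[3]))       # softplus -> non-negative density
        return rgb, float(density)

nerf = TinyNeRF(np.random.default_rng(0))
rgb, density = nerf.query(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]))
# To "move the chair" in this scene there is nothing to grab: every object is
# smeared across w1/w2, which is the editability gap Odyssey says it wants to close.
print(rgb, density)
```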
The big picture: The startup, founded by self-driving car veterans Oliver Cameron and Jeff Hawke, is taking an unusual approach to generative AI by sending teams with sensor-packed backpacks to capture the physical world in precise detail.
By the numbers:
- $18 million Series A funding led by EQT Ventures
- $27 million total funding to date
- 13.5K resolution capture capability
- 25-pound backpack system with six cameras and two lidars
Details: Each backpack system works like a mobile scanning station, capturing everything from urban landscapes to remote wilderness areas. The company's AI will use this data to generate customizable virtual environments.
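The article doesn't describe Odyssey's data format, but a minimal Python sketch of what one capture record from the rig described above might look like (all field names and types here are assumptions) helps show why the data is called multimodal: each tick bundles synchronized camera frames and lidar sweeps with a pose.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one capture tick from a six-camera, two-lidar
# backpack rig; field names are illustrative, not Odyssey's actual format.
@dataclass
class CaptureFrame:
    timestamp_ns: int                    # shared clock across all sensors
    pose: tuple[float, ...]              # rig position + orientation (x, y, z, qw, qx, qy, qz)
    camera_images: list[bytes] = field(default_factory=list)  # 6 high-res frames per tick
    lidar_sweeps: list[list[tuple[float, float, float]]] = field(default_factory=list)  # 2 point clouds

    def is_complete(self) -> bool:
        """A training-ready record needs every sensor present."""
        return len(self.camera_images) == 6 and len(self.lidar_sweeps) == 2

# Usage: a record missing a lidar sweep would be flagged before training.
frame = CaptureFrame(timestamp_ns=0, pose=(0.0,) * 7, camera_images=[b""] * 6)
print(frame.is_complete())  # False: only 0 of 2 lidar sweeps attached
```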
“We think it will be impossible for generative models to generate Hollywood-grade worlds that feel alive without training on a vast volume of rich, multimodal real-world 3D data,” the company said of its approach.
What's next: The funding will help Odyssey:
- Scale up data collection operations across California
- Develop new 3D representation technology
- Eventually expand to other states and countries
Between the lines: This project shares DNA with Google Street View but aims higher, creating editable, AI-generated worlds rather than just capturing static images. The startup is betting that training on diverse, rich, multimodal 3D data is the key to unlocking the next generation of immersive experiences in film and gaming.