
Runway, the popular AI video startup, has introduced its latest model, Gen-3 Alpha. The model can create highly detailed, realistic video clips of up to 10 seconds with impressive fidelity, consistency, and motion. The unveiling of Gen-3 Alpha follows closely on the heels of Luma Labs' Dream Machine and Kuaishou's Kling, signaling a rapid evolution in the capabilities of AI-driven video generation.
Gen-3 Alpha was trained on videos and images and will power Runway's various tools, including Text to Video, Image to Video, and Text to Image. It will also enhance existing features like Motion Brush, Advanced Camera Controls, and Director Mode.
The model can handle complex scene changes and a wide range of cinematic choices. It was trained with highly descriptive, temporally dense captions, enabling it to generate imaginative transitions and precisely key-frame elements within a scene. This allows a level of detail and control previously unattainable in AI video generation.
What stands out most immediately, though, is the photorealism, especially in human faces, gestures, and emotions. This is easily the closest we have seen a model come to the output generated by OpenAI's Sora.
While the company did not disclose exactly what content the model was trained on, it stressed that the training data was predominantly a proprietary dataset acquired through various partnerships, including its deal with Getty Images.
Runway says the development of Gen-3 Alpha was a collaborative effort involving a cross-disciplinary team of research scientists, engineers, and artists. This approach has ensured that the model is well-suited for creative applications, capable of interpreting a wide range of styles and cinematic terminology.
The startup is also partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha. These customizations allow for more stylistically controlled and consistent characters, tailored to their specific artistic and narrative requirements.
With the generative video space heating up, Runway is hoping to differentiate itself by focusing on building controllable storytelling tools that cater to professional creators.
Importantly, the company has also implemented a new set of safeguards, including an improved in-house visual moderation system and support for the C2PA provenance standard.
Runway will begin the public rollout of Gen-3 Alpha in the coming days, starting with its paid users.