Runway continues its rapid expansion of AI video features, launching advanced camera controls for its Gen-3 Alpha Turbo model just days after releasing its Act-One animation tool. The new feature lets you set both the direction and intensity of camera movements in AI-generated videos, giving you more precise control over how scenes unfold.
You can now direct the camera in six ways: horizontal and vertical movement, panning, tilting, zooming, and rolling. Each movement type can be fine-tuned with intensity settings ranging from subtle shifts to dramatic sweeps through a scene. Want to slowly arc around a subject? Combine a horizontal move with a gentle pan. Need to reveal a broader landscape? Pair an upward tilt with a smooth zoom out.
These controls work alongside text prompts, which help guide the AI in generating consistent scene content as the camera moves. For instance, if you're planning a dramatic zoom-out shot, you can describe the wider scene that should be revealed, helping ensure the AI generates exactly what you're looking for.
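To make the pairing concrete, here is a rough sketch of how a camera move and its accompanying prompt could be written down together. Runway exposes these controls through its web interface rather than a published schema, so every field name and value below is a hypothetical illustration of the idea, not the product's actual format.

```python
# Illustrative sketch only: the request shape, field names, and intensity
# values are assumptions for explanation, not Runway's actual interface.
dramatic_zoom_out = {
    "model": "Gen-3 Alpha Turbo",
    # The prompt describes the wider scene the camera move should reveal.
    "prompt_text": (
        "A lone lighthouse on a rocky coast; as the camera pulls back, "
        "storm clouds and a crashing sea fill the widening frame."
    ),
    # Each of the six moves takes a signed intensity; 0 means no movement.
    "camera_control": {
        "horizontal": 0.0,   # truck left/right
        "vertical": 0.0,     # move up/down
        "pan": 0.0,          # rotate left/right
        "tilt": 1.5,         # slight upward tilt
        "zoom": -6.0,        # strong pull-back to reveal the wider scene
        "roll": 0.0,         # rotate around the lens axis
    },
    "duration": 10,          # seconds of generated video
}
```

The point of pairing the two is that the movement settings decide how the camera travels, while the prompt decides what the model fills the newly revealed space with.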
The feature builds on Gen-3 Alpha Turbo, Runway's credit-efficient AI video model that trades some quality for faster generation times and lower costs. At 5 credits per second of generated video, it's being positioned as a more accessible option for rapid iteration and experimentation with these new camera moves.
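At that rate the arithmetic is simple; the short helper below just multiplies clip length by the 5-credits-per-second figure cited above to show what a round of test generations would cost. The clip length and batch size are made-up examples.

```python
CREDITS_PER_SECOND = 5  # Gen-3 Alpha Turbo rate cited above

def clip_cost(seconds: int) -> int:
    """Credits consumed by one generated clip of the given length."""
    return CREDITS_PER_SECOND * seconds

# Example: iterating on a camera move with five 10-second test clips.
print(clip_cost(10))      # 50 credits per clip
print(5 * clip_cost(10))  # 250 credits for the whole batch
```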
This release comes just a week after Runway launched Act-One, which lets you transfer facial expressions from real performers to AI characters. The quick succession of launches signals the company's push to give creators more granular control over AI-generated video content.
The advanced camera controls are available now in Runway's web interface, accessible through a dedicated camera control panel when using the Gen-3 Alpha Turbo model. You can also check out Runway's help center for more details.