Runway's Act-One Lets You Animate AI-Generated Characters with a Video

Runway has introduced a powerful new feature called Act-One that transfers facial expressions and head movements from real-world performers to AI-generated characters within its Gen-3 Alpha platform. Unlike traditional animation pipelines that require complex rigging and motion-capture equipment, Act-One needs just two inputs: a single video of a performance and a character image.

It is a radical departure from standard animation workflows, and even from newer prompt-based ones. You can record a performance on any camera, even a smartphone, and Act-One will map it onto your AI-generated character across multiple camera angles and focal lengths. The system maintains a consistent character appearance while preserving the subtle nuances of the original performance.

"We're now beyond the threshold of asking ourselves if generative models can generate consistent videos," says Runway CEO Cristóbal Valenzuela. "The difference lies in what you ultimately build and how you think about its applications."

What stands out about Act-One is its ability to maintain performance fidelity across different character designs and styles. The system faithfully preserves eye-lines, micro-expressions, pacing, and delivery, even when translating them to characters whose proportions differ dramatically from the original performer's. This allows for unprecedented character depth in generated performances.

For filmmakers and content creators, this opens up new possibilities in cinematic storytelling. You can now shoot dialogue scenes using a single actor performing multiple roles, with Act-One transferring each performance to unique AI-generated characters while maintaining the original delivery's authenticity. The tool's ability to work across various camera angles adds to its versatility in creating dynamic scenes.

Runway has implemented comprehensive safety measures, including systems to prevent unauthorized content creation with public figures and tools to verify voice usage rights. The company plans to continuously monitor the platform to prevent misuse.

Act-One is being rolled out gradually to Runway users starting today, requiring Gen-3 Alpha model credits for access. This release marks a significant shift in character animation, making sophisticated performance transfer accessible without the need for traditional animation infrastructure.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
