rabbit Reveals New Details on r1 AI Device, Including ElevenLabs Voice Integration

rabbit, the creators of the highly anticipated r1 AI device, have revealed new details about their device ahead of its imminent release. The company has partnered with ElevenLabs, a leading AI audio research company, to provide lifelike voice interactions for the r1.

A first look at rabbit Inc’s AI-Powered r1
Physically, the r1 sports a unique design with a rotating 360° camera for computer vision, a push-to-talk button, and an analog scroll wheel. It supports 4G LTE networks for global connectivity.

The r1, first introduced at CES in January, has undergone extensive updates and improvements leading up to its launch. According to rabbit's founder and CEO, Jesse Lyu, the device is designed to be a standalone, voice-controlled personal assistant that can seamlessly execute commands across apps. At the heart of the r1 is the Large Action Model (LAM), which understands multi-level commands and can handle complex tasks, from ordering food to generating images based on the contents of your fridge, as seen through the device's rabbit eye camera.

ElevenLabs, known for their emotionally rich and realistic AI voices across 29 languages, is providing their proprietary audio AI models to power the r1's voice interactions. "We're working with rabbit to bring the future of human-device interaction closer," said Mati Staniszewski, CEO of ElevenLabs. "Our collaboration is about making the r1 a truly dynamic co-pilot."

The first batch of r1 devices is set to leave the factory on March 31, with deliveries to U.S. customers expected to begin around April 24. On launch, the r1 will feature a range of capabilities, including conversation with LLMs, up-to-date search with Perplexity, AI vision, bi-directional translation, and note-taking with AI summaries. LAM-powered features will include music, generative AI, rideshare, and food ordering.

rabbit is taking a careful approach to introducing LAM to the public, focusing initially on the most commonly used apps and conducting thorough quality testing. Over time, LAM is expected to offer many new features and improved latency. The company is also working on an experimental "Teach Mode," which will allow users to train their own "rabbits" to perform specific tasks on niche apps and workflows, even without coding experience.

Updates to the r1 will be largely effortless for users, with many happening in the cloud. The companion rabbit hole portal will help users manage their connected services, with rabbits able to operate apps on the user's behalf while respecting those apps and reporting any issues accurately.

rabbit is collaborating with large language model vendors like Perplexity, Anthropic, and OpenAI to provide the most suitable models for understanding user intentions. The company is also actively contributing to open research and engineering in areas such as web and app automation frameworks, agent interaction benchmarks, and on-device compute.

It's worth noting that rabbit isn't the only player in this emerging market; next month, Humane is also set to begin shipping its Ai Pin.