The restructuring will see most team members shifted to Meta’s burgeoning generative AI team, while others will join the AI infrastructure unit.
Building on Meta's prior work in image and video generation, these models deliver high-quality, diffusion-based text-to-video generation and image editing controlled by text instructions.
The policy will take effect in the new year and will apply globally.
With on-premises solutions, customers maintain complete data sovereignty and model governance, helping ensure IP protection and regulatory compliance while reducing the risks associated with public cloud options.
The breakthroughs aim to enable real-world collaborative robots and augmented reality assistants.
The Llama Impact Grants program seeks to identify and support compelling applications of Llama 2 that provide social benefits across the globe.
The new capabilities allow advertisers to easily generate multiple variations of ad images and text, helping them tailor campaigns to Meta's different platforms and audiences while saving significant time and resources.
From a versatile conversational assistant to quirky bot personalities, Meta is bringing generative AI to the billions who use its apps.
The key innovation behind Emu is a technique called "quality tuning" that dramatically enhances the visual appeal of images produced by AI text-to-image models.
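Meta has not published the implementation details of quality tuning, but the core idea, fine-tuning a pretrained generator on only a small, aesthetically curated slice of the data, can be sketched roughly as follows. The scoring function, dataset fields, and `train_step` call here are all hypothetical placeholders, not Meta's code:

```python
# Hypothetical sketch of "quality tuning": continue training a generative
# model only on the highest-quality examples, as ranked by an aesthetic
# score. In a real pipeline the scorer would be a learned aesthetic model.

def aesthetic_score(example):
    # Placeholder: assumes each example carries a precomputed quality score.
    return example["quality"]

def quality_tune(model, dataset, keep_fraction=0.001, steps=1):
    # Rank the dataset by aesthetic score, best first.
    ranked = sorted(dataset, key=aesthetic_score, reverse=True)
    # Keep only a tiny top fraction (at least one example).
    keep = max(1, int(len(ranked) * keep_fraction))
    curated = ranked[:keep]
    # Fine-tune for a small number of passes over the curated subset.
    for _ in range(steps):
        for example in curated:
            model.train_step(example)  # placeholder training call
    return curated
```

The point of the sketch is the data selection, not the training loop: the model sees far fewer, far better examples than in pretraining, which is what the reporting credits for Emu's improved visual appeal.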
The features leverage generative AI to enable more creative expression and personalized assistance for users.
Following the AI breakthroughs from OpenAI and other tech giants, Meta rushed to release its own large language models that it hoped could compete. But with limited compute power to go around, the teams within FAIR clashed over resource allocation, sparking resentment and defections.
BELEBELE represents the largest parallel multilingual benchmark ever created specifically for reading comprehension.
By opening up DINOv2 under a more permissive license and introducing FACET for fairness benchmarking, Meta is setting a positive example for responsible AI development.
Code Llama, which is built on top of Llama 2, is free for research and commercial use and outperforms other open-source, code-specific LLMs.
The project represents Meta's effort to build a single unified multilingual system that can serve all language translation needs.
This will give Watsonx users access to Llama 2, a 70 billion parameter generative AI model fine-tuned by Meta for natural language tasks.
The company says responsible innovation can’t happen in isolation. By open sourcing its research and resulting models, it hopes to ensure that everyone has equal access.
The research represents a potential game-changer for material science, enabling catalyst simulations at unprecedented speeds and scales.
With the release of Llama 2, Meta gives developers worldwide unprecedented access to a state-of-the-art foundation AI model, opening new frontiers for exploration and innovation.
As AI reshapes our world, we find ourselves paradoxically dependent on the benevolence of tech giants; their decisions to open-source, or not, are defining the future of this transformative technology.
With its versatile capabilities and improved performance, CM3leon represents a significant step towards higher-fidelity image generation and understanding, paving the way for enhanced creativity and applications in the metaverse.
This new model is a significant leap forward in AI speech synthesis, demonstrating its versatility and efficiency by outperforming existing models on a range of tasks, including some it was not specifically trained for.
The MMS project is more than another AI breakthrough. It's a landmark effort to protect the diversity of human languages and take communication technology to the next level.
Meta's latest AI model, ImageBind, learns from six different modalities to analyze and understand information holistically, pushing the boundaries of multimodal AI systems.
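ImageBind's central idea is mapping every modality into one shared embedding space, so that any pair of modalities can be compared directly. A toy sketch of what that enables, cross-modal retrieval by cosine similarity, is below; the vectors and modality labels are made up for illustration and are not ImageBind's actual embeddings:

```python
import math

# Toy illustration of a shared embedding space: once different modalities
# (audio, images, text, ...) are projected into the same vector space,
# retrieval across modalities reduces to nearest-neighbor search.

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, candidates):
    # Return the key of the candidate embedding closest to the query.
    return max(candidates, key=lambda k: cosine(query, candidates[k]))

# Hypothetical outputs of per-modality encoders in the shared space.
audio_dog_bark = [0.9, 0.1, 0.0]
image_embeddings = {
    "photo_of_dog": [0.8, 0.2, 0.1],
    "photo_of_beach": [0.0, 0.1, 0.9],
}
```

Here `nearest(audio_dog_bark, image_embeddings)` would pick `"photo_of_dog"`: the bark's embedding sits closest to the dog photo's, which is the kind of audio-to-image association a joint embedding space makes possible.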