6 Ways AWS and NVIDIA Are Partnering to Drive Generative AI Innovation

At GTC 2024, AWS and NVIDIA announced a multi-faceted partnership aimed at accelerating generative AI innovation. The collaboration spans advanced infrastructure, software, and services that enable customers to unlock new AI capabilities and push the boundaries of what's possible in the field. Here are six key ways AWS and NVIDIA are joining forces:

  1. **AWS to Offer NVIDIA Grace Blackwell GPU-based Instances and DGX Cloud.** AWS will offer NVIDIA's cutting-edge Blackwell GPU platform, including the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs. Combined with AWS's powerful networking, advanced virtualization, and hyper-scale clustering, this integration lets customers build and run real-time inference on multi-trillion-parameter large language models (LLMs) faster and at larger scale than previous-generation NVIDIA GPUs on Amazon EC2.
  2. **Enhanced AI Security with AWS Nitro System, Encrypted EFA, and Blackwell Encryption.** The partnership prioritizes AI security by integrating the AWS Nitro System, Elastic Fabric Adapter (EFA) encryption, and AWS Key Management Service with Blackwell encryption. This combination gives customers end-to-end control over their training data and model weights, ensuring stronger security for AI applications on AWS.
  3. **Project Ceiba: Building One of the Fastest AI Supercomputers on AWS.** AWS and NVIDIA are collaborating on Project Ceiba to build one of the world's fastest AI supercomputers, hosted exclusively on AWS using DGX Cloud for NVIDIA's own AI research and development. Project Ceiba will feature 20,736 GB200 Superchips delivering 414 exaflops of AI compute, a 6x performance increase over earlier plans based on the Hopper architecture.
  4. **NVIDIA's AI R&D to Leverage Project Ceiba's Unprecedented Computing Power.** NVIDIA research and development teams will use Project Ceiba to advance AI for LLMs, graphics, simulation, digital biology, robotics, self-driving cars, NVIDIA Earth-2 climate prediction, and more. This unprecedented computing power will help NVIDIA propel future generative AI innovation.
  5. **Amazon SageMaker Integration with NVIDIA NIM Inference Microservices.** AWS and NVIDIA are offering high-performance, low-cost inference for generative AI by integrating Amazon SageMaker with NVIDIA NIM inference microservices. This combination allows customers to quickly deploy foundation models optimized for NVIDIA GPUs on SageMaker, reducing time-to-market for generative AI applications.
  6. **Accelerating AI Innovation in Healthcare and Life Sciences.** The collaboration extends to healthcare and life sciences with the expansion of computer-aided drug discovery, using new NVIDIA BioNeMo foundation models for generative chemistry, protein structure prediction, and understanding drug-target interactions. These models will soon be available on AWS HealthOmics, a purpose-built service that helps healthcare and life sciences organizations store, query, and analyze omics data.
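For teams curious what the SageMaker integration looks like in practice: NIM microservices expose an OpenAI-compatible inference API, so invoking a NIM-backed SageMaker endpoint amounts to posting a standard chat-completions payload. The sketch below is a hypothetical illustration, not code from the announcement; the model ID and endpoint name are placeholders you would replace with your own values.

```python
import json


def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Assemble an OpenAI-style chat-completions request body, the
    format NIM inference microservices accept."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })


# Placeholder model ID for illustration only.
payload = build_chat_payload("meta/llama3-8b-instruct",
                             "Summarize the GTC 2024 AWS announcements.")

# With a NIM endpoint deployed on SageMaker, the payload could be sent
# via boto3's SageMaker runtime client (requires AWS credentials; the
# endpoint name below is a placeholder):
#
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(
#       EndpointName="my-nim-endpoint",
#       ContentType="application/json",
#       Body=payload,
#   )
#   print(response["Body"].read().decode())
```

The OpenAI-compatible request shape is what lets existing client code move to NIM-backed endpoints with minimal changes.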

The partnership between AWS and NVIDIA goes back more than 13 years, over which the companies have delivered a wide range of NVIDIA GPU solutions for customers. With the new NVIDIA Blackwell platform, they aim to deliver a significant step forward in generative AI and GPU computing. By combining AWS's Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and the Nitro System's advanced virtualization and security capabilities, customers can build and run multi-trillion-parameter LLMs faster, at massive scale, and more securely than anywhere else.

"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," said Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
