Microsoft Unveils Azure Maia 100 and Cobalt 100 Chips: Custom Silicon for AI and Cloud Workloads


Microsoft has officially unveiled its first two in-house silicon chips, the Azure Maia 100 AI accelerator and the Azure Cobalt 100 CPU, at its Ignite conference. As we reported last month, the new chips represent Microsoft's foray into designing custom processors tailored specifically for its cloud services and AI workloads.

The Maia 100 AI Accelerator

The Azure Maia 100 is Microsoft's inaugural AI accelerator, purpose-built to handle intensive artificial intelligence workloads like training large AI models and running generative AI services.

The Azure Maia 100 AI Accelerator is the first chip designed by Microsoft for large language model training and inference in the Microsoft Cloud. Image courtesy of Microsoft.

Microsoft developed the Maia 100 drawing on lessons from its internal AI services, such as Bing search, the GitHub Copilot coding assistant, and the natural language models created by OpenAI, the AI company Microsoft backs. In fact, OpenAI collaborated closely with Microsoft during the Maia 100's design process, providing feedback on optimizing the chip's architecture for large language models.

Some key attributes of the Maia 100 include:

  • 105 billion transistors optimized for speed and efficiency in AI computations.
  • Support for low precision math formats to accelerate training and inference.
  • A liquid-cooling "sidekick" system that enables higher server density for energy-efficient data center deployment.
  • Designed to slot seamlessly into Microsoft's existing data center infrastructure.
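The low-precision bullet above is worth a quick illustration. Maia 100's exact data formats have not been disclosed, so the NumPy sketch below uses standard float16 merely as a stand-in to show the core trade-off behind low-precision math: each value takes half the memory of float32 (cutting bandwidth and boosting throughput) at the cost of fewer significant digits.

```python
import numpy as np

# Illustrative only: Maia 100's supported formats are not public.
# float16 here stands in for whatever sub-32-bit types the chip uses.
x = np.float32(0.1234567)   # ~7 significant decimal digits
lo = np.float16(x)          # rounds to ~3 significant decimal digits

print(f"float32 value: {x:.7f}")
print(f"float16 value: {float(lo):.7f}")        # precision is lost
print(f"bytes/value:   {np.float32().nbytes} vs {np.float16().nbytes}")
```

Halving the bytes per value doubles how many numbers fit in the same memory bandwidth and on-chip SRAM, which is why AI accelerators broadly favor low-precision formats for training and inference despite the rounding error.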

The Maia 100 will power Azure's offerings for generative AI and other cutting-edge AI workloads. Microsoft indicated the chip is already being tested with models like GPT-3.5 Turbo that underpin services such as ChatGPT.

The Azure Cobalt CPU

In addition to its new AI accelerator, Microsoft revealed the Azure Cobalt 100 CPU, its first general-purpose Arm-based processor for cloud computing.

Microsoft CEO Satya Nadella holds up the Cobalt 100 onstage at Microsoft Ignite 2023.

The 128-core Cobalt 100 is tailored to run a wide range of cloud workloads while delivering gains in performance and power efficiency. Microsoft claims it offers 40% faster throughput than the Arm chips already deployed in Azure data centers.

Some highlights of the Cobalt CPU:

  • A customized Arm Neoverse core design optimized for the cloud.
  • Granular control over performance and power consumption per core.
  • Designed for cloud native next-gen applications.
  • Improved energy efficiency to lower data center carbon footprint.

Microsoft is currently testing the Cobalt 100 with key Azure services like Microsoft Teams and Azure SQL Database, and expects to make Cobalt-powered virtual machines available to customers starting in 2024.

A system-level tester at a Microsoft lab in Redmond, Washington, mimics conditions that a chip will experience inside a Microsoft datacenter. The machine rigorously assesses each chip under real world conditions to ensure it meets performance and reliability standards. Photo by John Brecher for Microsoft.

With its new homegrown chips, Microsoft aims to optimize performance by custom-designing silicon tailored for its software and services. The company can co-engineer each layer of its cloud stack in unison, from the hardware up through the software, infrastructure, and cooling systems.

This vertical integration allows Microsoft's cloud to run more efficiently while giving customers more infrastructure choice. The company will continue complementing its own silicon with offerings from partners like AMD and NVIDIA. However, by baking specialization into its chips down to the transistor level, Microsoft hopes to take optimization to the next level.

The Azure Maia and Cobalt unveilings represent only the opening salvo as Microsoft leans into engineering customized silicon. Second-generation versions of both chips are already in the works as the company aims to stay on the cutting edge of cloud, AI, and chip innovation.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
