AMD is set to challenge Nvidia in artificial intelligence (AI) with the launch of its Instinct MI300X accelerator, a chip that AMD says can do the work of multiple GPUs when running large AI models.
During an event in San Francisco, AMD CEO Lisa Su presented the Instinct MI300X, referring to it as “the most complex thing we’ve ever built.” The chip, about the size of a drink coaster, contains 153 billion transistors spread across 12 chiplets. It carries up to 192GB of high-bandwidth HBM3 memory and delivers 5.2 TB/s of memory bandwidth, roughly 60% more than Nvidia’s H100.
At the core of the chip is AMD’s next-generation CDNA 3 GPU architecture. However, Su emphasized that the primary selling point is the sheer amount of memory the MI300X offers. She stated, “With MI300X, you can reduce the number of GPUs needed to run the latest large language models, and as model sizes continue growing, this will become even more important.”
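Su’s memory argument comes down to simple arithmetic: a model’s weights must fit in accelerator memory, so a larger per-chip capacity means fewer chips. The back-of-envelope sketch below illustrates this; the 70-billion-parameter model size and the 80GB comparison figure are illustrative assumptions, not AMD’s numbers.

```python
import math

def min_gpus_for_weights(num_params: float, bytes_per_param: int, gpu_mem_gb: int) -> int:
    """Minimum number of GPUs whose combined memory holds the model weights alone.

    Ignores activations, KV cache, and framework overhead, so real
    deployments need headroom beyond this lower bound.
    """
    weights_gb = num_params * bytes_per_param / 1e9
    return math.ceil(weights_gb / gpu_mem_gb)

# A hypothetical 70-billion-parameter model at 16-bit precision
# (2 bytes per parameter) needs ~140 GB just for its weights.
print(min_gpus_for_weights(70e9, 2, 192))  # 1 chip with 192 GB of memory
print(min_gpus_for_weights(70e9, 2, 80))   # 2 chips with 80 GB of memory
```

The same calculation scales with model size: weights-only memory grows linearly with parameter count, which is why Su argues capacity matters more as models keep growing.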
According to AMD, the MI300 series delivers eight times the AI performance and five times the performance per watt of the existing MI250X, currently used in Frontier, the world’s fastest supercomputer. Lawrence Livermore National Laboratory plans to deploy the CPU-GPU variant, the MI300A, which combines Zen 4 CPU cores with CDNA 3 GPUs on the same package, in its El Capitan system, expected to deliver more than two exaFLOPS of performance.
Alongside the MI300X, AMD also unveiled the AMD Instinct platform, a server reference design based on Open Compute Project specifications. This platform enables enterprises and hyperscalers to integrate MI300X GPUs into their existing OCP server racks, facilitating AI training and inference workloads. By doing so, AMD aims to accelerate customers’ time to market, reduce development costs, and simplify deployment.
In addition, AMD discussed its fourth-generation EPYC 97X4 processors, codenamed Bergamo, designed specifically for cloud environments. Bergamo packs 128 Zen 4c cores with simultaneous multithreading, allowing a dual-socket system to present up to 512 virtual CPUs (128 cores × 2 threads per core × 2 sockets). Su said the processors target cloud-native workloads: applications built for cloud computing frameworks and run as microservices. Their design prioritizes throughput and efficiency, making them well suited to cloud-based applications.
Sampling of both the MI300X and Bergamo processors is expected to begin in the third quarter.