Intel Unveils Advanced Xeon Processors for AI at Hot Chips 2024
At the Hot Chips 2024 conference held at Stanford University, Intel presented its latest processors and an innovative interconnect technology. The highlights were the new Xeon 6 SoC and a cutting-edge optical interconnect chiplet, the latter crucial for efficient data transfer in AI applications.
Although the Xeon 6 SoC, code-named Granite Rapids-D, had been previously introduced, Intel revealed additional details during the event. The processors are expected to reach the market in the first half of 2025.
Granite Rapids-D is engineered to scale across a broad range of deployments, from simpler edge devices to more complex edge nodes, all within a unified system architecture that integrates AI acceleration. The Xeon 6 SoC combines Intel's compute chiplet from the Xeon 6 series with an I/O chiplet tailored for edge applications, delivering significant gains in performance, energy efficiency, and transistor density over its predecessors.
Built with insights from over 90,000 edge installations, Granite Rapids-D offers up to 32 lanes of PCI Express 5.0, 16 lanes of Compute Express Link (CXL) 2.0, and dual 100G Ethernet connectivity.
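As a rough back-of-envelope illustration (not an Intel figure), the raw per-direction bandwidth implied by those I/O counts can be estimated from published PCIe 5.0 and Ethernet line rates. The Python sketch below assumes 32 GT/s per PCIe 5.0 lane with 128b/130b encoding and treats the CXL 2.0 lanes as running over the same PCIe 5.0 physical layer.

```python
# Back-of-envelope estimate of Xeon 6 SoC I/O bandwidth (illustrative only;
# figures are derived from public PCIe/Ethernet specs, not Intel data).

PCIE5_GT_PER_LANE = 32.0          # PCIe 5.0 raw rate: 32 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding

def pcie5_gbytes_per_s(lanes: int) -> float:
    """Approximate usable one-direction bandwidth in GB/s for PCIe 5.0 lanes."""
    gbits = lanes * PCIE5_GT_PER_LANE * ENCODING_EFFICIENCY
    return gbits / 8

pcie = pcie5_gbytes_per_s(32)      # 32 lanes of PCIe 5.0
cxl = pcie5_gbytes_per_s(16)       # CXL 2.0 reuses the PCIe 5.0 PHY
eth = 2 * 100 / 8                  # dual 100G Ethernet, in GB/s

print(f"PCIe 5.0 x32 : ~{pcie:.0f} GB/s per direction")
print(f"CXL 2.0  x16 : ~{cxl:.0f} GB/s per direction")
print(f"2 x 100GbE   : ~{eth:.0f} GB/s per direction")
```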
Enhancements specific to edge computing include broader temperature tolerances and industrial-grade reliability. The processors also feature advanced media acceleration to speed up video transcoding and analytics for live over-the-top (OTT), video-on-demand (VOD), and broadcast media, along with Advanced Vector Extensions (AVX) and Advanced Matrix Extensions (AMX) to strengthen inferencing capabilities.
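For readers who want to check whether a given system exposes these instruction-set extensions, a minimal Linux-only sketch is shown below. It simply scans the CPU feature flags that recent kernels report in /proc/cpuinfo (avx512f, amx_tile, amx_bf16, amx_int8); it is not an Intel-provided tool, and flag names may vary by kernel version.

```python
# Quick check for AVX-512 / AMX support on Linux (illustrative sketch only).
# Flag names follow what recent Linux kernels report in /proc/cpuinfo.

def cpu_flags() -> set:
    """Return the set of CPU feature flags reported by /proc/cpuinfo."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

flags = cpu_flags()
for feature in ("avx512f", "amx_tile", "amx_bf16", "amx_int8"):
    status = "present" if feature in flags else "absent"
    print(f"{feature:>9}: {status}")
```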
The other highlight was the high-speed optical interconnect. Intel's Integrated Photonics Solutions Group showcased its latest optical compute interconnect (OCI) chiplet, co-packaged with an Intel CPU and demonstrated live at the event. The OCI chiplet handles 64 channels of 32 Gbps data transmission in each direction over fiber links of up to 100 meters.
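Taken at face value, those channel counts imply the chiplet's aggregate throughput; the short sketch below simply multiplies the figures quoted above (64 channels at 32 Gbps per direction) and is an illustration, not an Intel specification.

```python
# Aggregate throughput implied by the OCI chiplet figures quoted above
# (64 fiber channels x 32 Gbps, counted in both directions). Illustrative only.

channels = 64
gbps_per_channel = 32

per_direction_tbps = channels * gbps_per_channel / 1000   # Tbps, one direction
bidirectional_tbps = 2 * per_direction_tbps                # both directions

print(f"Per direction : {per_direction_tbps:.3f} Tbps")    # ~2.048 Tbps
print(f"Bidirectional : {bidirectional_tbps:.3f} Tbps")    # ~4.096 Tbps
```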
This development is set to play a significant role in scaling CPU/GPU clusters and supporting new computing architectures. These include coherent memory expansion and resource disaggregation, which are vital for the evolving AI infrastructure in data centers and high-performance computing (HPC).
In addition to the Xeon processors, Intel introduced a client product under the code name Lunar Lake, aimed at the next generation of AI-powered PCs. Lunar Lake integrates a mix of high-performance cores (P-cores) and efficient cores (E-cores) along with a new neural processing unit, boasting up to four times the generative AI capability compared to the previous generation.
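On AI PCs that ship with such an NPU, one common way to confirm that applications can see the unit is to enumerate devices through an inference runtime such as Intel's OpenVINO. The sketch below uses OpenVINO's Python API as an assumed example; whether an "NPU" entry appears depends on the driver and runtime versions installed, which is not something stated in the announcement.

```python
# List the compute devices visible to the OpenVINO runtime (illustrative sketch).
# Requires `pip install openvino`; an "NPU" entry appears only when the NPU
# driver and a sufficiently recent OpenVINO release are installed.

import openvino as ov

core = ov.Core()
for device in core.available_devices:
    name = core.get_property(device, "FULL_DEVICE_NAME")
    print(f"{device}: {name}")
```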
Lunar Lake also includes updated Xe2 graphics processing unit cores, delivering a 1.5x improvement in gaming and graphics performance over its predecessor.