Grace is NVIDIA’s first data center CPU, an Arm-based processor that can outperform today’s fastest servers on the most demanding AI and high-performance computing workloads.
The NVIDIA Grace™ CPU is the culmination of more than 10,000 engineering years of work and is designed to meet the computational needs of the world’s most sophisticated applications, such as natural language processing, recommender systems, and AI supercomputing, which analyze large datasets and require both ultra-fast compute speed and massive memory.
It integrates energy-efficient Arm CPU cores with a low-power memory subsystem to deliver high performance with low power consumption.
“Today’s computing architecture is being pushed to its limits by cutting-edge AI and data science, which is handling unimaginable volumes of data,” said Jensen Huang, NVIDIA’s founder and CEO.
“NVIDIA has developed Grace as a CPU especially for giant-scale AI and HPC using licensed Arm IP. Coupled with the GPU and DPU, Grace gives us the third fundamental computing technology, as well as the ability to re-architect the data center to advance AI. NVIDIA has grown to become a three-chip company.”
About NVIDIA Grace
Grace is a highly advanced processor designed to handle workloads including training next-generation NLP models with over a trillion parameters.
A Grace-based system, tightly coupled with NVIDIA GPUs, will deliver 10x faster performance than today’s state-of-the-art NVIDIA DGX™ systems, which run on x86 CPUs.
While existing CPUs are expected to continue serving the vast majority of data centers, Grace, named after Grace Hopper, the American computer-programming pioneer, will serve a niche segment of computing.
The Swiss National Supercomputing Centre (CSCS) and the Los Alamos National Laboratory of the United States Department of Energy are the first to announce plans to build Grace-powered supercomputers in support of national science research.
NVIDIA is introducing Grace as the volume of data and the complexity of AI models grow exponentially. Today’s most advanced AI models have billions of parameters, a figure that is doubling every two and a half months. Training them requires a new CPU that can be tightly coupled with a GPU to eliminate system bottlenecks.
NVIDIA designed Grace to exploit the tremendous versatility of Arm’s data center architecture.
By launching a new server-class CPU, NVIDIA is advancing technological diversity in the AI and HPC communities, where choice is critical to fostering the innovation needed to address the world’s most pressing challenges.
Arm CEO Simon Segars said, “As the world’s most commonly licensed processor architecture, Arm pushes engineering in incredible new ways every day. NVIDIA’s announcement of the Grace data center CPU exemplifies how Arm’s licensing model facilitates significant innovation, one that will help to advance the incredible work of AI researchers and scientists around the world.”
NVIDIA Grace Further Pushes The Limits Of Science And Artificial Intelligence
Grace-powered supercomputers, designed by Hewlett Packard Enterprise, will be operational in 2023 at both CSCS and Los Alamos National Laboratory.
CSCS Director Prof. Thomas Schulthess said, “NVIDIA’s novel Grace CPU helps us to converge AI technologies and classic supercomputing for solving some of the toughest problems in computational science.” “We are ecstatic to have the latest NVIDIA CPU available to our users in Switzerland and around the world for the collection and analysis of large and complex science datasets.”
“This next-generation architecture will shape our institution’s computing strategy with a creative combination of memory bandwidth and capacity,” said Thom Mason, director of the Los Alamos National Laboratory. “Thanks to NVIDIA’s new Grace CPU, we will be able to deliver sophisticated scientific analysis using high-fidelity 3D models and analytics with datasets that are larger than previously possible.”
NVIDIA Promises “Breakthrough Performance” With Grace
Grace’s performance is underpinned by NVIDIA’s fourth-generation NVLink® interconnect technology, which provides a record 900 GB/s connection between Grace and NVIDIA GPUs, enabling 30x higher aggregate bandwidth than today’s leading servers.
Grace will also use a cutting-edge LPDDR5x memory subsystem, which delivers twice the bandwidth and 10x better energy efficiency compared with DDR4 memory.
Furthermore, the new architecture provides unified cache coherence with a single memory address space, combining system memory and HBM GPU memory to simplify programmability.
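The single-address-space programming model Grace extends is already visible in CUDA’s managed (unified) memory API. The sketch below is a generic CUDA illustration of that model, not Grace-specific code: one allocation is accessed by both CPU and GPU through the same pointer, with no explicit copies. It assumes a CUDA-capable system with the `nvcc` toolchain installed.

```
// Minimal sketch of CUDA unified memory: one pointer, visible to CPU and GPU.
// Build (on a CUDA-capable system): nvcc unified_demo.cu -o unified_demo
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel that scales every element in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // A single managed allocation: both the host and the device can
    // dereference this pointer, so no cudaMemcpy calls are needed.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU updates the same memory
    cudaDeviceSynchronize();                         // wait before the CPU reads

    printf("data[0] = %f\n", data[0]);               // CPU reads GPU results directly
    cudaFree(data);
    return 0;
}
```

In a conventional discrete-GPU setup the CUDA driver migrates pages between host and device memory behind this API; the hardware cache coherence described above aims to make that sharing native rather than software-managed.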
Grace will be supported by the NVIDIA HPC software development kit, as well as the full suite of CUDA® and CUDA-X™ libraries, which accelerate more than 2,000 GPU applications, allowing scientists and engineers working on the world’s most pressing problems to make faster discoveries.
NVIDIA Grace – When Will It Be Available?
According to NVIDIA’s latest announcement, Grace will be released in the first quarter of 2023. The exact release date has yet to be announced; we will update you as soon as it is confirmed.