AMD Targets HPC-AI for 30x CPU-GPU Energy Efficiency Boost by 2025

AMD (NASDAQ: AMD) today announced a goal to deliver a 30x increase in energy efficiency for AMD EPYC CPUs and AMD Instinct GPU accelerators used in artificial intelligence (AI) training and high performance computing (HPC) applications by 2025.1 Achieving this will require AMD to increase the energy efficiency of a compute node at a rate more than 2.5x faster than the aggregate industry-wide improvement made during the last five years2, according to the company.
“Looking at current and projected computing demands, the AMD team decided that it would be most meaningful to focus on the component that constitutes the fastest-growing segments in datacenter consumption — accelerated computing…,” wrote Sam Naffziger, AMD SVP, corporate fellow and product technology architect, in a blog today. “To achieve this goal, AMD engineers will prioritize driving major efficiency gains in accelerated compute nodes through architecture, silicon design, software and packaging innovations while publicly benchmarking our progress annually.”
AMD said increased energy efficiency for accelerated computing applications is part of the company’s new environmental, social and governance (ESG) goals across operations, supply chain and products. In addition to compute node performance/watt measurements1, AMD uses segment-specific data center power utilization effectiveness (PUE) with equipment utilization taken into account.3 The energy consumption baseline assumes the same industry energy-per-operation improvement rates as in 2015-2020, extrapolated to 2025. The measure of energy-per-operation improvement in each segment from 2020-2025 is weighted by the projected worldwide volumes4 multiplied by the Typical Energy Consumption (TEC) of each computing segment, yielding a metric of actual worldwide energy-usage improvement.
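The weighted metric described above can be sketched in a few lines of Python. This is a minimal illustration of the volume-times-TEC weighting, not AMD's actual methodology or data; the segment names, improvement factors, volumes and TEC values below are made-up placeholders.

```python
# Hypothetical inputs: per-segment energy-per-operation improvement
# factors, projected worldwide unit volumes, and Typical Energy
# Consumption (TEC) per unit. All numbers are illustrative only.
segments = {
    # name: (improvement_factor, projected_volume_units, tec_kwh_per_unit)
    "ai_training": (30.0, 100_000, 12_000.0),
    "hpc":         (20.0,  50_000, 15_000.0),
}

def weighted_energy_improvement(segments):
    """Average the per-segment improvement factors, weighting each
    segment by (projected volume x TEC), i.e. by its share of total
    energy consumption rather than by unit count alone."""
    total_weight = sum(vol * tec for _, vol, tec in segments.values())
    return sum(imp * vol * tec
               for imp, vol, tec in segments.values()) / total_weight

print(round(weighted_energy_improvement(segments), 2))
```

Because the weighting tracks energy consumed rather than units shipped, a segment with high per-unit energy use pulls the aggregate figure toward its own improvement rate.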

Accelerated compute nodes are the most powerful and advanced computing systems in the world used for scientific research and large-scale supercomputer simulations. They provide the computing capability used by scientists to achieve breakthroughs across many fields including material sciences, climate predictions, genomics, drug discovery and alternative energy. Accelerated nodes are also integral for training AI neural networks that are currently used for activities including speech recognition, language translation and expert recommendation systems, with similar promising uses over the coming decade. The 30x goal would save billions of kilowatt hours of electricity in 2025, reducing the power required for these systems to complete a single calculation by 97 percent over five years.
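The 97 percent figure follows directly from the 30x goal: if the same calculation needs only 1/30 of the energy, the reduction is 1 − 1/30 ≈ 96.7 percent, which the announcement rounds to 97. A one-line check:

```python
# A 30x efficiency gain means each calculation uses 1/30 of the
# energy, i.e. roughly a 97% reduction.
reduction = 1 - 1 / 30
print(f"{reduction:.1%}")  # 96.7%
```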

“Achieving gains in processor energy efficiency is a long-term design priority for AMD and we are now setting a new goal for modern compute nodes using our high-performance CPUs and accelerators when applied to AI training and high-performance computing deployments,” said Mark Papermaster, executive vice president and CTO, AMD. “Focused on these very important segments and the value proposition for leading companies to enhance their environmental stewardship, AMD’s 30x goal outpaces industry energy efficiency performance in these areas by 150 percent compared to the previous five-year time period.”

“With computing becoming ubiquitous from edge to core to cloud, AMD has taken a bold position on the energy efficiency of its processors, this time for the accelerated compute for AI and High Performance Computing applications,” said Addison Snell, CEO of Intersect360 Research. “Future gains are more difficult now as the historical advantages that come with Moore’s Law have greatly diminished. A 30-times improvement in energy efficiency in five years will be an impressive technical achievement that will demonstrate the strength of AMD technology and their emphasis on environmental sustainability.”
