AMD Announces Data Center EPYCs, Releases Instinct Accelerator Details and Software for Generative AI

At a product unveiling event this morning in San Francisco, AMD announced updates to its 4th Gen EPYC “Genoa” 5nm data center CPUs and released additional details on its MI300X GPU accelerator for generative AI. The event included keynote remarks from AMD Chair and CEO Lisa Su, who said AI represents the most significant strategic opportunity for the company, a market AMD expects to grow from $30 billion this year to $150 billion by 2027, a CAGR of 50 percent. Notwithstanding the extensive comments from Su and several of her senior managers this morning about the exploding AI market, they did not mention GPU market dominator NVIDIA, though it was obvious by implication that AMD is mounting a major effort to grab GPU market share.

On the CPU side, AMD introduced 4th Gen AMD EPYC 97X4 processors, codenamed “Bergamo,” with 128 Zen 4c cores per socket. AMD said the chips offer the greatest vCPU density and performance for cloud applications, deliver up to 2.7x better energy efficiency and support up to 3x more containers per server. In her keynote, Su said Bergamo is the company’s first chip designed specifically for cloud applications.

Atos Launches ‘ThinkAI’ for High Performance AI Applications

Paris, June 28, 2021 – Atos today launched ThinkAI, which the company described as its secure, end-to-end, scalable offering that enables organizations to successfully design, develop, and deliver high-performance AI applications. “ThinkAI is for organizations using traditional high-performance computing (HPC) that want to run more accurate and faster simulations thanks to AI applications, and also […]

Why HPC and AI Workloads are Moving to the Cloud

This sponsored post from our friends over at Dell Technologies discusses a study by Hyperion Research that finds approximately 20 percent of HPC workloads are now running in the public cloud. There are many good reasons for this trend.

Running AI and HPC Workloads Together on Existing Infrastructure Enhances Return on System Investments

Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. This sponsored post from Intel explores the ins and outs of running AI and HPC workloads together on existing infrastructure, and how organizations can gain rapid insights and faster time-to-market with advanced architecture technologies.

3 Ways to Unlock the Power of HPC and AI

A growing number of commercial businesses are implementing HPC solutions to derive actionable business insights, run higher-performance applications, and gain a competitive advantage. Complexities abound as HPC becomes more pervasive across industries and markets, especially as companies adopt, scale, and optimize both HPC and Artificial Intelligence (AI) workloads. Bill Mannel, VP & GM of the HPC & AI Solutions Segment at Hewlett Packard Enterprise, walks readers through three strategies to ensure HPC and AI success.

The Convergence of HPC & AI: Why it’s Great for Supercomputing and the Enterprise

By the end of 2019, worldwide AI spending is expected to reach $35 billion, more than doubling by 2022, according to IDC. While AI market projections may be speculative, there’s a general consensus that the investment will be significant and the impact will be transformative. Lenovo explores innovation at the convergence of HPC and AI, including early detection of prostate cancer, mitigating the impact of deforestation, preventing visual impairment and more.