Search Results for: comparison

SK hynix to Supply HBM3 DRAM to NVIDIA

Seoul, June 9, 2022 – SK hynix announced that it began mass production of HBM3, a high-performance memory that vertically interconnects multiple DRAM chips “and dramatically increases data processing speed in comparison to traditional DRAM products,” the company said. HBM3 DRAM is the 4th-generation HBM product, succeeding HBM (1st generation), HBM2 (2nd generation) and HBM2E […]

More ‘EPYC’ Options for HPC

[SPONSORED CONTENT] High performance computing (HPC) often sets trends for the data center, driving innovation, adding new functionality, and enabling simulations to deliver more accuracy, finer details, and insights. AMD recently released four new AMD EPYC™ 7003 Series processors with AMD 3D V-Cache™ technology. Socket compatible with existing EPYC 7003 processors, the AMD 3D V-Cache […]

ExaIO: Access and Manage Storage of Data Efficiently and at Scale on Exascale Systems

As the word exascale implies, the forthcoming generation of exascale supercomputer systems will deliver 10^18 flop/s of scalable computing capability. All that computing capability will be for naught if the storage hardware and I/O software stack cannot meet the storage needs of applications running at scale—leaving applications either to drown in data when attempting to write to storage or to starve while waiting to read data from storage. Suren Byna, PI of the ExaIO project in the Exascale Computing Project (ECP) and computer staff scientist at Lawrence Berkeley National Laboratory, highlights the need to prepare for the I/O demands of exascale supercomputers by noting that storage is typically the last subsystem available for testing on these systems.
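
To put 10^18 flop/s in perspective, a back-of-the-envelope calculation shows how quickly exascale compute can overwhelm storage. The bytes-per-flop ratio in this sketch is a purely illustrative assumption, not a figure from the ExaIO project:

    # Illustrative arithmetic only; the bytes-per-flop ratio is a
    # hypothetical assumption, not an ExaIO project figure.
    peak_flops = 1e18        # 10^18 flop/s, the exascale threshold
    bytes_per_flop = 1e-5    # suppose an app writes 1 byte per 100,000 flops
    required_bw = peak_flops * bytes_per_flop  # bytes/s storage must absorb
    print(f"Implied write bandwidth: {required_bw / 1e12:.0f} TB/s")  # 10 TB/s

Even that tiny ratio implies roughly 10 TB/s of sustained write bandwidth, at the edge of what today’s largest parallel file systems deliver, which is exactly the drown-or-starve tension the project is addressing.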

PSC’s Neocortex HPC Upgrades to Cerebras CS-2 AI Systems

The Neocortex high-performance AI computer at the Pittsburgh Supercomputing Center (PSC) has been upgraded with two new Cerebras CS-2 systems powered by the second-generation wafer-scale engine (WSE-2) processor. PSC said the WSE-2 doubles the system’s cores and on-chip memory and offers a new execution mode designed for extreme-scale deep-learning tasks, including larger model […]

ORNL: Updated Exascale Earth Simulation Model Delivers 2X Speed

Oak Ridge National Laboratory announced today that a new version of the Energy Exascale Earth System Model, or E3SM, is two times faster than an earlier version released in 2018. Earth system models have weather-scale resolution and use advanced computers to simulate aspects of Earth’s variability and anticipate decadal changes that will critically impact the […]

IBM and Samsung Unveil ‘Semiconductor Breakthrough That Defies Conventional Design’

Today, IBM and Samsung Electronics jointly announced what they said is a breakthrough in semiconductor design utilizing a new transistor architecture that allows more transistors to be packed in an IC chip. The key: the transistors stand up rather than lie down, thus taking up less space and offering “a pathway to the continuation of […]

MLCommons Releases MLPerf Training v1.1 AI Benchmarks

San Francisco — Dec. 1, 2021 – Today, MLCommons, the open engineering consortium, released new results for MLPerf Training v1.1, the organization’s machine learning training performance benchmark suite. MLPerf Training measures the time it takes to train machine learning models to a standard quality target in a variety of tasks including image classification, object detection, […]
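
For readers new to the suite, the core metric is time-to-train: the wall-clock time until a model first reaches a fixed quality target. Below is a minimal Python sketch of that idea with a toy stand-in for a real training loop; it is not the actual MLPerf harness:

    import random
    import time

    def time_to_target(train_step, evaluate, target, max_epochs=100):
        """Wall-clock seconds until evaluate() first meets `target`.

        Sketch of the MLPerf-style time-to-train metric; callers
        supply their own training and evaluation callables.
        """
        start = time.perf_counter()
        for _ in range(max_epochs):
            train_step()
            if evaluate() >= target:
                return time.perf_counter() - start
        return None  # target never reached; the run would not score

    # Toy stand-in: "accuracy" climbs a little with every step.
    state = {"acc": 0.0}
    elapsed = time_to_target(
        train_step=lambda: state.update(acc=state["acc"] + random.uniform(0.05, 0.15)),
        evaluate=lambda: state["acc"],
        target=0.75,
    )
    print(f"time to target: {elapsed:.4f} s" if elapsed is not None
          else "target not reached")

Because the clock stops only when the quality bar is met, faster-but-less-accurate training tricks do not help a submission, which is the point of the benchmark’s design.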

Can HPE GreenLake for HPC Deliver a Simpler User Experience than Public Cloud?

[Sponsored Post] When looking for a simple HPC user experience, many of us would naturally think of public cloud. But on-premises solutions like HPE GreenLake for HPC, powered by AMD EPYC processors, have picked up the advantages of public cloud without compromising performance. HPE GreenLake for HPC is designed to help you get the benefits of HPC without the deployment challenges. It’s a consumption-based solution that is fully managed and operated for you, just like public cloud.

MLPerf Releases Results for HPC v1.0 ML Training Benchmark

Today, MLCommons released new results for MLPerf HPC v1.0, the organization’s machine learning training performance benchmark suite for high-performance computing (HPC). To view the results, visit https://mlcommons.org/en/training-hpc-10/. NVIDIA announced that NVIDIA-powered systems won four of five MLPerf HPC 1.0 tests, which measure the training of AI models on three workloads typical of HPC centers: CosmoFlow estimates details […]

Exascale Hardware Evaluation: Workflow Analysis for Supercomputer Procurements

It is well known in the high-performance computing (HPC) community that many (perhaps most) HPC workloads exhibit dynamic performance envelopes that can stress the memory, compute, network, and storage capabilities of modern supercomputers. Optimizing these workloads to run efficiently on existing hardware is challenging, but quantifying their performance envelopes well enough to extrapolate performance predictions for new system architectures is even more challenging, albeit essential. This predictive analysis helps each data center’s supercomputer procurement team identify the machines and system architectures that will deliver the most performance for its production workloads. However, once a supercomputer is installed, configured, made available to users, and benchmarked, it is too late to consider fundamental architectural changes.
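
One common way to ground such extrapolations is a roofline-style model: characterize each workload by its arithmetic intensity (flops per byte moved) and bound its attainable performance by a candidate machine’s peak compute and peak memory bandwidth. The Python sketch below uses hypothetical machine parameters and workloads purely for illustration; it is not the analysis method of any particular procurement:

    def roofline_gflops(intensity, peak_gflops, peak_bw_gbs):
        """Attainable GFLOP/s for a kernel with the given arithmetic
        intensity (flops/byte), per the classic roofline model."""
        return min(peak_gflops, intensity * peak_bw_gbs)

    # Hypothetical per-node machine parameters, for illustration only.
    machines = {
        "current":   {"peak_gflops": 40_000,  "peak_bw_gbs": 1_600},
        "candidate": {"peak_gflops": 120_000, "peak_bw_gbs": 3_200},
    }
    workloads = {"stencil": 0.8, "dense_solver": 40.0}  # flops/byte

    for name, intensity in workloads.items():
        old = roofline_gflops(intensity, **machines["current"])
        new = roofline_gflops(intensity, **machines["candidate"])
        print(f"{name}: {new / old:.1f}x projected speedup")

In this made-up example the bandwidth-bound stencil projects a 2.0x speedup while the compute-bound solver projects 3.0x on the same candidate machine, which is why a single benchmark number cannot stand in for a site’s full workload mix.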