MLCommons Releases MLPerf Inference v1.1 Results 

San Francisco – September 22, 2021 – Today, MLCommons, an open engineering consortium, released new results for MLPerf Inference v1.1, the organization’s machine learning inference performance benchmark suite. MLPerf Inference measures the performance of applying a trained machine learning model to new data for a wide variety of applications and form factors, and optionally includes […]

Taking Altair Structural & CFD Analysis to New Heights with the 3rd Gen AMD EPYC™ Processors

At the recent ISC 2021 virtual event in June, I sat down with Kevin Mayo of AMD and InsideHPC's Doug Black. We discussed how the latest AMD EPYC™ processors are helping accelerate Altair applications for the most demanding engineering workloads. AMD and Altair have been collaborating closely to deliver better performance for resource-intensive FEA and CFD applications. In this sponsored post, we share some of the benchmark results from those efforts.

American Meteorological Society to Present Zelinka with Houghton Award

The Council of the American Meteorological Society (AMS) has selected atmospheric scientist Mark Zelinka of Lawrence Livermore National Laboratory (LLNL) to receive the Henry G. Houghton Award. Zelinka was cited by AMS for “innovative advances in understanding the critical involvement of clouds to achieve a better understanding of climate interactions.” According to AMS, the Henry G. Houghton […]

Using WRF at LLNL to Simulate Nuclear Cloud Rise

For decades, understanding the behavior of a nuclear mushroom cloud was done with careful analysis of observations made during the testing era. Old photos, outdated film and incomplete weather data made precise calculations difficult. Now, with results published in Atmospheric Environment, Lawrence Livermore National Laboratory (LLNL) scientists are improving our understanding of nuclear cloud rise using a […]

MLCommons Launches MLPerf Tiny AI Inference Benchmark

Today, the open engineering consortium MLCommons released a new benchmark, MLPerf Tiny Inference, to measure trained neural network AI inference performance for low-power devices in small form factors. MLPerf Tiny v0.5 is MLCommons's first inference benchmark suite for embedded device machine learning, a growing field in which AI-driven sensor data analytics is performed in real time, close […]

ORNL’s Gina Accawi: Designing Software for Manufacturing Cybersecurity

May 27, 2021 — As a computer engineer at Oak Ridge National Laboratory, Gina Accawi has long been the quiet and steady force behind some of the Department of Energy’s most widely used online tools and applications. She has written the code that industry throughout the United States relies on – from MEASUR, the platform […]

Unlocking Cosmological Secrets at Durham University

Drawing on the power of a supercomputer from Dell Technologies and AMD, Durham University and DiRAC scientists are expanding our understanding of the universe and its origins. In scientific circles, everyone knows that more computing power can lead to bigger discoveries in less time. That is the case at Durham University in the U.K., where researchers are unlocking insights into our universe with powerful high-performance computing clusters from Dell Technologies.

New Hypre Library Approach Brings GPU-Based Algebraic Multigrid to Exascale and HPC Community

First developed in 1998, hypre is a cross-platform, high-performance library that its development team has adapted over the years to support a variety of machine architectures. Hypre provides scalable solvers and preconditioners that can be applied to large sparse linear systems on parallel computers.[i] The team's latest work now gives scientists the ability to efficiently utilize modern GPU-based extreme-scale parallel supercomputers to address many scientific problems.
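The teaser above doesn't include code, but as a rough illustration of what "applying hypre's solvers to a large sparse linear system" looks like, here is a minimal sketch using hypre's IJ matrix interface and the BoomerAMG algebraic multigrid solver, condensed from the patterns in hypre's bundled examples. The 1D Laplacian stencil, problem size, tolerance, and single-rank row partitioning are illustrative choices, not from the article; GPU execution additionally depends on how hypre itself was configured and built.

```c
/* Minimal sketch (illustrative, not from the article): solve a 1D Poisson
 * system with hypre's BoomerAMG solver via the IJ interface. */
#include <mpi.h>
#include "HYPRE.h"
#include "HYPRE_IJ_mv.h"
#include "HYPRE_parcsr_ls.h"

int main(int argc, char *argv[])
{
   MPI_Init(&argc, &argv);
   HYPRE_Init();  /* required in recent hypre versions */

   const HYPRE_BigInt n = 100;               /* illustrative problem size */
   HYPRE_BigInt ilower = 0, iupper = n - 1;  /* single rank owns all rows, for brevity */

   /* Assemble the tridiagonal [-1 2 -1] operator row by row. */
   HYPRE_IJMatrix A;
   HYPRE_IJMatrixCreate(MPI_COMM_WORLD, ilower, iupper, ilower, iupper, &A);
   HYPRE_IJMatrixSetObjectType(A, HYPRE_PARCSR);
   HYPRE_IJMatrixInitialize(A);
   for (HYPRE_BigInt i = ilower; i <= iupper; i++) {
      HYPRE_BigInt cols[3];
      double vals[3];
      HYPRE_Int ncols = 0;
      if (i > 0)     { cols[ncols] = i - 1; vals[ncols++] = -1.0; }
      cols[ncols] = i; vals[ncols++] = 2.0;
      if (i < n - 1) { cols[ncols] = i + 1; vals[ncols++] = -1.0; }
      HYPRE_IJMatrixSetValues(A, 1, &ncols, &i, cols, vals);
   }
   HYPRE_IJMatrixAssemble(A);
   HYPRE_ParCSRMatrix parcsr_A;
   HYPRE_IJMatrixGetObject(A, (void **) &parcsr_A);

   /* Right-hand side b = 1, initial guess x = 0. */
   HYPRE_IJVector b, x;
   HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &b);
   HYPRE_IJVectorSetObjectType(b, HYPRE_PARCSR);
   HYPRE_IJVectorInitialize(b);
   HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &x);
   HYPRE_IJVectorSetObjectType(x, HYPRE_PARCSR);
   HYPRE_IJVectorInitialize(x);
   for (HYPRE_BigInt i = ilower; i <= iupper; i++) {
      double one = 1.0, zero = 0.0;
      HYPRE_IJVectorSetValues(b, 1, &i, &one);
      HYPRE_IJVectorSetValues(x, 1, &i, &zero);
   }
   HYPRE_IJVectorAssemble(b);
   HYPRE_IJVectorAssemble(x);
   HYPRE_ParVector par_b, par_x;
   HYPRE_IJVectorGetObject(b, (void **) &par_b);
   HYPRE_IJVectorGetObject(x, (void **) &par_x);

   /* BoomerAMG used as a standalone solver. */
   HYPRE_Solver solver;
   HYPRE_BoomerAMGCreate(&solver);
   HYPRE_BoomerAMGSetTol(solver, 1e-7);
   HYPRE_BoomerAMGSetMaxIter(solver, 100);
   HYPRE_BoomerAMGSetup(solver, parcsr_A, par_b, par_x);
   HYPRE_BoomerAMGSolve(solver, parcsr_A, par_b, par_x);

   HYPRE_BoomerAMGDestroy(solver);
   HYPRE_IJMatrixDestroy(A);
   HYPRE_IJVectorDestroy(b);
   HYPRE_IJVectorDestroy(x);
   HYPRE_Finalize();
   MPI_Finalize();
   return 0;
}
```

In production codes, BoomerAMG is more often used as a preconditioner inside a Krylov method such as PCG or GMRES rather than standalone, and a real application would partition the rows across MPI ranks instead of assembling everything on one.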

Rice Univ. Researchers Claim 15x AI Model Training Speed-up Using CPUs

Reports are circulating in AI circles that researchers from Rice University claim a breakthrough in AI model training acceleration – without using accelerators. Running AI software on commodity x86 CPUs, the Rice computer science team says neural networks can be trained 15x faster than on platforms that use GPUs. If valid, the new approach would be a double boon for organizations implementing AI strategies: faster model training using less costly microprocessors.

Cerebras Systems: 2.5 Trillion-plus Transistors in 7nm-based 2nd Gen Wafer Scale Engine

Maker of the world’s largest microprocessors, Cerebras Systems, today unveiled what it said is the largest AI chip, the Wafer Scale Engine 2 (WSE-2) — successor to the first WSE introduced in 2019. The 7nm-based WSE-2 exceeds Cerebras’ previous world record with a chip that has 2.6 trillion transistors and 850,000 AI-optimized cores. By comparison, […]