Friedrich-Alexander-Universität: HPC Boosted with xiRAID and Lustre

Sept. 24, 2024 — Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), a research university in Germany, announced that it has enhanced its high-performance computing capabilities by implementing xiRAID and the Lustre parallel file system. The upgrade, facilitated by Xinnor and MEGWARE, has delivered performance gains for the GPU cluster “Alex,” a research resource for artificial intelligence, machine learning, atomistic simulations […]

Eviden Announces 2 HPC and Quantum Pacts

Eviden, the advanced computing unit of Atos, announced two partnerships this morning: with HPCNow!, a Barcelona-based HPC consulting firm, and with Alice & Bob, a quantum computing company in Paris. Under the Alice & Bob agreement, Eviden will make the company’s cat qubit technology accessible on Qaptiva, Eviden’s quantum application development platform. Eviden said […]

@HPCpodcast: HPC Storage Rock Star Gary Grider Talks How We Got Here and Where We’re Going

http://orionx.net/wp-content/uploads/2022/04/021@HPCpodcas_Storage-with-Gary-Grider_20220420.mp3

Shahin and I are in violent agreement: if you’re interested in high-performance storage, if the major milestones in the development of HPC storage technology over the last 30-plus years interest you, or if you want a peek at future leaps in HPC storage technology, then this is an @HPCpodcast episode for you. Our special […]

HPC: Stop Scaling the Hard Way

…today’s situation is clear: HPC is struggling with reliability at scale. More than a decade ago, Google proved that commodity hardware controlled by software-defined systems is both cheaper and more effective for hyperscale processing, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps that is due to prevailing industry momentum, or to the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.
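
To make the software-defined point concrete: the core resiliency primitive can live in a few lines of ordinary code running on commodity hardware, with no RAID controller involved. Here is a minimal, hypothetical sketch of RAID-5-style XOR parity in Python (illustrative only, not how xiRAID or any particular product implements it):

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together; the basis of RAID-5-style parity."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data stripes on three commodity drives, plus a software-computed parity stripe.
data = [b"stripe-A", b"stripe-B", b"stripe-C"]
parity = xor_blocks(data)

# Simulate losing drive 1: its stripe is recoverable from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Real software-defined stacks layer erasure coding, failure detection, and rebuild scheduling on top, but the shift is the same: the resiliency logic moves out of controller firmware and into software.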

Cerebras 1.2 Trillion-Transistor Chip Integrated with LLNL’s Lassen System for AI Research

Lawrence Livermore National Laboratory (LLNL) and AI company Cerebras Systems today announced the integration of the 1.2-trillion-transistor Cerebras Wafer Scale Engine (WSE) chip into the National Nuclear Security Administration’s (NNSA) 23-petaflop Lassen supercomputer. The pairing of Lassen’s simulation capability with Cerebras’ machine learning compute system, along with the CS-1 accelerator system that houses the chip, […]

Argonne Claims Largest Engine Flow Simulation Using Theta Supercomputer

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have conducted what they claim is the largest simulation of flow inside an internal combustion engine. Insights gained from the simulation – run on 51,328 cores of Argonne’s Theta supercomputer – could help auto manufacturers design greener engines. A blog post on the […]

2020 OpenFabrics Alliance Workshop – Video Gallery

Welcome to the 2020 OpenFabrics Workshop video gallery. The OpenFabrics Alliance (OFA) is focused on accelerating the development of high-performance fabrics. The annual OFA Workshop, held in a virtual format this year, is a premier means of fostering collaboration among those who develop fabrics, deploy fabrics, and create applications that rely on fabrics. It is the […]

SDSC Expanse Supercomputer from Dell Technologies to Serve 50,000 Users

In this special guest feature, Janet Morss at Dell Technologies writes that the company will soon deploy a new flagship supercomputer at SDSC. “Expanse will deliver the power of 728 dual-socket Dell EMC PowerEdge C6525 servers with 2nd Gen AMD EPYC processors connected with Mellanox HDR InfiniBand. The system will have 93,000 compute cores and is projected to have a peak speed of 5 petaflops. That will almost double the performance of SDSC’s current Comet supercomputer, also from Dell Technologies.”
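
The announcement quotes only totals, but they pin down the processor configuration. Back-solving (our inference, not stated in the piece) points to 64-core EPYC parts:

```python
# Inferring the Expanse CPU configuration from the quoted totals.
servers = 728
sockets_per_server = 2
quoted_cores = 93_000                                 # rounded figure from the announcement

print(quoted_cores / (servers * sockets_per_server))  # ~63.9 -> 64-core EPYC parts
print(servers * sockets_per_server * 64)              # 93184, consistent with ~93,000
```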

UKRI Awards ARCHER2 Supercomputer Services Contract

UKRI has awarded contracts to run elements of the next national supercomputer, ARCHER2, which will represent a significant step forward in capability for the UK’s science community. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh. “ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64-core CPUs at 2.2 GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.”
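
The quoted figures hold together under quick arithmetic. In the sketch below, the per-core FLOP rate is our assumption (two 256-bit FMA units per Zen2 core, i.e. 16 double-precision FLOPs per cycle), not something stated in the announcement:

```python
# Back-of-envelope check of the quoted ARCHER2 figures.
nodes = 5848
sockets_per_node = 2
cores_per_cpu = 64          # AMD EPYC Zen2 (Rome)
clock_ghz = 2.2

total_cores = nodes * sockets_per_node * cores_per_cpu
print(total_cores)          # 748544 -- matches the quoted total

# Assumption: 16 double-precision FLOPs/cycle/core (2 x 256-bit FMA on Zen2).
flops_per_cycle = 16
peak_pflops = total_cores * clock_ghz * 1e9 * flops_per_cycle / 1e15
print(round(peak_pflops, 1))  # ~26.3 PFLOP/s, in the ballpark of the 28 PFLOP/s estimate
```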

Job of the Week: Network and Data Specialist at William & Mary

“William & Mary in Virginia seeks qualified applicants for the position of Research Computing (RC) Network and Data Specialist, responsible for day-to-day operations, maintenance, and hardware support for all networking and file systems serving RC systems. Systems include Linux-based clusters, departmental servers, and workstations under the control of RC. Network types include Ethernet, InfiniBand, and Fibre Channel. File systems utilize NFS, SMB/CIFS, and Lustre.”