The Hyperion-insideHPC Interviews: Los Alamos’ Gary Grider Argues Efficiency in HPC Is King and Laments Simulation’s ‘Raw Deal’

It might surprise you to know that Gary Grider, HPC Division Leader at Los Alamos National Laboratory, is less interested in FLOPS than efficiency. In this interview, he explains why “FLOPS” hasn’t appeared in Los Alamos RFPs over the last decade. He also talks about his greatest HPC concern: decreasing interest in classical simulation in […]

Job of the Week: HPC Storage Infrastructure Engineer at NERSC

NERSC is seeking an HPC Storage Infrastructure Engineer for its Storage Systems Group. This group is responsible for architecting, deploying, and supporting the high-performance parallel storage systems relied upon by NERSC’s 7,000 scientific users to conduct basic scientific research across a wide range of disciplines. “The HPC Storage Infrastructure Engineer will work closely with approximately eight other storage systems and software engineers in this group to support and optimize hundreds of petabytes of parallel storage that is served to thousands of clients at terabytes per second.”

Perlmutter supercomputer to include more than 6,000 NVIDIA A100 processors

NERSC is among the early adopters of the new NVIDIA A100 Tensor Core GPU processor announced by NVIDIA this week. More than 6,000 of the A100 chips will be included in NERSC’s next-generation Perlmutter system, which is based on an HPE Cray Shasta supercomputer that will be deployed at Lawrence Berkeley National Laboratory later this year. “Nearly half of the workload running at NERSC is poised to take advantage of GPU acceleration, and NERSC, HPE, and NVIDIA have been working together over the last two years to help the scientific community prepare to leverage GPUs for a broad range of research workloads.”

NERSC Finalizes Contract for Perlmutter Supercomputer

NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]

Video: Why Supercomputers Are A Vital Tool In The Fight Against COVID-19

In this video from Forbes, Horst Simon from LBNL describes how supercomputers are being used for coronavirus research. “Computing is stepping up to the fight in other ways too. Some researchers are crowdsourcing computing power to try to better understand the dynamics of the protein, and a dataset of 29,000 research papers has been made available to researchers leveraging artificial intelligence and other approaches to help tackle the virus. IBM has launched a global coding challenge that includes a focus on COVID-19, and Amazon has said it will invest $20 million to help speed up coronavirus testing.”

NERSC Rolls Out New Community File System for Next-Gen HPC

NERSC recently unveiled its new Community File System (CFS), a long-term data storage tier developed in collaboration with IBM that is optimized for capacity and manageability. “In the next few years, the explosive growth in data coming from exascale simulations and next-generation experimental detectors will enable new data-driven science across virtually every domain. At the same time, new nonvolatile storage technologies are entering the market in volume and upending long-held principles used to design the storage hierarchy.”

NERSC Supercomputer to Help Fight Coronavirus

“NERSC is a member of the COVID-19 High Performance Computing Consortium. In support of the Consortium, NERSC has reserved a portion of its Director’s Discretionary Reserve time on Cori, a Cray XC40 supercomputer, to support COVID-19 research efforts. The GPU partition on Cori was installed to help prepare applications for the arrival of Perlmutter, NERSC’s next-generation system that is scheduled to begin arriving later this year and will rely on GPUs for much of its computational power.”

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DOE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”

LBNL Breaks New Ground in Data Center Optimization

Berkeley Lab has been at the forefront of efforts to design, build, and optimize energy-efficient hyperscale data centers. “In the march to exascale computing, there are real questions about the hard limits you run up against in terms of energy consumption and cooling loads,” said Elliott. “NERSC is very interested in optimizing its facilities to be leaders in energy-efficient HPC.”

Supercomputing a Neutron Star Merger

Scientists are getting better at modeling the complex tangle of physics properties at play in one of the most powerful events in the known universe: the merger of two neutron stars. “We’re starting from a set of physical principles, carrying out a calculation that nobody has done at this level before, and then asking, ‘Are we reasonably close to observations or are we missing something important?’” said Rodrigo Fernández, a co-author of the latest study and a researcher at the University of Alberta.