
Radio Free HPC Looks Back at ISC 2015

In this video, Dan Olds and Rich Brueckner from Radio Free HPC discuss the latest news in High Performance Computing from the ISC 2015 conference in Frankfurt, Germany.

Cray XC40 Coming to Bureau of Meteorology in Australia

Today Cray announced that the Bureau of Meteorology in Australia has awarded the company a contract worth up to $53 million to provide a Cray XC40 supercomputer and a Cray Sonexion 2000 storage system. The deal further strengthens Cray’s leadership position in the global operational weather and climate community, as an increasing number of the world’s leading centers rely on Cray supercomputers to run their complex meteorological and mission-critical models.

High Performance Computing in Defense Intelligence

Technological advancements in hardware and software allow analysts to process larger amounts of data rapidly, freeing time to apply human judgment and experience to intelligence problems. This article examines a couple of the hardware advancements in HPC.

Pleiades Supercomputer Moves Up the Ranks with Haswell

NASA reports that its newly upgraded Pleiades supercomputer ranks number 11 on the July 2015 TOP500 list of the most powerful supercomputers. And while the LINPACK computing power of Pleiades jumped nearly 21 percent, its ranking at number 5 on the new HPCG benchmark list reflects its ability to tackle real-world applications.
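As background on why the two rankings differ: the TOP500 list is based on LINPACK, which solves a large dense linear system and rewards raw floating-point throughput, while HPCG stresses the sparse, memory-bound conjugate-gradient kernels closer to real simulation workloads. A toy Python contrast of the two access patterns (illustrative only, not the actual benchmarks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# LINPACK-style workload: solve a dense system Ax = b.
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned dense matrix
b = rng.standard_normal(n)
x_dense = np.linalg.solve(A, b)

# HPCG-style workload: conjugate gradient on a sparse SPD operator,
# applied matrix-free -- the memory-bound pattern typical of simulation codes.
def cg(matvec, b, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def laplacian_matvec(v):
    # Tridiagonal [-1, 2, -1] stencil, applied without forming the matrix.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

x_sparse = cg(laplacian_matvec, b)
print(np.allclose(laplacian_matvec(x_sparse), b))  # True
```

The dense solve is compute-bound and scales with peak FLOPS; the matrix-free CG loop touches little data per arithmetic operation, so its speed tracks memory bandwidth instead, which is why a machine can move up one list and not the other.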

Argonne Names Distinguished Fellows for 2015

Argonne National Laboratory has named Barry Smith, Charles Macal and Branko Ruscic as its 2015 Distinguished Fellows.

CEA Taps Bull Atos for Next Step Towards Exascale

Atos, the multinational digital services company, has signed a contract with the French Alternative Energies and Atomic Energy Commission (CEA) to supply Tera1000, a 25-petaflop Bull supercomputer intended as a forerunner to exaflop supercomputing by the end of 2020.

New UCX Network Communication Framework for Next-Gen Programming Models

UCX is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for HPC applications. “The path to Exascale, in addition to many other challenges, requires programming models where communications and computations unfold together, collaborating instead of competing for the underlying resources. In such an environment, providing holistic access to the hardware is a major component of any programming model or communication library. With UCX, we have the opportunity to provide not only a vehicle for production quality software, but also a low-level research infrastructure for more flexible and portable support for the Exascale-ready programming models.”

IBM and NVIDIA Launch Centers of Excellence at ORNL and LLNL

Today IBM, along with Nvidia and two U.S. Department of Energy national laboratories, announced a pair of Centers of Excellence for supercomputing – one at Lawrence Livermore National Laboratory and the other at Oak Ridge National Laboratory. The collaborations are in support of IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications in support of DOE missions, targeting the Summit and Sierra supercomputer systems to be delivered to Oak Ridge and Lawrence Livermore in 2017 and to be operational in 2018.

Dynamically Downscaling Climate Models

Figure: Average winter precipitation rate (mm per day) for a 10-year period (1995 to 2004), as simulated by a regional climate model with 12-km spatial resolution (top) and a global climate model with 250-km spatial resolution (bottom). Credit: Jiali Wang, Argonne National Laboratory.

Jim Collins writes that a research team from Argonne National Laboratory and the University of Chicago is using the Mira supercomputer to investigate the effectiveness of dynamically downscaled climate models. “We are now able to submit several simulations at one time, which allows us to run simulations two to four times faster than before.”

Radio Free HPC Looks at Supercomputing Global Flood Maps

In this podcast, the Radio Free HPC team looks at how the KatRisk startup is using GPUs on the Titan supercomputer to calculate global flood maps. “KatRisk develops event-based probabilistic models to quantify portfolio aggregate losses and exceedance probability curves. Their goal is to develop models that fully correlate all sources of flood loss including explicit consideration of tropical cyclone rainfall and storm surge.”
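To make the quoted terms concrete: an event-based catastrophe model simulates many possible years of events, sums the losses in each year, and ranks those annual totals to form an exceedance probability (EP) curve. The Python sketch below is an illustration of that general workflow only, not KatRisk’s methodology; the Poisson event frequency and lognormal loss severity are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 10_000  # number of simulated years (assumption for illustration)

# Each simulated year has a Poisson number of flood events; each event's
# loss is drawn from a lognormal (both distributions are illustrative).
event_counts = rng.poisson(lam=2.0, size=n_years)
annual_losses = np.array([
    rng.lognormal(mean=1.0, sigma=1.0, size=k).sum() for k in event_counts
])

# Aggregate EP curve: for each loss level, the fraction of simulated
# years whose total loss meets or exceeds it.
losses_sorted = np.sort(annual_losses)[::-1]
exceed_prob = np.arange(1, n_years + 1) / n_years

# Loss at the 1-in-100-year return period (1% exceedance probability).
loss_100yr = losses_sorted[int(0.01 * n_years) - 1]
print(f"1-in-100-year aggregate loss: {loss_100yr:.1f}")
```

Real models replace these toy distributions with physically simulated flood events (rainfall, storm surge, river flow), which is the compute-heavy part that motivates running on GPUs.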