LLNL & Rensselaer Polytechnic to Promote Industrial HPC

Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute will combine decades of expertise to help American industry and businesses expand use of high performance computing under a recently signed memorandum of understanding.

Cray Scales Fluent to 129,000 Compute Cores

Today Cray announced a world record by scaling ANSYS Fluent to 129,000 compute cores. “Less than a year ago, ANSYS announced Fluent had scaled to 36,000 cores with the help of NCSA. While the nearly 4x increase over the previous record is significant, it tells only part of the story. ANSYS has broadened the scope of simulations allowing for applicability to a much broader set of real-world problems and products than any other company offers.”

Australia Connects to US Pacific Research Platform

Today the California Institute for Telecommunications and Information Technology (Calit2) and Australia’s Academic and Research Network (AARNet) announced a partnership to connect Australian researchers to the US Pacific Research Platform (PRP), a next-generation data-sharing network linking research universities and supercomputing centers at unprecedented speeds.

Compute Canada Joins Women in HPC Network

Compute Canada has become the first international partner to join the Women in High Performance Computing (WHPC) network. “Achieving gender balance in advanced research computing is an important goal for Compute Canada,” said Mark Dietrich, President and Chief Executive Officer of Compute Canada. “This is not just an important equality and balance issue. We know achieving gender balance, and diversity in general, improves innovation and research outputs. In order to meet the growing demand for HPC skillsets that address the increasing requirements of key industrial and academic sectors, we must support and grow our skill base in this area.”

Advancing HPC with Collaboration & Co-design

In this special guest feature from Scientific Computing World, Tom Wilkie reports on two US initiatives for future supercomputers, announced at the ISC High Performance conference in Frankfurt in July.

TACC Puts Chameleon Cloud Testbed into Production

Today the Texas Advanced Computing Center (TACC) announced that its new Chameleon testbed is in full production for researchers across the country. Designed to help investigate and develop the promising future of cloud-based science, the NSF-funded Chameleon is a configurable, large-scale environment for testing and demonstrating new concepts.

Intel and Micron Announce 3D XPoint Non-Volatile Memory

Today Intel Corporation and Micron Technology unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionize any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash in 1989.

Univa Joins CNCF Cloud Native Computing Foundation

Today Univa joined Google, IBM, and other world-class companies as a founding member of the Cloud Native Computing Foundation (CNCF). The new CNCF organization will accelerate the development of cloud native applications and services by advancing a technology stack for data center containerization and microservices.

IBM and NVIDIA Launch Centers of Excellence at ORNL and LLNL

Today IBM, along with NVIDIA and two U.S. Department of Energy national laboratories, announced a pair of Centers of Excellence for supercomputing, one at Lawrence Livermore National Laboratory and the other at Oak Ridge National Laboratory. The collaborations support IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications, both to support DOE missions and to prepare for the Summit and Sierra supercomputer systems, to be delivered to Oak Ridge and Lawrence Livermore respectively in 2017 and to be operational in 2018.

Radio Free HPC Looks at Supercomputing Global Flood Maps

In this podcast, the Radio Free HPC team looks at how the KatRisk startup is using GPUs on the Titan supercomputer to calculate global flood maps. “KatRisk develops event-based probabilistic models to quantify portfolio aggregate losses and exceedance probability curves. Their goal is to develop models that fully correlate all sources of flood loss, including explicit consideration of tropical cyclone rainfall and storm surge.”
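
For readers unfamiliar with the terminology, an exceedance probability (EP) curve gives the probability that aggregate losses in a year exceed a given threshold. Below is a minimal, illustrative Python sketch of how such a curve can be estimated by Monte Carlo simulation; the distributions and parameters are hypothetical stand-ins and do not represent KatRisk’s actual models.

```python
import numpy as np

# Toy Monte Carlo estimate of an aggregate exceedance probability (EP) curve.
# All distributions and parameters below are hypothetical; real catastrophe
# models use physically based event sets with spatially correlated losses.

rng = np.random.default_rng(42)
n_years = 100_000

# Number of loss-causing flood events per simulated year.
events_per_year = rng.poisson(lam=2.0, size=n_years)

# Aggregate annual loss: sum of lognormally distributed per-event losses.
annual_loss = np.array([
    rng.lognormal(mean=12.0, sigma=1.5, size=n).sum()
    for n in events_per_year
])

# Empirical EP curve: pair each loss, sorted in descending order, with the
# empirical probability P(annual loss >= x).
losses_desc = np.sort(annual_loss)[::-1]
exceedance_prob = np.arange(1, n_years + 1) / n_years

# Example readout: the 1-in-100-year aggregate loss (99th percentile).
print(f"1-in-100-year loss: {np.quantile(annual_loss, 0.99):,.0f}")
```

Pairing the descending-sorted losses with rank divided by the number of simulated years yields the empirical probability of exceeding each loss level, which is the curve insurers read off at return periods such as 1-in-100 or 1-in-250 years.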