

Comet Supercomputer Assists in Latest LIGO Discovery

This week’s landmark discovery of gravitational waves and light generated by the collision of two neutron stars eons ago was made possible in part by signal verification and analysis performed on Comet, an advanced supercomputer based at the San Diego Supercomputer Center (SDSC). “LIGO researchers have so far consumed more than 2 million hours of computational time on Comet through OSG – including about 630,000 hours each to help verify LIGO’s findings in 2015 and the current neutron star collision – using Comet’s Virtual Clusters for rapid, user-friendly analysis of extreme volumes of data,” according to Würthwein.

MareNostrum Supercomputer to Contribute 475 Million Core Hours to European Research

Today the Barcelona Supercomputing Center announced plans to allocate 475 million core hours on its MareNostrum supercomputer to 17 research projects as part of the PRACE initiative. Of all the nations participating in PRACE’s recent Call for Proposals, Spain is now the leading contributor of compute hours to European research.

Introducing the European EXDCI initiative for HPC

“The objective of the European Extreme Data & Computing Initiative (EXDCI) is to support the development and implementation of a common strategy for the European HPC ecosystem. One of the main goals of the meeting in Bologna was to set up a roadmap for future developments and to engage other parties who would like to participate in HPC research.”

Intel Joins Open Neural Network Exchange

Jason Knight from Intel writes that the company has joined Microsoft, Facebook, and others to participate in the Open Neural Network Exchange (ONNX) project. “By joining the project, we plan to further expand the choices developers have on top of frameworks powered by the Intel Nervana Graph library and deployment through our Deep Learning Deployment Toolkit. Developers should have the freedom to choose the best software and hardware to build their artificial intelligence model and not be locked into one solution based on a framework. Deep learning is better when developers can move models from framework to framework and use the best hardware platform for the job.”

Intel Delivers 17-Qubit Superconducting Chip with Advanced Packaging to QuTech

Today, Intel announced the delivery of a 17-qubit superconducting test chip for quantum computing to QuTech, Intel’s quantum research partner in the Netherlands. The new chip was fabricated by Intel and features a unique design to achieve improved yield and performance. “Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities.”

Video: MareNostrum Supercomputer Powers LIGO Project with 20 Million Processor Hours

Today the Barcelona Supercomputing Center announced it has allocated 20 million processor hours to the LIGO project, whose founders recently won the Nobel Prize in Physics. “The importance of MareNostrum for our work is very easy to explain: without it we could not do the kind of work we do; we would have to change our direction of research.”

HPC4Mfg Program Seeks New Projects

The High Performance Computing for Manufacturing (HPC4Mfg) program in the Energy Department’s Advanced Manufacturing Office (AMO) today announced its intent to issue a fifth solicitation in January 2018. The solicitation will fund projects that allow manufacturers to use high-performance computing resources at the Department of Energy’s national laboratories to tackle major manufacturing challenges.

Kevin Barker to Lead CENATE Proving Ground for HPC Technologies

The CENATE Proving Ground for HPC Technologies at PNNL has named Kevin Barker as its new director. “The goal of CENATE is to evaluate innovative and transformational technologies that will enable future DOE leadership class computing systems to accelerate scientific discovery,” said PNNL’s Laboratory Director Steven Ashby. “We will partner with major computing companies and leading researchers to co-design and test the leading-edge components and systems that will ultimately be used in future supercomputing platforms.”

LANL Steps Up to HPC for Materials Program

“Understanding and predicting material performance under extreme environments is a foundational capability at Los Alamos,” said David Teter, Materials Science and Technology division leader at Los Alamos. “We are well suited to apply our extensive materials capabilities and our high-performance computing resources to industrial challenges in extreme environment materials, as this program will better help U.S. industry compete in a global market.”

PSSC Labs to Power Biosoft Devices for Genetics Research

PSSC Labs will work with BSI to create truly turn-key HPC clusters, servers, and storage solutions. PSSC Labs has already delivered several hundred computing platforms for genomics and bioinformatics research worldwide. Using the PowerWulf HPC Cluster as a base platform, PSSC Labs and BSI can customize individual components to a specific end user’s research goals.