Video: How R-Systems Helps Customers Move HPC to the Cloud

In this video from the HPC User Forum in Milwaukee, Brian Kucic from R-Systems describes how the firm enables organizations of all sizes to move their technical computing workloads to the cloud. “R Systems provides High Performance Computer Cluster resources and technical expertise to commercial and institutional research clients through the R Systems brand and the Dell HPC Cloud Services Partnership. In addition to our industry standard solutions, R Systems Engineers assist clients in selecting the components of their optimal cluster configuration.”

Comet Supercomputer Assists in Latest LIGO Discovery

This week’s landmark discovery of gravitational and light waves generated by the collision of two neutron stars eons ago was made possible by signal verification and analysis performed on Comet, an advanced supercomputer based at SDSC in San Diego. “LIGO researchers have so far consumed more than 2 million hours of computational time on Comet through OSG – including about 630,000 hours each to help verify LIGO’s findings in 2015 and the current neutron star collision – using Comet’s Virtual Clusters for rapid, user-friendly analysis of extreme volumes of data, according to Würthwein.”

HPC Connects: Mapping Global Ocean Currents

In this video from the SC17 HPC Connects series, Dimitris Menemenlis from NASA JPL/Caltech describes how supercomputing enables scientists to accurately map global ocean currents. “The ocean is vast and there are still a lot of unknowns. We still can’t represent all the conditions and are pushing the boundaries of current supercomputer power,” said Menemenlis. “This is an exciting time to be an oceanographer who can use satellite observations and numerical simulations to push our understanding of ocean circulation forward.”

MareNostrum Supercomputer to contribute 475 million core hours to European Research

Today the Barcelona Supercomputing Center announced plans to allocate 475 million core hours on its MareNostrum supercomputer to 17 research projects as part of the PRACE initiative. Of all the nations participating in PRACE’s most recent Call for Proposals, Spain is now the leading contributor of compute hours to European research.

HPC I/O for Computational Scientists

Phil Carns from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Darshan is a scalable HPC I/O characterization tool. It captures an accurate but concise picture of application I/O behavior with minimum overhead. Darshan was originally developed on the IBM Blue Gene series of computers deployed at the Argonne Leadership Computing Facility, but it is portable across a wide variety of platforms, including the Cray XE6, Cray XC30, and Linux clusters. Darshan routinely instruments jobs using up to 786,432 compute cores on the Mira system at ALCF.”
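For readers who want to poke at Darshan output themselves, here is a minimal sketch using the PyDarshan package (`pip install darshan`) to open and summarize a log; the log file name is a hypothetical placeholder, and the exact record layout can vary between Darshan releases:

```python
# Minimal PyDarshan sketch: summarize a Darshan characterization log.
# The log path below is a hypothetical placeholder for a log produced
# by a Darshan-instrumented job.
import darshan

report = darshan.DarshanReport("my_app_id12345.darshan", read_all=True)

# Job-level metadata (executable, process count, runtime, ...).
print(report.metadata["job"])

# Which instrumentation modules captured data (POSIX, MPI-IO, STDIO, ...).
print(list(report.modules.keys()))

# Iterate over the per-file POSIX counter records.
for record in report.records["POSIX"]:
    print(record)
```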

How Manufacturing will Leap Forward with Exascale Computing

In this special guest feature, Jeremy Thomas from Lawrence Livermore National Lab writes that exascale computing will be a vital boost to the U.S. manufacturing industry. “This is much bigger than any one company or any one industry. If you consider any industry, exascale is truly going to have a sizeable impact, and if a country like ours is going to be a leader in industrial design, engineering and manufacturing, we need exascale to keep the innovation edge.”

ESnet’s Science DMZ Could Help Transfer and Protect Medical Research Data

The Science DMZ architecture, developed for moving large data sets quickly and securely, could be adapted to meet the needs of the medical research community. “Like other sciences, medical research is generating increasingly large datasets as doctors track health trends, the spread of diseases, genetic causes of illness and the like. Effectively using this data for efforts ranging from stopping the spread of deadly viruses to creating precision medicine treatments for individuals will be greatly accelerated by the secure sharing of the data, while also protecting individual privacy.”
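The case for a dedicated transfer architecture is easy to see with back-of-the-envelope numbers. Here is a minimal sketch (the 1 TB dataset size and the throughput figures are illustrative assumptions, not numbers from the article) of how transfer time scales with sustained network throughput:

```python
# Back-of-the-envelope transfer times for a large research dataset.
# All figures are illustrative assumptions.

def transfer_hours(dataset_bytes: float, throughput_gbps: float) -> float:
    """Hours to move dataset_bytes at a sustained rate in gigabits/s."""
    bits = dataset_bytes * 8
    seconds = bits / (throughput_gbps * 1e9)
    return seconds / 3600

one_tb = 1e12  # 1 terabyte, in bytes
for rate in (0.1, 1.0, 10.0, 100.0):  # sustained Gbps
    print(f"{rate:6.1f} Gbps -> {transfer_hours(one_tb, rate):7.2f} hours")
# ~22 hours at 100 Mbps shrinks to minutes at 100 Gbps -- the gap a
# Science DMZ, which keeps transfers off the general-purpose campus
# network path, is designed to close.
```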

Call for Contributions: PEARC18 in Pittsburgh

The PEARC18 Conference has issued its Call for Contributions. The conference takes place June 22-27 in Pittsburgh. “The Practice & Experience in Advanced Research Computing (PEARC) annual conference series fosters the creation of a dynamic and connected community of advanced research computing professionals who advance leading practices at the frontiers of research, scholarship and teaching, and industry application.”

Jesús Labarta from BSC to receive Ken Kennedy Award

Today ACM and IEEE Computer Society named Jesús Labarta of the Barcelona Supercomputing Center as the recipient of the 2017 ACM-IEEE CS Ken Kennedy Award. Labarta is recognized for his seminal contributions to programming models and performance analysis tools for high performance computing. The award will be presented at SC17.

Video: The AI Initiative at NIST

Michael Garris from NIST gave this talk at the HPC User Forum. “AI must be developed in a trustworthy manner to ensure reliability and safety. NIST cultivates trust in AI technology by developing and deploying standards, tests and metrics that make technology more secure, usable, interoperable and reliable, and by strengthening measurement science. This work is critically relevant to building the public trust of rapidly evolving AI technologies.”