Gordon Bell Prize Finalists to Present Their Work at SC17

SC17 has announced the finalists for the Gordon Bell Prize in High Performance Computing. The $10,000 prize will be presented to the winner at the conference in Denver next month. “The Gordon Bell Prize recognizes the extraordinary progress made each year in the innovative application of parallel computing to challenges in science, engineering, and large-scale data analytics. Prizes may be awarded for peak performance or special achievements in scalability and time-to-solution on important science and engineering problems.”

Searching for Human Brain Memory Molecules with the Piz Daint Supercomputer

Scientists at the University of Basel are using the Piz Daint supercomputer at CSCS to discover interrelationships in the human genome that might simplify the search for “memory molecules” and eventually lead to more effective medical treatment for people with diseases that are accompanied by memory disturbance. “Until now, searching for genes related to memory capacity has been comparable to seeking out the proverbial needle in a haystack.”

Call for Papers: Supercomputing Frontiers Europe 2018

The Supercomputing Frontiers Europe 2018 conference has issued its Call for Papers. The conference takes place March 12 – 15, 2018 in Warsaw, Poland. “Supercomputing Frontiers is an annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important trends and substantial innovations in supercomputing. Organized by ICM UW, Supercomputing Frontiers Europe 2018 will explore visionary trends and innovations in high performance computing.”

Video: How R-Systems Helps Customers Move HPC to the Cloud

In this video from the HPC User Forum in Milwaukee, Brian Kucic from R-Systems describes how the company enables organizations of all sizes to move their technical computing workloads to the Cloud. “R Systems provides High Performance Computer Cluster resources and technical expertise to commercial and institutional research clients through the R Systems brand and the Dell HPC Cloud Services Partnership. In addition to our industry standard solutions, R Systems Engineers assist clients in selecting the components of their optimal cluster configuration.”

Comet Supercomputer Assists in Latest LIGO Discovery

This week’s landmark discovery of gravitational and light waves generated by the collision of two neutron stars eons ago was made possible by signal verification and analysis performed on Comet, an advanced supercomputer based at SDSC in San Diego. “LIGO researchers have so far consumed more than 2 million hours of computational time on Comet through OSG – including about 630,000 hours each to help verify LIGO’s findings in 2015 and the current neutron star collision – using Comet’s Virtual Clusters for rapid, user-friendly analysis of extreme volumes of data,” according to Würthwein.

HPC Connects: Mapping Global Ocean Currents

In this video from the SC17 HPC Connects series, Dimitris Menemenlis from NASA JPL/Caltech describes how supercomputing enables scientists to accurately map global ocean currents. “The ocean is vast and there are still a lot of unknowns. We still can’t represent all the conditions and are pushing the boundaries of current supercomputer power,” said Menemenlis. “This is an exciting time to be an oceanographer who can use satellite observations and numerical simulations to push our understanding of ocean circulation forward.”

MareNostrum Supercomputer to Contribute 475 Million Core Hours to European Research

Today the Barcelona Supercomputing Center announced plans to allocate 475 million core hours on its MareNostrum supercomputer to 17 research projects as part of the PRACE initiative. Of all the nations participating in PRACE’s recent Call for Proposals, Spain is now the leading contributor of compute hours to European research.

HPC I/O for Computational Scientists

Phil Carns from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Darshan is a scalable HPC I/O characterization tool. It captures an accurate but concise picture of application I/O behavior with minimum overhead. Darshan was originally developed on the IBM Blue Gene series of computers deployed at the Argonne Leadership Computing Facility, but it is portable across a wide variety of platforms, including the Cray XE6, Cray XC30, and Linux clusters. Darshan routinely instruments jobs using up to 786,432 compute cores on the Mira system at ALCF.”
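
For context, Darshan gathers its statistics by intercepting an application’s MPI-IO and POSIX calls (typically at link time or via LD_PRELOAD), so codes need no source changes. The C sketch below is a generic MPI-IO job of the kind Darshan instruments transparently; the output path is hypothetical, and the program is our illustration rather than an example from the talk.

/* Minimal MPI-IO job of the sort Darshan characterizes transparently.
 * When linked against (or preloaded with) the Darshan library, each
 * rank's writes to the shared file are summarized in a compact
 * per-job log at MPI_Finalize time, with minimal runtime overhead.
 */
#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank fills a small buffer and writes it at its own offset. */
    char buf[64];
    memset(buf, 'a' + (rank % 26), sizeof(buf));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/scratch/darshan_demo.out", /* hypothetical path */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(buf), buf,
                      sizeof(buf), MPI_CHAR, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}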

How Manufacturing Will Leap Forward with Exascale Computing

In this special guest feature, Jeremy Thomas from Lawrence Livermore National Lab writes that exascale computing will be a vital boost to the U.S. manufacturing industry. “This is much bigger than any one company or any one industry. If you consider any industry, exascale is truly going to have a sizeable impact, and if a country like ours is going to be a leader in industrial design, engineering and manufacturing, we need exascale to keep the innovation edge.”

Podcast: Intel to Ship Neural Network Processor by End of Year

Intel’s Naveen Rao writes that Intel will soon be shipping the world’s first family of processors designed from the ground up for artificial intelligence. As announced today, the new chip will be the company’s first step towards its goal of achieving 100 times greater AI performance by 2020. “The goal of this new architecture is to provide the needed flexibility to support all deep learning primitives while making core hardware components as efficient as possible.”
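
As a generic illustration only (this is not Intel’s hardware or software interface), a “deep learning primitive” is a small, heavily reused computational kernel such as a matrix multiply followed by an activation function. The C sketch below shows a naive fused GEMM + ReLU, the kind of operation an AI-focused processor is built to execute efficiently at scale.

/* Naive fused matrix multiply + ReLU: C = relu(A * B).
 * A textbook sketch of one "deep learning primitive"; dedicated AI
 * processors implement operations like this in specialized, highly
 * parallel datapaths rather than scalar loops.
 */
#include <stdio.h>

#define M 2
#define K 3
#define N 2

static void gemm_relu(const float A[M][K], const float B[K][N], float C[M][N])
{
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            float acc = 0.0f;
            for (int k = 0; k < K; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc > 0.0f ? acc : 0.0f;  /* fused ReLU */
        }
    }
}

int main(void)
{
    const float A[M][K] = {{1, -2, 3}, {0, 1, -1}};
    const float B[K][N] = {{2, 0}, {1, 1}, {0, -1}};
    float C[M][N];

    gemm_relu(A, B, C);
    for (int i = 0; i < M; i++)
        printf("%6.1f %6.1f\n", C[i][0], C[i][1]);
    return 0;
}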