Video: Evolution of MATLAB

Cleve Moler from MathWorks gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: Data analysis, exploration, and visualization.”

Gordon Bell Prize Finalists to Present Their Work at SC17

SC17 has announced the finalists for the Gordon Bell Prize in High Performance Computing. The $10,000 prize will be presented to the winner at the conference in Denver next month. “The Gordon Bell Prize recognizes the extraordinary progress made each year in the innovative application of parallel computing to challenges in science, engineering, and large-scale data analytics. Prizes may be awarded for peak performance or special achievements in scalability and time-to-solution on important science and engineering problems.”

HPC in Agriculture: NCSA and Syngenta’s Dynamic Partnership

In this video, Jim Mellon from Syngenta describes how the company’s partnership with NCSA is helping it answer the agricultural challenges of the future. “Together, we’re solving some of the toughest issues in agriculture today, like how to feed our rapidly growing population knowing that the amount of land we have for growing crops is finite. NCSA Industry provides the HPC resources that Syngenta’s scientists need to solve these issues, as well as an industry focus on security, performance, and availability, with the consultancy to better understand how to maximize these resources.”

Searching for Human Brain Memory Molecules with the Piz Daint Supercomputer

Scientists at the University of Basel are using the Piz Daint supercomputer at CSCS to discover interrelationships in the human genome that might simplify the search for “memory molecules” and eventually lead to more effective medical treatment for people with diseases that are accompanied by memory disturbance. “Until now, searching for genes related to memory capacity has been comparable to seeking out the proverbial needle in a haystack.”

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came online in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”

Video: How R-Systems Helps Customers Move HPC to the Cloud

In this video from the HPC User Forum in Milwaukee, Brian Kucic from R-Systems describes how the company enables organizations of all sizes to move their technical computing workloads to the cloud. “R Systems provides High Performance Computer Cluster resources and technical expertise to commercial and institutional research clients through the R Systems brand and the Dell HPC Cloud Services Partnership. In addition to our industry standard solutions, R Systems Engineers assist clients in selecting the components of their optimal cluster configuration.”

Comet Supercomputer Assists in Latest LIGO Discovery

This week’s landmark discovery of gravitational and light waves generated by the collision of two neutron stars eons ago was made possible in part by signal verification and analysis performed on Comet, an advanced supercomputer based at SDSC in San Diego. “LIGO researchers have so far consumed more than 2 million hours of computational time on Comet through OSG – including about 630,000 hours each to help verify LIGO’s findings in 2015 and the current neutron star collision – using Comet’s Virtual Clusters for rapid, user-friendly analysis of extreme volumes of data, according to Würthwein.”

HPC Connects: Mapping Global Ocean Currents

In this video from the SC17 HPC Connects series, Dimitris Menemenlis from NASA JPL/Caltech describes how supercomputing enables scientists to accurately map global ocean currents. “The ocean is vast and there are still a lot of unknowns. We still can’t represent all the conditions and are pushing the boundaries of current supercomputer power,” said Menemenlis. “This is an exciting time to be an oceanographer who can use satellite observations and numerical simulations to push our understanding of ocean circulation forward.”

Visualization in Software Using Intel Xeon Phi Processors

“Intel has been at the forefront of working with software partners to develop solutions for visualization of data that will scale in the future as many-core systems such as the Intel Xeon Phi processor scale. The Intel Xeon Phi processor is extremely capable of producing visualizations that allow scientists and engineers to interactively view massive amounts of data.”

MareNostrum Supercomputer to Contribute 475 Million Core Hours to European Research

Today the Barcelona Supercomputing Center announced plans to allocate 475 million core hours on its MareNostrum supercomputer to 17 research projects as part of the PRACE initiative. Of all the nations participating in PRACE’s recent Call for Proposals, Spain is now the leading contributor of compute hours to European research.