HPC and Precision Medicine: A New Framework for Alzheimer’s and Parkinson’s

Joe Lombardo from UNLV gave this talk at the HPC User Forum. “The University of Nevada, Las Vegas and the Cleveland Clinic Lou Ruvo Center for Brain Health have been awarded an $11 million federal grant from the National Institutes of Health’s National Institute of General Medical Sciences to advance the understanding of Alzheimer’s and Parkinson’s diseases. In this session, we will present how UNLV’s National Supercomputing Institute plays a critical role in this research by fusing brain imaging, neuropsychological and behavioral studies, and diagnostic exome sequencing models to increase our knowledge of dementia-related and age-associated degenerative disorders.”

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the industrial partnership program at the Oak Ridge Leadership Computing Facility (OLCF), KatRisk received 5 million processor hours on Titan.”

The Mont-Blanc project: Updates from the Barcelona Supercomputing Center

Filippo Mantovani from BSC gave this talk at the GoingARM workshop at SC17. “Since 2011, Mont-Blanc has pushed the adoption of Arm technology in High Performance Computing by deploying Arm-based prototypes, enhancing the system software ecosystem, and projecting the performance of current systems in order to develop new, more powerful, and less power-hungry HPC platforms based on Arm SoCs. In this talk, Filippo introduces the latest Mont-Blanc system, called Dibona, designed and integrated by Bull/ATOS, the coordinator and industrial partner of the project.”

Video: Project Cyclops comes to SC17 in a Quest to Build the World’s Fastest Node

In this video from SC17, Rich Brueckner from insideHPC describes Project Cyclops, a benchmarking quest to build the world’s fastest single node. The single-node Cyclops supercomputer demonstrates the computational power that individual scientists, engineers, artificial intelligence practitioners, and data scientists can deploy in their offices. Cyclops aims to rank well on the HPCG (High Performance Conjugate Gradients) benchmark.

Video: System Interconnects for HPC

In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Pavan Balaji from Argonne presents an overview of system interconnects for HPC. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came online in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”
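For reference, the thousand-fold figure follows directly from the metric prefixes:

$$1\ \text{exaflops} = 10^{18}\ \text{FLOP/s} = 1000 \times 10^{15}\ \text{FLOP/s} = 1000\ \text{petaflops}$$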

Video: Revolution in Computer and Data-enabled Science and Engineering

Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. His talk centers on the need for interdisciplinary research. “Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice.”

Video: Argonne’s Theta Supercomputer Architecture

Scott Parker gave this talk at the Argonne Training Program on Extreme-Scale Computing. “Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”
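The “nearly 10 quadrillion calculations per second” figure is simply the peak rating restated in plain numbers:

$$9.65\ \text{petaflops} = 9.65 \times 10^{15}\ \text{FLOP/s} \approx 10^{16}\ \text{FLOP/s} = 10\ \text{quadrillion calculations per second}$$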

A Vision for Exascale: Simulation, Data and Learning

Rick Stevens gave this talk at the recent ATPESC training program. “The ATPESC program provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training that computational scientists typically receive through formal education or other shorter courses.”

OpenHPC: Project Overview and Updates

Karl Schulz from Intel gave this talk at the MVAPICH User Group. “There is a growing sense within the HPC community of the need for an open community effort to more efficiently build, test, and deliver integrated HPC software components and tools. To address this need, OpenHPC launched as a Linux Foundation collaborative project in 2016 with combined participation from academia, national labs, and industry. The project’s mission is to provide a reference collection of open-source HPC software components and best practices in order to lower barriers to deployment and advance the use of modern HPC methods and tools.”