Video: Best Practices between HPC centers and Industrial Users

Bill Kramer from NCSA gave this talk at the HPC User Forum in Milwaukee. “NCSA and Hyperion Research released a new study that examines HPC and Industry partnerships. Aimed at identifying and understanding best practices in partnerships between public high performance computing centers and private industry, the study seeks to promote the vital transfer of scientific knowledge to industry and the important transfer of industrial experience to the scientific community.”

Video: Characterization and Benchmarking of Deep Learning

Natalia Vassilieva from HP Labs gave this talk at the HPC User Forum in Milwaukee. “Our Deep Learning Cookbook is based on a massive collection of performance results for various deep learning workloads on different hardware/software stacks, and analytical performance models. This combination enables us to estimate the performance of a given workload and to recommend an optimal hardware/software stack for that workload. Additionally, we use the Cookbook to detect bottlenecks in existing hardware and to guide the design of future systems for artificial intelligence and deep learning.”
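The details of the Cookbook's analytical models are not given in the talk summary, but the general idea of predicting a workload's performance from hardware characteristics can be illustrated with a simple roofline-style estimate. This is a hypothetical sketch, not the Cookbook's actual model: the function, parameter names, and hardware numbers below are all illustrative assumptions.

```python
# Hedged sketch (NOT the actual Deep Learning Cookbook model):
# a roofline-style bound where an operation's time is limited by
# either compute throughput or memory bandwidth, whichever is slower.

def estimate_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound execution time in seconds for one operation."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Example: a 4096x4096 fp32 matrix multiply (a common deep learning
# kernel) on hypothetical hardware: 10 TFLOP/s peak compute,
# 500 GB/s memory bandwidth (illustrative numbers only).
n = 4096
flops = 2 * n ** 3              # multiply-adds for a dense GEMM
bytes_moved = 3 * n * n * 4     # three fp32 matrices, ignoring cache reuse
t = estimate_time(flops, bytes_moved, 10e12, 500e9)
# For these numbers the kernel is compute-bound, so t == flops / 10e12.
```

A model like this, fitted against measured benchmark results, is one plausible way to rank candidate hardware/software stacks for a given workload before buying or deploying them.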

Video: The DOE Exascale Computing Project

Doug Kothe from ORNL gave this talk at the HPC User Forum in Milwaukee. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of high-performance computing (HPC) for the United States and accelerating the development of a capable exascale computing ecosystem.”

HPC Challenges in Simulating the World’s Most Powerful Tornadoes

“What makes this work significant is the use of supercomputing resources to produce simulations of supercells where data is saved with extremely high spatial and temporal resolution, and the use of visualization techniques (such as volume rendering and trajectory clouds) to produce video that exposes the highly variable flow features that occur throughout the life of the simulated storms. Some of these simulations contain long lived tornadoes producing near-surface winds exceeding 300 mph.”

Trends in the Worldwide HPC Market

In this video from the HPC User Forum in Milwaukee, Earl Joseph and Steve Conway from Hyperion Research present an update on the HPC, AI, and Storage markets. “Hyperion Research forecasts that the worldwide HPC server-based AI market will expand at a 29.5% CAGR to reach more than $1.26 billion in 2021, up more than three-fold from $346 million in 2016.”
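The forecast figures are internally consistent, which is easy to check: compounding the 2016 base at the stated annual growth rate for the five years to 2021 reproduces the 2021 figure. A quick sanity check of that arithmetic:

```python
# Verify the Hyperion forecast arithmetic: $346M in 2016 growing at a
# 29.5% compound annual growth rate (CAGR) over 5 years to 2021.
start = 346e6      # 2016 market size in dollars
cagr = 0.295       # 29.5% compound annual growth rate
years = 5          # 2016 -> 2021

end = start * (1 + cagr) ** years
# end comes out to roughly $1.26 billion, matching the forecast.

# Going the other way: recover the CAGR from the two endpoints.
implied_cagr = (1.26e9 / start) ** (1 / years) - 1
# implied_cagr is approximately 0.295, i.e. 29.5% per year.
```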

Video: Europe’s HPC Strategy

Leonardo Flores from the European Commission gave this talk at the HPC User Forum in Milwaukee. “High-Performance Computing is a strategic resource for Europe’s future as it allows researchers to study and understand complex phenomena while allowing policy makers to make better decisions and enabling industry to innovate in products and services. The European Commission funds projects to address these needs.”

Agenda Posted for September HPC User Forum in Milwaukee

Hyperion Research has posted the preliminary agenda for the HPC User Forum Sept. 5-7 in Milwaukee, Wisconsin. “The HPC User Forum community includes thousands of people from the steering committee, member organizations, sponsors and everyone who has attended an HPC User Forum meeting. Our mission is to promote the health of the global HPC industry and address issues of common concern to users.”

OpenHPC: A Comprehensive System Software Stack

Bob Wisniewski from Intel presents: OpenHPC: A Cohesive and Comprehensive System Software Stack. “OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries.”

Leveraging HPC for Real-Time Quantitative Magnetic Resonance Imaging

W. Joe Allen from TACC gave this talk at the HPC User Forum. “The Agave Platform brings the power of high-performance computing into the clinic,” said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. “This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient.”

Video: IBM Datacentric Servers & OpenPOWER

“Big data analytics, machine learning and deep learning are among the most rapidly growing workloads in the data center. These workloads have the compute performance requirements of traditional technical computing or high performance computing, coupled with a much larger volume and velocity of data. Conventional data center architectures have not kept up with the needs for these workloads. To address these new client needs, IBM has adopted an innovative, open business model through its OpenPOWER initiative.”