Extreme-scale Graph Analysis on Blue Waters

George Slota presented this talk at the Blue Waters Symposium. “In recent years, many graph processing frameworks have been introduced with the goal to simplify analysis of real-world graphs on commodity hardware. However, these popular frameworks lack scalability to modern massive-scale datasets. This work introduces a methodology for graph processing on distributed HPC systems that is simple to implement, generalizable to broad classes of graph algorithms, and scales to systems with hundreds of thousands of cores and graphs of billions of vertices and trillions of edges.”
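
Many such frameworks build on a level-synchronous, vertex-centric traversal pattern. As a rough, hedged illustration of that pattern (not code from the presented work), here is a minimal single-node BFS sketch in Python; distributed frameworks scale the same loop by partitioning vertices across ranks and exchanging frontier updates each level.

```python
from collections import defaultdict

def bfs_levels(edges, root):
    """Level-synchronous BFS: expand the frontier one level at a time.

    Distributed graph frameworks scale this same loop by partitioning
    vertices across MPI ranks and exchanging frontier updates per level.
    """
    adj = defaultdict(list)
    for u, v in edges:                      # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)

    level = {root: 0}
    frontier = [root]
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:          # first visit assigns the BFS level
                    level[v] = level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Toy graph: 5 vertices, 5 edges
print(bfs_levels([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], root=0))
# -> {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```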

Video: System Methodology—Holistic Performance Analysis on Modern Systems

“This talk will discuss various system performance issues, and the methodologies, tools, and processes used to solve them. The focus is on single systems (any operating system), including single cloud instances, and quickly locating performance issues or exonerating the system. Many methodologies will be discussed, along with recommendations for their implementation, which may be as documented checklists of tools, or custom dashboards of supporting metrics. In general, you will learn to think differently about your systems, and how to ask better questions.”
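
As a hedged illustration of the "custom dashboards of supporting metrics" idea, the Python sketch below polls a handful of host-level utilization and saturation metrics with the psutil library. The metric selection and polling interval are assumptions made for the example, not recommendations from the talk.

```python
import time
import psutil  # cross-platform library for host-level metrics

def snapshot():
    """Collect a few utilization and saturation metrics for this host."""
    vm = psutil.virtual_memory()
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_pct": psutil.cpu_percent(interval=1),  # utilization over 1 s
        "loadavg": psutil.getloadavg(),             # 1/5/15-minute load (saturation hint)
        "mem_pct": vm.percent,
        "swap_pct": psutil.swap_memory().percent,
        "disk_read_mb": disk.read_bytes / 1e6,      # cumulative since boot
        "disk_write_mb": disk.write_bytes / 1e6,
        "net_sent_mb": net.bytes_sent / 1e6,
        "net_recv_mb": net.bytes_recv / 1e6,
    }

if __name__ == "__main__":
    # A real dashboard would push these samples to a time-series store;
    # printing keeps the sketch self-contained.
    while True:
        print(snapshot())
        time.sleep(10)
```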

Overview of the MVAPICH Project and Future Roadmap

In this video from the 4th Annual MVAPICH User Group, DK Panda from Ohio State University presents: Overview of the MVAPICH Project and Future Roadmap. “This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”
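
MVAPICH2 and its variants implement the MPI standard, so existing MPI codes run on them unchanged. As a minimal, hedged example (not taken from the talk), the mpi4py snippet below performs a simple ring exchange; it can be launched with the installation's mpiexec.

```python
# Launch under MVAPICH2 (or any MPI library) with e.g.: mpiexec -n 4 python ring.py
from mpi4py import MPI  # mpi4py links against whichever MPI library is loaded

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sends its rank number to its right neighbor in a ring
# and receives from its left neighbor.
dest = (rank + 1) % size
source = (rank - 1) % size
received = comm.sendrecv(rank, dest=dest, source=source)

print(f"rank {rank} of {size} received {received} from rank {source}")
```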

Video: Exploring I/O Challenges at Exascale

“Clear trends in the past and current petascale systems (i.e., Jaguar and Titan) and the new generation of systems that will transition us toward exascale (i.e., Aurora and Summit) outline how concurrency and peak performance are growing dramatically; however, I/O bandwidth remains stagnant. In this talk, we explore challenges when dealing with I/O-ignorant high performance computing systems and opportunities for integrating I/O awareness in these systems.”
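
One widely used way to ease the pressure on stagnant I/O bandwidth is collective, aggregated I/O rather than many small independent writes. The mpi4py sketch below shows that pattern with MPI-IO; the file name and sizes are illustrative assumptions, not details from the talk.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a contiguous slice of a global array (sizes are illustrative).
local_n = 1024
local_data = np.full(local_n, rank, dtype=np.float64)

# Collective open and write let the MPI-IO layer aggregate requests
# instead of every rank hitting the file system independently.
fh = MPI.File.Open(comm, "checkpoint.bin",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * local_n * local_data.itemsize
fh.Write_at_all(offset, local_data)  # collective write at a per-rank offset
fh.Close()
```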

Avere Systems Teams with Cycle Computing for High Performance Multi-cloud Orchestration

Today Avere Systems and Cycle Computing announced a technology integration that enables hybrid high-performance computing (HPC) in popular public cloud environments. By integrating Avere's vFXT Edge filer cloud-bursting technology with Cycle Computing's CycleCloud offering, users can now launch an Avere tiered file system on demand, linked directly to CycleCloud-managed, scalable compute nodes on cloud providers such as AWS, Google Cloud Platform, and Microsoft Azure.

Tutorial: GPU Performance Nuggets

In this video from the 2016 Blue Waters Symposium, Carl Pearson and Simon Garcia De Gonzalo from the University of Illinois present: GPU Performance Nuggets. “In this talk, we introduce a pair of Nvidia performance tools available on Blue Waters. We discuss what the GPU memory hierarchy provides for your application. We then present a case study that explores if memory hierarchy optimization can go too far.”
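
As a hedged sketch of the kind of memory hierarchy optimization the talk refers to (not the presenters' case study), the Numba CUDA kernel below stages each block's slice of the input in on-chip shared memory before reducing it, rather than repeatedly reading global memory.

```python
import numpy as np
from numba import cuda, float64

THREADS = 128  # threads per block; compile-time constant for the shared-memory tile

@cuda.jit
def block_sum(x, partial):
    """Reduce each block's slice of x to one value using shared memory."""
    tile = cuda.shared.array(THREADS, float64)  # on-chip, shared within the block
    tid = cuda.threadIdx.x
    i = cuda.grid(1)

    # Stage one element per thread into fast shared memory.
    tile[tid] = x[i] if i < x.shape[0] else 0.0
    cuda.syncthreads()

    # Tree reduction entirely in shared memory (no further global reads).
    stride = THREADS // 2
    while stride > 0:
        if tid < stride:
            tile[tid] += tile[tid + stride]
        cuda.syncthreads()
        stride //= 2

    if tid == 0:
        partial[cuda.blockIdx.x] = tile[0]

n = 1 << 20
x = np.random.rand(n)
blocks = (n + THREADS - 1) // THREADS
partial = cuda.device_array(blocks, dtype=np.float64)
block_sum[blocks, THREADS](cuda.to_device(x), partial)
print(np.isclose(partial.copy_to_host().sum(), x.sum()))  # True
```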

Video: Combining Simulation & Experiment for Nanoscale 3-D Printing

In this video, ORNL researchers use supercomputers to simulate nanomanufacturing, the process of building microscopic devices atom by atom. Simulated here is the construction of a 250-nanometer 3-D cube by focused electron beam induced deposition.

Video: What is Driving Heterogeneity in HPC?

Wen-mei Hwu from the University of Illinois at Urbana-Champaign presented this talk at the Blue Waters Symposium. “In the 21st Century, we are able to understand, design, and create what we can compute. Computational models are allowing us to see even farther, go back and forth in time, learn better, test hypotheses that cannot be verified any other way, and create safe artificial processes.”

Nvidia Donates DGX-1 Machine Learning Supercomputer to OpenAI Non-profit

This week Nvidia CEO Jen-Hsun Huang hand-delivered one of the company’s new DGX-1 Machine Learning supercomputers to the OpenAI non-profit in San Francisco. “The DGX-1 is a huge advance,” OpenAI Research Scientist Ilya Sutskever said. “It will allow us to explore problems that were completely unexplored before, and it will allow us to achieve levels of performance that weren’t achievable.”

Taming Heterogeneity in HPC – The DEEP-ER Take

Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on application developers. To this end, DEEP/-ER provides a familiar programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code annotation, as proposed by DEEP/-ER, is a major advancement.”