Paul Messina Presents: A Path to Capable Exascale Computing

Paul Messina presented this talk at the 2016 Argonne Training Program on Extreme-Scale Computing. “The President’s NSCI initiative calls for the development of Exascale computing capabilities. The U.S. Department of Energy has been charged with carrying out that role in an initiative called the Exascale Computing Project (ECP).” Messina has been tapped to lead the project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge, and Sandia. The project program office is located at Oak Ridge.

Exascale Computing – What are the Goals and the Baseline?

Thomas Schulthess presented this talk at the MVAPICH User Group. “Implementation of exascale computing will be different in that application performance is supposed to play a central role in determining system performance, rather than just the floating-point performance of the high-performance Linpack benchmark. This immediately raises the question of what yardstick we will use to measure progress toward exascale computing. I will discuss the performance improvements needed to reach kilometer-scale global climate and weather simulations. This challenge will probably require more than exascale performance.”
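
One common yardstick for climate workloads is simulated years per wall-clock day (SYPD). As a back-of-envelope sketch of why kilometer-scale simulation may need more than exascale performance, the Python snippet below estimates the speedup a resolution jump demands; the scaling exponent and all numbers are illustrative assumptions, not figures from the talk.

```python
# Back-of-envelope estimate of the speedup needed for km-scale climate runs.
# All numbers are hypothetical placeholders, not figures from the talk.

def required_speedup(current_sypd, target_sypd, current_dx_km, target_dx_km):
    """Estimate the overall speedup needed to hit a target simulation rate
    (in simulated years per day, SYPD) at a finer grid spacing.

    Assumes the common rough scaling that refining the horizontal grid
    raises cost as (dx_old / dx_new)**3: two horizontal dimensions plus
    a CFL-limited time step.
    """
    resolution_penalty = (current_dx_km / target_dx_km) ** 3
    rate_gap = target_sypd / current_sypd
    return resolution_penalty * rate_gap

# Hypothetical example: 0.2 SYPD at 10 km today, aiming for 1 SYPD at 1 km.
print(required_speedup(current_sypd=0.2, target_sypd=1.0,
                       current_dx_km=10.0, target_dx_km=1.0))  # -> 5000.0
```

Even with these modest placeholder numbers, the required factor lands in the thousands, which is why application-level yardsticks paint a harder picture than Linpack flops.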

Simulating the Earliest Generations of Galaxies with Enzo and Blue Waters

“Galaxies are complex—many physical processes operate simultaneously, and over a huge range of scales in space and time. As a result, accurately modeling the formation and evolution of galaxies over the lifetime of the universe presents tremendous technical challenges. In this talk I will describe some of the important unanswered questions regarding galaxy formation, discuss in general terms how we simulate the formation of galaxies on a computer, and present simulations (and accompanying published results) that the Enzo collaboration has recently done on the Blue Waters supercomputer. In particular, I will focus on the transition from metal-free to metal-enriched star formation in the universe, as well as the luminosity function of the earliest generations of galaxies and how we might observe it with the upcoming James Webb Space Telescope.”

OpenHPC – Community Building Blocks for HPC Systems

Karl Schulz from Intel presented this talk at the 4th Annual MVAPICH User Group meeting. “Today, many supercomputing sites spend considerable effort aggregating a large suite of open-source projects on top of their chosen base Linux distribution in order to provide a capable HPC environment for their users. This presentation will introduce a new, open-source HPC community (OpenHPC) that is focused on providing HPC-centric package builds for a variety of common building-blocks in an effort to minimize duplication, implement integration testing to gain validation confidence, incorporate ongoing novel R&D efforts, and provide a platform to share configuration recipes from a variety of sites.”

Students Learn Supercomputing at the Summer of HPC in Barcelona

In this video, students describe their learning experience at the 2016 PRACE Summer of HPC program in Barcelona. “The PRACE Summer of HPC is a PRACE outreach and training program that offers summer placements at top HPC centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results.”

Extreme-scale Graph Analysis on Blue Waters

George Slota presented this talk at the Blue Waters Symposium. “In recent years, many graph processing frameworks have been introduced with the goal of simplifying the analysis of real-world graphs on commodity hardware. However, these popular frameworks lack scalability to modern massive-scale datasets. This work introduces a methodology for graph processing on distributed HPC systems that is simple to implement, generalizes to broad classes of graph algorithms, and scales to systems with hundreds of thousands of cores and graphs of billions of vertices and trillions of edges.”
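
The abstract does not spell out the methodology itself, but most distributed graph frameworks parallelize a level-synchronous, frontier-based traversal. The minimal serial BFS below is my own sketch (with a hypothetical toy graph) of the per-vertex pattern that such systems partition across ranks at scale, not code from the talk.

```python
def bfs_levels(adjacency, source):
    """Level-synchronous BFS over an adjacency-list graph.

    Distributed graph frameworks parallelize exactly this pattern: each
    iteration expands the current frontier of vertices, and the per-vertex
    work is partitioned across compute nodes.
    """
    level = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adjacency[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Tiny example graph (vertex -> neighbors).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_levels(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```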

Video: System Methodology—Holistic Performance Analysis on Modern Systems

“This talk will discuss various system performance issues and the methodologies, tools, and processes used to solve them. The focus is on single systems (any operating system), including single cloud instances, and on quickly locating performance issues or exonerating the system. Many methodologies will be discussed, along with recommendations for their implementation, which may take the form of documented checklists of tools or custom dashboards of supporting metrics. In general, you will learn to think differently about your systems and how to ask better questions.”
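
As one concrete illustration of turning such a methodology into a scripted checklist item, the sketch below samples CPU utilization and a rough saturation signal directly from /proc. This is my own Linux-specific example, not material from the talk, which addresses any operating system.

```python
import time

def cpu_utilization(interval=1.0):
    """Sample overall CPU utilization from /proc/stat (Linux only).

    Utilization = 1 - (idle-time delta / total-time delta) over the interval.
    """
    def read_times():
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]  # first line: aggregate "cpu"
        values = list(map(int, fields))
        idle = values[3] + values[4]  # idle + iowait counted as idle here
        return idle, sum(values)

    idle0, total0 = read_times()
    time.sleep(interval)
    idle1, total1 = read_times()
    return 1.0 - (idle1 - idle0) / (total1 - total0)

def cpu_saturation():
    """Read the 1-minute load average as a rough CPU saturation signal."""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

print(f"utilization: {cpu_utilization():.0%}, 1-min load: {cpu_saturation():.2f}")
```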

Overview of the MVAPICH Project and Future Roadmap

In this video from the 4th Annual MVAPICH User Group, DK Panda from Ohio State University presents: Overview of the MVAPICH Project and Future Roadmap. “This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”
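
MVAPICH2 and its variants implement the standard MPI interface, so application code stays portable across the family. As a minimal sketch, the snippet below uses the mpi4py bindings (which can be built against MVAPICH2 or any other MPI library) to run a simple allreduce; it exercises generic MPI, nothing MVAPICH-specific.

```python
# Minimal MPI example; run with, e.g.: mpirun -np 4 python allreduce_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank contributes its rank number; the sum lands on every rank.
total = comm.allreduce(rank, op=MPI.SUM)
print(f"rank {rank} of {size}: sum of ranks = {total}")
```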

Video: Exploring I/O Challenges at Exascale

“Clear trends in past and current petascale systems (e.g., Jaguar and Titan) and in the new generation of systems that will transition us toward exascale (e.g., Aurora and Summit) show concurrency and peak performance growing dramatically while I/O bandwidth remains stagnant. In this talk, we explore the challenges of dealing with I/O-ignorant high-performance computing systems and the opportunities for integrating I/O awareness into these systems.”
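
One common application-level response to stagnant bandwidth is aggregating many small writes into a few large ones before they reach the file system. The sketch below is my own illustration of that general idea, not a technique from the talk.

```python
class BufferedWriter:
    """Aggregate many small writes into fewer large ones.

    With I/O bandwidth growing far more slowly than concurrency, issuing
    millions of tiny writes is costly; batching them into large, contiguous
    writes is one basic form of I/O awareness.
    """
    def __init__(self, path, threshold=4 * 1024 * 1024):
        self.f = open(path, "wb")
        self.threshold = threshold  # flush once ~4 MiB has accumulated
        self.chunks = []
        self.pending = 0

    def write(self, data: bytes):
        self.chunks.append(data)
        self.pending += len(data)
        if self.pending >= self.threshold:
            self.flush()

    def flush(self):
        if self.chunks:
            self.f.write(b"".join(self.chunks))  # one large write call
            self.chunks, self.pending = [], 0

    def close(self):
        self.flush()
        self.f.close()

# Usage: a million tiny records become a handful of large writes.
w = BufferedWriter("/tmp/records.bin")
for i in range(1_000_000):
    w.write(i.to_bytes(8, "little"))
w.close()
```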

Avere Systems Teams with Cycle Computing for High Performance Multi-cloud Orchestration

Today Avere Systems and Cycle Computing announced a technology integration that enables hybrid high-performance computing (HPC) in popular public cloud computing environments. By integrating the Avere vFXT Edge filer cloud bursting technology with Cycle Computing’s CycleCloud offering, users are now able to launch an Avere tiered file system on demand linked directly with the CycleCloud managed scalable compute nodes through cloud providers like AWS, Google Cloud Platform and Microsoft Azure.