

Application Profiling at the HPCAC High Performance Center

“Achieving good scalability on HPC scientific applications typically requires a solid understanding of the workload, gained by performing profile analysis and comparing behavior on different hardware to pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be used to demonstrate various methods of profiling and analysis to determine the bottlenecks, and the effectiveness of tuning to improve application performance, based on tests conducted at the HPC Advisory Council High Performance Center.”
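The profile-analysis workflow the session describes can be sketched with Python's built-in cProfile module; the compute kernel below is a hypothetical stand-in for a real HPC workload, not an application from the session:

```python
import cProfile
import io
import pstats

def compute_kernel(n):
    """Hypothetical hot loop standing in for an HPC workload."""
    total = 0.0
    for i in range(1, n):
        total += 1.0 / (i * i)
    return total

def profile_workload():
    profiler = cProfile.Profile()
    profiler.enable()
    result = compute_kernel(100_000)
    profiler.disable()
    # Sort by cumulative time so the bottleneck function surfaces first.
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
    stats.print_stats(5)
    return result, stream.getvalue()

result, report = profile_workload()
print(report.splitlines()[0])
```

On a real cluster the same loop would be repeated per node type or interconnect, comparing the resulting profiles to localize the bottleneck.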

The REX Neo: An Energy Efficient New Processor Architecture

“With a team of four and less than $2 million, REX has taken a design concept to reality in under a year with a 16 core processor manufactured on a modern TSMC 28nm process node. With this test silicon, REX is breaking the traditional semiconductor industry idea that it takes large teams along with tens or even hundreds of millions of dollars to deliver a groundbreaking processor. This talk will feature an overview of the Neo ISA, microarchitecture review of the first test silicon, along with a live hardware/software demonstration.”

Beyond the Moore’s Law Cliff: The Next 1000X

In this video from the 2017 HPC Advisory Council Stanford Conference, Subhasish Mitra from Stanford presents: Beyond the Moore’s Law Cliff: The Next 1000X. Professor Subhasish Mitra directs the Robust Systems Group in the Department of Electrical Engineering and the Department of Computer Science at Stanford University, where he is the Chambers Faculty Scholar of Engineering. Prior to joining Stanford, he was a Principal Engineer at Intel Corporation. He received his Ph.D. in Electrical Engineering from Stanford University.

Video: Multi-Physics Methods, Modeling, Simulation & Analysis

“Through multiscale simulation of the circulatory system, it is now possible to model this surgery and optimize it using state-of-the-art optimization techniques. In-silico analysis has allowed us to test new surgical designs without posing any risk to the patient’s life. I will show the outcome of this study, which is a novel surgical option that may revolutionize current clinical practice.”

Video: Containerizing Distributed Pipes

Hagen Toennies from Gaikai Inc. presented these Best Practices at the 2017 HPC Advisory Council Stanford Conference. “In this talk we will present how we enable distributed, Unix style programming using Docker and Apache Kafka. We will show how we can take the famous Unix Pipe Pattern and apply it to a Distributed Computing System.”
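The pipe pattern the talk applies to Kafka can be sketched without any infrastructure: below, Python generators play the role of containerized stages, and in the talk's architecture a Kafka topic would replace the in-memory stream as the pipe between them (the stage names here are illustrative, not from the talk):

```python
def source(lines):
    # Analogous to `cat`: emits input records downstream.
    for line in lines:
        yield line

def grep(pattern, stream):
    # Analogous to `grep pattern`: passes through matching records only.
    for line in stream:
        if pattern in line:
            yield line

def to_upper(stream):
    # Analogous to `tr a-z A-Z`: transforms each record.
    for line in stream:
        yield line.upper()

# Compose stages like a shell pipeline:
#   cat lines | grep kafka | tr a-z A-Z
# In the distributed version, each stage runs in its own Docker container
# and reads from / writes to Kafka topics instead of generators.
pipeline = to_upper(grep("kafka", source(["apache kafka", "docker", "kafka streams"])))
print(list(pipeline))
```

The composition is lazy, like a real pipe: each record flows through all stages as it is produced rather than being materialized between steps.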

TACC’s Dan Stanzione on the Challenges Driving HPC

In this video from KAUST, Dan Stanzione, executive director of the Texas Advanced Computing Center, shares his insight on the future of high performance computing and the challenges faced by institutions as the demand for HPC, cloud and big data analysis grows. “Dr. Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the Executive Director post on July 1, 2014.”

Video: An Overview of the Blue Waters Supercomputer at NCSA

In this video, Robert Brunner from NCSA presents: Blue Waters System Overview. “Blue Waters is one of the most powerful supercomputers in the world. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.”

RCE Podcast Looks at SAGE2 Scalable Amplified Group Environment

In this RCE Podcast, Brock Palen and Jeff Squyres speak with the creators of SAGE2, the Scalable Amplified Group Environment. SAGE2 is a browser-based tool to enhance data-intensive, co-located, and remote collaboration. “The original SAGE software, developed in 2004 and adopted at over one hundred international sites, was designed to enable groups to work in front of large shared displays in order to solve problems that required juxtaposing large volumes of information at ultra-high resolution. We developed SAGE2 as a complete redesign and implementation of SAGE, using cloud-based and web-browser technologies in order to enhance data-intensive co-located and remote collaboration.”

Tutorial on In-Network Computing: SHARP Technology for MPI Offloads

“Increased system size and a greater reliance on system parallelism to meet computational needs require innovative system architectures to address the simulation challenges. As a step toward a new class of network co-processors (intelligent network devices that manipulate data traversing the data-center network), SHARP technology is designed to offload collective operation processing to the network. This tutorial will provide an overview of SHARP technology, its integration with MPI, the SHARP software components, and a live example of running MPI collectives.”
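The core idea behind offloading a collective, aggregating partial results as data moves up a tree of network devices rather than at the endpoints, can be sketched in plain Python. This is a toy model of the dataflow, not the SHARP protocol or its API:

```python
def tree_allreduce(values, fanout=2):
    """Toy model of an aggregation-tree sum-allreduce.

    Each inner node (playing the role of a switch) sums the partial
    results of its children, so only one combined value reaches the
    root instead of one message per rank. Illustrative only; the real
    SHARP protocol runs inside InfiniBand switches.
    """
    level = list(values)
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), fanout):
            # Switch-level aggregation of up to `fanout` children.
            next_level.append(sum(level[i:i + fanout]))
        level = next_level
    total = level[0]
    # In an allreduce, the root's result is broadcast back to every rank.
    return [total] * len(values)

print(tree_allreduce([1, 2, 3, 4, 5]))
```

The win is in message count at the root: with N ranks and fanout k, the tree delivers one aggregated value per subtree instead of N point-to-point contributions.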

Adrian Cockcroft Presents: Shrinking Microservices to Functions

In this fascinating talk, Cockcroft describes how hardware networking has reshaped how services like machine learning are being developed rapidly in the cloud with AWS Lambda. “We’ve seen the same service-oriented architecture principles track advancements in technology from the coarse-grained services of SOA a decade ago, through microservices that are usually scoped to a more fine-grained single area of responsibility, and now functions as a service, serverless architectures where each function is a separately deployed and invoked unit.”
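The "separately deployed and invoked unit" Cockcroft describes is, concretely, just a handler function plus its own deployment. A minimal AWS Lambda-style handler in Python, where the event payload shape is illustrative rather than from the talk:

```python
import json

def handler(event, context=None):
    """A single function-as-a-service unit, Lambda-style.

    `event` carries the invocation payload; `context` holds runtime
    metadata (unused in this sketch). Each such function is deployed,
    versioned, and scaled independently of every other function.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation, as a test harness would do before deployment.
response = handler({"name": "serverless"})
print(response["body"])
```

The contrast with a microservice is that there is no long-running process to manage: the platform invokes the function per event and bills per invocation.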