Call for Papers: International Workshop on Accelerators and Hybrid Exascale Systems

The eighth annual International Workshop on Accelerators and Hybrid Exascale Systems (AsHES) has issued its Call for Papers. Held in conjunction with the 32nd IEEE International Parallel and Distributed Processing Symposium, the AsHES Workshop takes place May 23 in Vancouver, Canada. “This workshop focuses on understanding the implications of accelerators and heterogeneous designs on the hardware systems, porting applications, performing compiler optimizations, and developing programming environments for current and emerging systems. It seeks to ground accelerator research through studies of application kernels or whole applications on such systems, as well as tools and libraries that improve the performance and productivity of applications on these systems.”

Video: Molecular Simulation at the Mesoscale

Dr. Rommie E. Amaro gave this talk at SC17. “We are developing new capabilities for multi-scale dynamic simulations that cross spatial scales from the molecular (angstrom) to cellular ultrastructure (near micron), and temporal scales from the picoseconds of macromolecular dynamics to the physiologically important time scales of organelles and cells (milliseconds to seconds).”

Intel’s Al Gara Presents: Technology Opportunities Like Never Before

Al Gara from Intel gave this talk at the Intel HPC Developer Conference in Denver. “Technology visionaries architecting the future of HPC and AI will share the key challenges as well as Intel’s direction. The talk will cover the adaptation of AI into HPC workflows, along with their respective architectural developments, upcoming transitions and range of solutions, technology opportunities, and the driving forces behind them.”

How AI is Reshaping HPC

Karl Freund from Moor Insights gave this talk at SC17. “Researchers have begun putting Machine Learning to work solving problems that do not lend themselves well to traditional numerical analysis, or that require unaffordable computational capacity. This talk will discuss three primary approaches being used today, and will share some case studies that show significant promise of lower latency, improved accuracy, and lower cost.”

PEARC18 Conference Announces Lineup of Keynote Speakers

The PEARC18 Conference has announced its lineup of Keynote Speakers. The event takes place July 22-27 in Pittsburgh. “PEARC18 is for everyone who works to realize the promise of advanced computing as the enabler of seamless creativity. Scientists and engineers, scholars and planners, artists and makers, students and teachers all depend on the efficiency, security, reliability and sustainability of increasingly complex and powerful digital infrastructure systems. If your work addresses these challenges in any way, PEARC18 is the forum to share, learn and inspire progress.”

Steve Oberlin from NVIDIA Presents: HPC Exascale & AI

Steve Oberlin from NVIDIA gave this talk at SC17 in Denver. “HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by letting researchers analyze massive amounts of data faster and more effectively. It’s a transformational new tool for gaining insights where simulation alone cannot fully predict the real world.”

Video: Red Hat Showcases ARM Support for HPC at SC17

In this video from SC17, Jon Masters from Red Hat describes the company’s Multi-Architecture HPC capabilities, including the new ARM-powered Apollo 70 server from HPE. “At SC17, you will also have an opportunity to see the power and flexibility of Red Hat Enterprise Linux across multiple architectures, including Arm v8-A, x86_64 and IBM POWER Little Endian.”

Adapting Deep Learning to New Data Using ORNL’s Titan Supercomputer

Travis Johnston from ORNL gave this talk at SC17. “Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search. MENNDL is capable of evolving not only the numeric hyper-parameters but also the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present is an improvement over hyper-parameter sweeps because it does not require assumptions about independence of parameters and is more computationally feasible than grid-search.”
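The search MENNDL performs is evolutionary: candidate network configurations are evaluated, the fittest survive, and mutated offspring replace the rest. As a rough, generic illustration of that idea (not MENNDL’s actual implementation, which runs at scale on Titan via Apache Spark), here is a minimal Python sketch; the search space, the fitness stand-in, and the population settings are invented for illustration.

```python
# Toy evolutionary hyper-parameter search (illustrative only, not MENNDL).
# A population of candidate configurations is scored, the fittest survive,
# and mutated offspring fill out the next generation.
import random

# Hypothetical search space of numeric hyper-parameters.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_layers": [2, 4, 8],
    "filters": [16, 32, 64],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(candidate):
    # Change one hyper-parameter at random to produce an offspring.
    child = dict(candidate)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(candidate):
    # Placeholder: in practice this would briefly train the candidate network
    # on a compute node and return its validation accuracy.
    return -abs(candidate["num_layers"] - 4) + random.random()

def evolve(generations=10, population_size=8, survivors=2):
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        population = parents + [
            mutate(random.choice(parents))
            for _ in range(population_size - survivors)
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```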

The AI Future is Closer than it Seems

Gadi Singer gave this talk at the Intel HPC Developer Conference in Denver. “Technology visionaries architecting the future of high-performance computing and artificial intelligence (AI) will share the key challenges as well as Intel’s direction. The talk will cover the adaptation of AI into HPC workflows, along with their respective architectural developments, upcoming transitions and range of solutions, technology opportunities, and the driving forces behind them.”

Visualization on GPU Accelerated Supercomputers

Peter Messmer from NVIDIA gave this talk at SC17. “This talk is a summary of the ongoing HPC visualization activities, as well as a description of the technologies behind the developer zone shown in the booth.” Messmer is a principal software engineer in NVIDIA’s Developer Technology organization, working with clients to accelerate their scientific discovery process with GPUs.