RCE Podcast Looks at SAGE2 Scalable Amplified Group Environment

In this RCE Podcast, Brock Palen and Jeff Squyres speak with the creators of SAGE2, the Scalable Amplified Group Environment. SAGE2 is a browser-based tool to enhance data-intensive, co-located, and remote collaboration. “The original SAGE software, developed in 2004 and adopted at over one hundred international sites, was designed to enable groups to work in front of large shared displays in order to solve problems that required juxtaposing large volumes of information in ultra high-resolution. We have developed SAGE2 as a complete redesign and implementation of SAGE, using cloud-based and web-browser technologies in order to enhance data-intensive, co-located, and remote collaboration.”

Tutorial on In-Network Computing: SHARP Technology for MPI Offloads

“Increased system size and a greater reliance on system parallelism to meet computational needs require innovative system architectures to address the simulation challenges. As a step toward a new class of network co-processors, intelligent network devices that manipulate data traversing the data-center network, SHARP technology is designed to offload collective operation processing to the network. This tutorial will provide an overview of SHARP technology, its integration with MPI, the SHARP software components, and a live example of running MPI collectives.”
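The collectives SHARP targets are operations such as MPI_Allreduce, where every rank contributes data and receives the combined result; the offload happens in the network and library layer, so the application code is unchanged. As a minimal sketch of such a collective, assuming mpi4py is installed and the underlying MPI library and fabric are configured for SHARP-capable collectives (that configuration is an assumption, not shown here):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local value; the sum across all ranks is the kind of
# reduction that SHARP can process in the switch fabric instead of on the hosts.
local_value = float(rank)
global_sum = comm.allreduce(local_value, op=MPI.SUM)

if rank == 0:
    print("Sum over all ranks:", global_sum)

Launched with a few ranks (for example, mpirun -np 4 python allreduce_example.py, where the script name is illustrative), the program prints the same reduction result regardless of whether the collective was offloaded or computed host-side.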

GPUs and Flash in Radar Simulation and Anti-Submarine Warfare Applications

In this week’s Sponsored Post, Katie Garrison of One Stop Systems explains how GPU and Flash solutions are used in radar simulation and anti-submarine warfare applications. “High-performance compute and flash solutions are not just used in the lab anymore. Government agencies, particularly the military, are using GPUs and flash for complex applications such as radar simulation, anti-submarine warfare and other areas of defense that require intensive parallel processing and large amounts of data recording.”

NVIDIA Pascal GPUs come to Advanced Clustering Technologies

Missouri-based Advanced Clustering Technologies is helping customers solve challenges by integrating NVIDIA Tesla P100 accelerators into its line of high performance computing clusters. Advanced Clustering Technologies builds custom, turn-key HPC clusters that are used for a wide range of workloads including analytics, deep learning, life sciences, engineering simulation and modeling, climate and weather study, energy exploration, and improving manufacturing processes. “NVIDIA-enabled GPU clusters are proving very effective for our customers in academia, research and industry,” said Jim Paugh, Director of Sales at Advanced Clustering. “The Tesla P100 is a giant step forward in accelerating scientific research, which leads to breakthroughs in a wide variety of disciplines.”

Hammer PLC to Distribute Spectra Logic Storage in Europe

Today UK-based Hammer PLC announced that it will be a distributor of Spectra Logic storage technology in Europe. “This is an excellent opportunity to increase our high-performance computing offering to our partners and customers,” said Jason Beeson, Hammer’s Commercial Director. “By adding Spectra Logic’s bespoke data workflow storage solutions we can reach a whole new genre of highly data-dependent users who are seeking a complete data workflow, from input and day-to-day use right through to deep storage and archiving.”

Interview: Cray’s Steve Scott on What’s Next for Supercomputing

In this video from KAUST, Steve Scott of Cray explains where supercomputing is going and why there is a never-ending demand for faster and faster computers. Responsible for guiding Cray’s long-term product roadmap in high-performance computing, storage and data analytics, Mr. Scott is chief architect of several generations of systems and interconnects at Cray.

Radio Free HPC Gets the Scoop from Dan’s Daughter in Washington, D.C.

In this podcast, the Radio Free HPC team hosts Dan’s daughter Elizabeth. How did Dan get this way? We’re on a mission to find out even as Elizabeth complains of the early onset of Curmudgeon’s Syndrome. After that, we take a look at the Tsubame3.0 supercomputer coming to Tokyo Tech.

XGC Fusion Code Selected for all 3 Pre-exascale Supercomputers

When the DOE’s pre-exascale supercomputers come online soon, all three will be running an optimized version of the XGC dynamic fusion code. Developed by a team at the DOE’s Princeton Plasma Physics Laboratory (PPPL), the XGC code was one of only three codes out of more than 30 science and engineering programs selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines that are to begin operating in the United States in the early 2020s.

Adrian Cockcroft Presents: Shrinking Microservices to Functions

In this fascinating talk, Cockcroft describes how hardware networking has reshaped how services like machine learning are being developed rapidly in the cloud with AWS Lambda. “We’ve seen the same service oriented architecture principles track advancements in technology from the coarse grain services of SOA a decade ago, through microservices that are usually scoped to a more fine grain single area of responsibility, and now functions as a service, serverless architectures where each function is a separately deployed and invoked unit.”
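To make the functions-as-a-service idea concrete, here is a minimal sketch of one “separately deployed and invoked unit”: a single AWS Lambda handler written in Python. The event field and greeting logic are illustrative assumptions, not taken from Cockcroft’s talk; only the handler signature follows the standard Lambda convention for Python runtimes.

import json

def lambda_handler(event, context):
    # One function-as-a-service unit: deployed, scaled, and billed on its own,
    # rather than as one endpoint inside a larger microservice.
    name = event.get("name", "world")  # "name" is an illustrative input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name}),
    }

Each such function can be updated and invoked independently, which is the step beyond microservices that the talk describes.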

Call for Exhibitors: PASC17 in Lugano

Industry and academic institutions are invited to showcase their R&D at PASC17, an interdisciplinary event in high performance computing that brings together domain science, applied mathematics and computer science. The event takes place June 26-28 in Lugano, Switzerland. “The PASC17 Conference offers a unique opportunity for your organization to gain visibility at a national and international level, to showcase your R&D and to network with leaders in the fields of HPC simulation and data science. PASC17 builds on a successful history – with 350 attendees in 2016 – and continues to expand its program and international profile year on year.”