Podcast: How Humans Bias AI

In this AI Podcast, Kris Hammond from Narrative Science explains that while it’s easy to think of AI as cold, unbiased, and objective, it is also very good at repeating our own bias against us. “I am not saying that we should give ourselves over to algorithmic decision-making. We should always remember that just as the machine is free of the cognitive biases that often defeat us, we have information about the world that the machine does not. My argument is that, with intelligent systems, we now have the opportunity to be genuinely smarter.”

PRACE Publishes Best Practices for GPU Computing

The European PRACE initiative has published a Best Practice Guide for GPU Computing. “This Best Practice Guide describes GPUs: it includes information on how to get started with programming GPUs, which cannot be used in isolation but as “accelerators” in conjunction with CPUs, and how to get good performance. Focus is given to NVIDIA GPUs, which are most widespread today.”

Call for Papers: HPEC 2017 in Waltham

The IEEE High Performance Extreme Computing Conference (HPEC 2017) has issued its Call for Papers. The conference takes place September 12-14 in Waltham, MA. “HPEC is the largest computing conference in New England and is the premier conference in the world on the convergence of High Performance and Embedded Computing. We are passionate about performance. Our community is interested in computing hardware, software, systems and applications where performance matters. We welcome experts and people who are new to the field.”

Job of the Week: GPU Performance Analysis Architect at Nvidia

Nvidia is seeking a GPU Performance Analysis Architect in our Job of the Week. “The NVIDIA GPU Compute Architecture group is seeking world-class architects to analyze processor and system architecture performance of full applications in machine learning, automotive, and high-performance computing. This position offers the opportunity to have a real impact on the hardware and software that underlies the most exciting trends in modern computing.”

NSF Funds HPC Cluster at Penn State

The Penn State Cyber-Laboratory for Astronomy, Materials, and Physics (CyberLAMP) is acquiring a high-performance computer cluster, funded by a grant from the National Science Foundation, that will facilitate interdisciplinary research and training in cyberscience. The hybrid cluster will combine general-purpose central processing unit (CPU) cores with specialized hardware accelerators, including the latest generation of NVIDIA graphics processing units (GPUs) and Intel Xeon Phi processors.

GPUs and Flash in Radar Simulation and Anti-Submarine Warfare Applications

In this week’s Sponsored Post, Katie Garrison of One Stop Systems explains how GPU and flash solutions are used in radar simulation and anti-submarine warfare applications. “High-performance compute and flash solutions are not just used in the lab anymore. Government agencies, particularly the military, are using GPUs and flash for complex applications such as radar simulation, anti-submarine warfare and other areas of defense that require intensive parallel processing and large amounts of data recording.”

NVIDIA Pascal GPUs come to Advanced Clustering Technologies

Missouri-based Advanced Clustering Technologies is helping customers solve challenges by integrating NVIDIA Tesla P100 accelerators into its line of high performance computing clusters. Advanced Clustering Technologies builds custom, turn-key HPC clusters that are used for a wide range of workloads including analytics, deep learning, life sciences, engineering simulation and modeling, climate and weather study, energy exploration, and improving manufacturing processes. “NVIDIA-enabled GPU clusters are proving very effective for our customers in academia, research and industry,” said Jim Paugh, Director of Sales at Advanced Clustering. “The Tesla P100 is a giant step forward in accelerating scientific research, which leads to breakthroughs in a wide variety of disciplines.”

Interview: Cray’s Steve Scott on What’s Next for Supercomputing

In this video from KAUST, Steve Scott of Cray explains where supercomputing is going and why there is a never-ending demand for faster and faster computers. Responsible for guiding Cray’s long-term product roadmap in high-performance computing, storage, and data analytics, Mr. Scott has been chief architect of several generations of Cray systems and interconnects.

Radio Free HPC Gets the Scoop from Dan’s Daughter in Washington, D.C.

In this podcast, the Radio Free HPC team hosts Dan’s daughter Elizabeth. How did Dan get this way? We’re on a mission to find out even as Elizabeth complains of the early onset of Curmudgeon’s Syndrome. After that, we take a look at the Tsubame3.0 supercomputer coming to Tokyo Tech.

Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”