

1000x Faster Deep-Learning at Petascale Using Intel Xeon Phi Processors

A multi-year effort to scale the training of deep-learning neural networks has resulted in the first demonstration of petascale deep-learning training performance, and, further, delivered that performance while solving real science problems. The result reflects the combined efforts of NERSC (National Energy Research Scientific Computing Center), Stanford and Intel to address real-world use cases rather than simply report on performance benchmarks.

How Manufacturing will Leap Forward with Exascale Computing

In this special guest feature, Jeremy Thomas from Lawrence Livermore National Lab writes that exascale computing will be a vital boost to the U.S. manufacturing industry. “This is much bigger than any one company or any one industry. If you consider any industry, exascale is truly going to have a sizeable impact, and if a country like ours is going to be a leader in industrial design, engineering and manufacturing, we need exascale to keep the innovation edge.”

Video: The AI Initiative at NIST

Michael Garris from NIST gave this talk at the HPC User Forum. “AI must be developed in a trustworthy manner to ensure reliability and safety. NIST cultivates trust in AI technology by developing and deploying standards, tests and metrics that make technology more secure, usable, interoperable and reliable, and by strengthening measurement science. This work is critically relevant to building the public trust of rapidly evolving AI technologies.”

Radio Free HPC Looks at How Blockchain Could Prevent Fake News

In this podcast, the Radio Free HPC team looks at Henry Newman’s recent proposal to use Blockchain as a way to combat Fake News. Henry shares that this rant resulted from what he saw as an egregious story making the rounds that could easily have been quashed.

How the POP Performance Optimization Centre at BSC is Speeding up HPC in Europe

In this video, Jesús Labarta from the Barcelona Supercomputing Center describes the POP (Performance Optimization and Productivity) Centre of Excellence led by BSC. According to Labarta, POP’s performance analysis results in performance improvements ranging from 10-15% to more than 10 times. Best of all, the POP services are free of charge to organizations / SMEs / ISVs / companies in the EU!

Introducing the European EXDCI initiative for HPC

“The objective of the European Extreme Data & Computing Initiative (EXDCI) is to support the development and implementation of a common strategy for the European HPC Ecosystem. One of the main goals of the meeting in Bologna was to set up a roadmap for future developments and for other parties who would like to participate in HPC research.”

Radio Free HPC Previews the SC17 Plenary on Smart Cities

In this podcast, the Radio Free HPC team looks at Smart Cities. As the featured topic of this year’s SC17 Plenary, the Smart Cities initiative seeks to improve residents’ quality of life by using urban informatics and other technologies to make city services more efficient.

Mapping the Opportunities for Government, Academia, and Industry Engagement in HPC

Mark Sims (DoD) and Bob Sorensen from Hyperion Research gave this talk at the HPC User Forum in Milwaukee. Here, they demonstrate an exciting new tool that aims to map HPC centers across the USA.

Supporting Diverse HPC Workloads on a Single Cluster

High Performance Computing is extending its reach into new areas. Not only are modeling and simulation being used more widely, but deep learning and other high performance data analytics (HPDA) applications are becoming essential tools across many disciplines. This sponsored post from Intel explores how Plymouth University’s High Performance Computer Centre (HPCC) used Intel HPC Orchestrator to support diverse workloads as it recently deployed a new 1,500-core cluster.

How Can We Bring Apps to Racks?

In this special guest feature, Dr. Rosemary Francis from Ellexus describes why the customized nature of HPC is not a sustainable path forward for the next generation. “The downside is that many of our systems and tools are inaccessible to non-expert users. For example, deep learning is bringing more and more scientists closer towards HPC, but while they bring their knowledge, they also bring their high expectations for what they believe IT can do and not necessarily an understanding of how it works.”