1000x Faster Deep-Learning at Petascale Using Intel Xeon Phi Processors

A cumulative effort over several years to scale the training of deep-learning neural networks has resulted in the first demonstration of petascale deep-learning training performance, and in the delivery of that performance on real science problems. The result reflects the combined efforts of NERSC (National Energy Research Scientific Computing Center), Stanford, and Intel to solve real-world use cases rather than simply report on performance benchmarks.

HPC Software Stacks: Making High Performance Computing More Accessible

More than 200 researchers at the University of Pisa use high performance computing (HPC) systems for quantum chemistry, nanophysics, genome sequencing, engineering simulation, and areas such as big data analysis. The University’s IT Center has moved to a pre-integrated HPC software stack designed to simplify software installation and maintenance. This sponsored post from Intel shows how a pre-integrated, validated, and supported HPC software stack allows the University of Pisa to focus on research.

Simplifying HPC Software Stack Management

While most of the fundamental HPC system software building blocks are now open source, dealing with the sheer number of components and their inherently complex interdependencies has created a barrier to adoption of HPC for many organizations. This is the first article in a four-part series that explores using Intel HPC Orchestrator to solve HPC software stack management challenges.

Accelerating Quantum Chemistry for Drug Discovery

In the pharmaceutical industry, drug discovery is a long and expensive process. This sponsored post from Nvidia explores how the University of Florida and the University of North Carolina developed the ANAKIN-ME neural network engine to produce computationally fast quantum mechanical simulations with high accuracy and at very low cost, speeding drug discovery and exploration.
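
For readers curious how such a neural network potential works in principle, here is a minimal Python sketch of the underlying idea: each atom's local environment is encoded as a descriptor vector, a small network predicts a per-atom energy, and the molecular energy is the sum of those contributions. This is a toy illustration with assumed shapes and random stand-in weights, not the actual ANAKIN-ME descriptors or architecture.

    # Toy sketch of a per-atom neural network potential (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def atomic_energy(descriptor, w1, b1, w2, b2):
        """One hidden tanh layer -> scalar energy contribution for one atom."""
        hidden = np.tanh(descriptor @ w1 + b1)
        return float(hidden @ w2 + b2)

    # Hypothetical trained parameters (random stand-ins for this sketch).
    n_features, n_hidden = 32, 16
    w1 = rng.normal(size=(n_features, n_hidden))
    b1 = rng.normal(size=n_hidden)
    w2 = rng.normal(size=n_hidden)
    b2 = rng.normal()

    # One local-environment descriptor per atom (5 atoms in this toy molecule).
    descriptors = rng.normal(size=(5, n_features))

    # Total molecular energy is the sum of per-atom contributions.
    total_energy = sum(atomic_energy(d, w1, b1, w2, b2) for d in descriptors)
    print(f"Predicted energy (arbitrary units): {total_energy:.3f}")

Because inference here is a handful of cheap feed-forward passes per molecule, an approach along these lines can approximate quantum mechanical energies far faster than solving the underlying equations directly, which is the speedup the article describes.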

Supporting Diverse HPC Workloads on a Single Cluster

High performance computing is extending its reach into new areas. Not only are modeling and simulation being used more widely, but deep learning and other high performance data analytics (HPDA) applications are becoming essential tools across many disciplines. This sponsored post from Intel explores how Plymouth University’s High Performance Computer Centre (HPCC) used Intel HPC Orchestrator to support diverse workloads as it recently deployed a new 1,500-core cluster.

Common Myths Stalling Organizations From Cloud Adoption

Cloud adoption is accelerating rapidly, easing the burden of managing data-rich workloads for enterprises big and small. Yet common myths and misconceptions about the hybrid cloud are delaying enterprises from reaping the benefits. “In this article, we will debunk five of the top most commonly believed myths that keep companies from strengthening their infrastructure with a hybrid approach.”

GPUs Accelerate Population Distribution Mapping Around the Globe

With the Earth’s population at 7 billion and growing, understanding population distribution is essential to meeting societal needs for infrastructure, resources and vital services. This article highlights how NVIDIA GPU-powered AI is accelerating mapping and analysis of population distribution around the globe. “If there is a disaster anywhere in the world,” said Bhaduri, “as soon as we have imaging we can create very useful information for responders, empowering recovery in a matter of hours rather than days.”

Solving AI Hardware Challenges

For many deep learning startups, buying AI hardware and a large quantity of powerful GPUs is not feasible, so many of these companies are turning to cloud GPU computing to crunch their data and run their algorithms. Katie Rivera of One Stop Systems explores some of the AI hardware challenges that can arise, as well as the new tools designed to tackle these issues.

The Intel Scalable System Framework: Kick-Starting the AI Revolution

Like many other HPC workloads, deep learning is a tightly coupled application that alternates between compute-intensive number-crunching and high-volume data sharing. Intel explores how the Intel Scalable System Framework can serve as the foundation of a high performance computing platform that runs deep learning workloads and more.
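
To make that compute/communicate pattern concrete, here is a minimal sketch of synchronous data-parallel training using mpi4py and NumPy: each rank computes gradients on its own data shard (the number-crunching phase), then all ranks average gradients with an allreduce (the data-sharing phase). The linear model and data are invented for illustration; a production deep learning stack would use an optimized framework and interconnect.

    # Minimal sketch of synchronous data-parallel training.
    # Run with, e.g.: mpirun -n 4 python train_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(rank)  # each rank sees different data
    w = np.zeros(10)                   # shared linear-model weights
    lr = 0.01

    for step in range(100):
        # Compute phase: local gradient of squared error on this rank's batch.
        X = rng.normal(size=(64, 10))
        y = X @ np.ones(10) + 0.1 * rng.normal(size=64)
        grad_local = 2.0 * X.T @ (X @ w - y) / len(y)

        # Communication phase: sum gradients across all ranks, then average,
        # so every rank applies the identical update and weights stay in sync.
        grad_global = np.empty_like(grad_local)
        comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
        w -= lr * grad_global / size

    if rank == 0:
        print("final weights:", np.round(w, 2))

The allreduce in every step is exactly the high-volume data sharing the article refers to, and it is why deep learning training at scale depends on the fabric and system balance as much as on raw compute.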

A Simpler Path to Reliable, Productive HPC

HPC is becoming a competitive requirement as high performance data analytics (HPDA) joins multi-physics simulation as table stakes for successful innovation across a growing range of industries and research disciplines. Yet complexity remains a very real hurdle for both new and experienced HPC users. Learn how new Intel products, including Intel HPC Orchestrator, can simplify some of the complexities and challenges that arise in high performance computing environments.