GPUs Accelerate Population Distribution Mapping Around the Globe

With the Earth’s population at 7 billion and growing, understanding population distribution is essential to meeting societal needs for infrastructure, resources and vital services. This article highlights how NVIDIA GPU-powered AI is accelerating mapping and analysis of population distribution around the globe. “If there is a disaster anywhere in the world,” said Budhendra Bhaduri of Oak Ridge National Laboratory, “as soon as we have imaging we can create very useful information for responders, empowering recovery in a matter of hours rather than days.”

The AI Revolution: Unleashing Broad and Deep Innovation

For the AI revolution to move into the mainstream, cost and complexity must be reduced, so smaller organizations can afford to develop, train and deploy powerful deep learning applications. It’s a tough challenge. The following guest article from Intel explores how businesses can optimize AI applications and integrate them with their traditional workloads. 

Intel Parallel Studio XE 2018 Released

Intel has announced the release of Intel® Parallel Studio XE 2018, with updated compilers and developer tools. It is now available for download on a 30-day trial basis. “This week’s formal release of the fully supported product is notable for new features that further enhance the toolset for accelerating HPC applications.”

Solving AI Hardware Challenges

For many deep learning startups, buying AI hardware and a large number of powerful GPUs is not feasible, so many of them are turning to cloud GPU computing to crunch their data and run their algorithms. Katie Rivera of One Stop Systems explores some of the AI hardware challenges that can arise, as well as the new tools designed to tackle these issues.

The Internet of Things and Tuning

“Understanding how the pipeline slots are being utilized can greatly help in increasing the performance of the application. If pipeline slots are blocked for some reason, performance will suffer. Likewise, understanding the various cache misses can lead to a better organization of the data, which can increase performance while reducing memory-to-CPU latency.”
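
To make the cache point concrete, here is a minimal, hypothetical C sketch (not code from the article) contrasting an array-of-structures layout with a structure-of-arrays layout for the same update loop. The SoA version streams through contiguous memory, which typically means fewer cache misses and better pipeline-slot utilization when inspected in a profiler such as Intel VTune Amplifier.

#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Array-of-structures: updating x also drags y, z and mass into cache. */
typedef struct { double x, y, z, mass; } ParticleAoS;

/* Structure-of-arrays: updating x streams through one contiguous array. */
typedef struct { double *x, *y, *z, *mass; } ParticlesSoA;

static void update_aos(ParticleAoS *p, double dt) {
    for (size_t i = 0; i < N; i++)
        p[i].x += dt;            /* strided access: 32-byte stride */
}

static void update_soa(ParticlesSoA *p, double dt) {
    for (size_t i = 0; i < N; i++)
        p->x[i] += dt;           /* unit-stride access: cache friendly */
}

int main(void) {
    ParticleAoS *aos = calloc(N, sizeof *aos);
    ParticlesSoA soa = {
        calloc(N, sizeof(double)), calloc(N, sizeof(double)),
        calloc(N, sizeof(double)), calloc(N, sizeof(double))
    };

    update_aos(aos, 0.1);
    update_soa(&soa, 0.1);

    printf("aos x[0]=%f  soa x[0]=%f\n", aos[0].x, soa.x[0]);

    free(aos);
    free(soa.x); free(soa.y); free(soa.z); free(soa.mass);
    return 0;
}

Timing the two loops, or reading the hardware counters for them, is what shows how much the layout choice matters for a given workload.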

The Intel Scalable System Framework: Kick-Starting the AI Revolution

Like many other HPC workloads, deep learning is a tightly coupled application that alternates between compute-intensive number-crunching and high-volume data sharing. Intel explores how the Intel Scalable System Framework can serve as a high performance computing platform for deep learning workloads and more.
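
As an illustration of that alternating pattern (a hypothetical sketch, not taken from the article), the C program below mimics data-parallel training: each MPI rank does its compute-intensive work locally, then all ranks share and average their gradients with MPI_Allreduce before the next iteration would begin.

#include <mpi.h>
#include <stdio.h>

#define NPARAMS 4

/* Stand-in for the compute-intensive phase: pretend each rank computed
 * these gradients from its local mini-batch. */
static void compute_local_gradients(double *grad, int rank) {
    for (int i = 0; i < NPARAMS; i++)
        grad[i] = (double)(rank + 1) * (i + 1);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[NPARAMS], global[NPARAMS];
    compute_local_gradients(local, rank);

    /* High-volume data-sharing phase: sum gradients across all ranks. */
    MPI_Allreduce(local, global, NPARAMS, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Average; a real trainer would apply the update and repeat. */
    for (int i = 0; i < NPARAMS; i++)
        global[i] /= size;

    if (rank == 0)
        printf("averaged gradient[0] = %f\n", global[0]);

    MPI_Finalize();
    return 0;
}

Launched with mpirun across several nodes, every training step alternates between the local compute phase and this collective exchange, which is exactly the communication pattern a tightly coupled HPC fabric is built for.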

TensorFlow Deep Learning Optimized for Modern Intel Architectures

Researchers at Google and Intel recently collaborated to extract the maximum performance from Intel® Xeon and Intel® Xeon Phi processors running TensorFlow*, a leading deep learning and machine learning framework. This effort resulted in significant performance gains and leads the way for ensuring similar gains from the next generation of products from Intel. Optimizing Deep Neural Network (DNN) models in frameworks such as TensorFlow presents challenges not unlike those encountered with more traditional High Performance Computing applications for science and industry.

A Simpler Path to Reliable, Productive HPC

HPC is becoming a competitive requirement as high performance data analysis (HPDA) joins multi-physics simulation as table stakes for successful innovation across a growing range of industries and research disciplines. Yet complexity remains a very real hurdle for both new and experienced HPC users. Learn how new Intel products, including the Intel HPC Orchestrator, can simplify some of the complexities and challenges that arise in high performance computing environments.

Internode Programming With MPI and Intel Xeon Phi Processor

“While MPI was originally developed for general purpose CPUs and is widely used in the HPC space in this capacity, MPI applications can also be developed and then deployed with the Intel Xeon Phi Processor. With an understanding of the algorithms used in a specific application, tremendous performance can be achieved by using a combination of OpenMP and MPI.”
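
As a rough sketch of that hybrid approach (illustrative only, not code from the article), the following C program uses MPI for internode communication and OpenMP for threading within each rank; on an Intel Xeon Phi processor the OpenMP loop would spread across the many cores of a single node.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Ask for an MPI library that tolerates threaded ranks. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Intranode parallelism: OpenMP threads build the rank's partial sum. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000000; i++)
        local_sum += (double)i * (rank + 1);

    /* Internode parallelism: combine the per-rank results with MPI. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks x %d threads, global sum = %e\n",
               size, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}

Built with an MPI compiler wrapper and OpenMP enabled (for example mpiicc -qopenmp with the Intel tools), this is typically launched with one or a few ranks per node, while OMP_NUM_THREADS controls how many cores each rank uses.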

Unlike Oil and Water, Legacy and Cloud Mix Well

Despite the cloud hype, legacy HPC apps are alive and well. While it may seem like the two can’t mix, the process of bursting these applications to the cloud is bringing these staples to the cloud table. Avoiding rewrites can bring immediate HPC cloud benefits to organizations big and small. “Many technologies and solutions now available allow for the functional and highly efficient coordination and connection between legacy applications and the well-known advantages of the cloud.”