

Optimizing in a Heterogeneous World is (Algorithms x Devices)

In this guest article, our friends at Intel discuss why CPUs prove better for some important deep learning workloads, and why you should still keep your GPUs handy. Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device wins all the time, so we need to constantly reassess our choices and assumptions.
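
To make that assessment concrete, here is a minimal sketch, assuming PyTorch and an optional CUDA device, that times the same kernel on every available device and reports the fastest. The operation and sizes are illustrative only and are not taken from the article.

import time
import torch

def time_matmul(device, n=2048, repeats=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up so one-time initialization cost is not measured.
    torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

devices = [torch.device("cpu")]
if torch.cuda.is_available():
    devices.append(torch.device("cuda"))

timings = {str(d): time_matmul(d) for d in devices}
print(timings, "-> fastest:", min(timings, key=timings.get))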

Data Center Transformation: Why a Workload-Driven and Scalable Architecture Matters

In this whitepaper, QCT (Quanta Cloud Technology) explains why a workload-driven and scalable architecture matters for data center transformation. The company's QCT Platform on Demand (QCT POD) solution empowers enterprises to kickstart their transformation journey, combining advanced technology with a unique user experience to deliver better performance and deeper insights. With its flexibility and scalability, QCT POD lets enterprises address a broader range of HPC, deep learning, and data analytics workloads across a variety of applications.

Progress and Challenges for the Use of Deep Learning to Improve Weather Forecasts

Peter Dueben from ECMWF gave this talk at the UK HPC Conference. “I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future.”

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. Focus is given to the latest developments in neural networks and deep learning systems, in particular a neural network architecture called transformers. Researchers have shown that transformer networks are particularly well suited for parallelization on GPU-based systems.
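
As a rough illustration of why that is, the sketch below (not taken from the whitepaper, and assuming PyTorch) implements scaled dot-product attention, the core operation of a transformer layer, as batched matrix multiplications, which is exactly the kind of workload GPUs parallelize well across heads and sequence positions.

import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v)

device = "cuda" if torch.cuda.is_available() else "cpu"
batch, heads, seq_len, head_dim = 8, 12, 128, 64
q, k, v = (torch.randn(batch, heads, seq_len, head_dim, device=device) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([8, 12, 128, 64])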

Designing Scalable HPC, Deep Learning, Big Data, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the UK HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, Big Data and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (Xeon, ARM and OpenPower), high-performance networks, and GPGPUs (including GPUDirect RDMA).”
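
As a hedged illustration of the MPI+CUDA pattern the talk refers to, the sketch below assumes mpi4py, CuPy, and a CUDA-aware MPI build (the kind that can exploit GPUDirect RDMA), which lets a GPU buffer be handed directly to a point-to-point call without staging through host memory. The buffer size and script name are illustrative.

# Run with two ranks, e.g.: mpirun -np 2 python send_recv.py
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Allocate the message directly in GPU memory.
buf = cp.arange(1024, dtype=cp.float32) if rank == 0 else cp.empty(1024, dtype=cp.float32)

if rank == 0:
    comm.Send(buf, dest=1, tag=0)      # device buffer passed straight to MPI
elif rank == 1:
    comm.Recv(buf, source=0, tag=0)
    print("rank 1 received, sum =", float(buf.sum()))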

Bright Computing Powers Deep Learning at University of North Dakota

The University of North Dakota is using Bright Computing software to unify its newly designed clustered infrastructure, giving the university more versatility along with access to the cloud and to deep learning resources. “Bright software makes it easy for us to train new staff to deploy, provision, and manage our clustered infrastructure for HPC, Data Science, and Cloud; all integrated, and all from a single point of control. We can deploy Ceph nodes, install standard HPC products, roll out cloud features, and introduce data science packages, all from a single interface with robust enterprise-grade support.”

Reflections on Deep Learning, DNNs, and AI on Wall Street

In this special guest feature, Bob Fletcher from Verne Global reflects on the recent HPC and AI on Wall Street conference. “Almost every organization at the event talked about their use of machine learning, and some indicated what would make them extend it into full-scale deep learning. The most important criterion was the appropriateness of the DNN training techniques.”

Accelerating High-Resolution Weather Models with Deep-Learning Hardware

Sam Hatfield from the University of Oxford gave this talk at the PASC19 conference. “In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double-, single- and half-precision. In particular, we investigate the potential use of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier mainly developed for use in deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models.”
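
As a rough sketch of the idea, assuming PyTorch, the example below applies a Legendre-transform-like matrix multiply in reduced precision and compares it against a double-precision reference. The matrices are random placeholders, not the actual IFS transform; on a recent NVIDIA GPU the half-precision multiply can map onto Tensor Cores.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n = 1024
legendre_like = torch.randn(n, n, dtype=torch.float64, device=device)
fields = torch.randn(n, 64, dtype=torch.float64, device=device)

reference = legendre_like @ fields  # double-precision reference

# Use float16 on GPU (Tensor Core eligible); fall back to bfloat16 on CPU.
low = torch.float16 if device == "cuda" else torch.bfloat16
approx = (legendre_like.to(low) @ fields.to(low)).double()

rel_error = ((approx - reference).norm() / reference.norm()).item()
print(f"relative error of reduced-precision transform: {rel_error:.2e}")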

GigaIO Steps Up with PCIe Gen 4 Interconnect for HPC

In this video from ISC 2019, Marc Lehrer from GigaIO describes the company’s innovative HPC interconnect technology based on PCIe Gen 4. “For your most demanding workloads, you want the fastest time to solution. The GigaIO hyper-performance network breaks the constraints of old architectures, opening up new configuration possibilities that radically reduce system cost and protect your investment by enabling you to easily adopt new compute or business processes.”