

New, Open DPC++ Extensions Complement SYCL and C++

In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

Lenovo Launches ThinkSystem Servers with GPU Support, Increased NVMe Storage

Lenovo this morning launched two new ThinkSystem servers, the SR860 V2 and SR850 V2, built on 3rd Gen Intel Xeon Scalable processors with Intel Deep Learning Boost, and introduced GPU support on the SR860 V2 (four double-wide 300W GPUs or eight single-wide GPUs). The servers also offer increased NVMe storage capacity for handling AI workloads, high-end VDI deployments, and data analytics.

Interview: Mark Papermaster, CTO and EVP, Technology and Engineering, AMD

In this interview, Mark Papermaster, CTO and EVP, Technology and Engineering from AMD describes the company’s presence in the HPC space along with new trends in the industry. At a higher level, Mark also offers his views of the semiconductor industry in general as well as areas of innovation that AMD plans to cultivate. The discussion then turns to the exascale era of computing.

From Forty Days to Sixty-five Minutes without Blowing Your Budget Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate the same resources to other servers is the obvious solution to a growing problem: AI and HPC applications are changing at an incredible and still-accelerating rate, triggering the need for ever-faster GPUs and FPGAs to take advantage of new software updates and newly developed applications.
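The attach-run-reallocate cycle described above can be sketched as a simple resource-pool model. This is a hypothetical illustration of the composable-infrastructure idea, not GigaIO's actual API; all class and device names here are our own.

```python
# Hypothetical sketch of composable infrastructure: a shared pool of
# accelerators is attached to one server, used for a job, then released
# and reattached to another server. Names are illustrative only.

class Server:
    def __init__(self, name):
        self.name = name
        self.gpus = set()

class ResourcePool:
    def __init__(self, gpus):
        self.free = set(gpus)

    def attach(self, server, count):
        """Move `count` free GPUs from the pool onto a server."""
        if count > len(self.free):
            raise RuntimeError("not enough free GPUs")
        granted = {self.free.pop() for _ in range(count)}
        server.gpus |= granted
        return granted

    def release(self, server):
        """Return all of a server's GPUs to the pool."""
        self.free |= server.gpus
        server.gpus = set()

pool = ResourcePool({"gpu0", "gpu1", "gpu2", "gpu3"})
a, b = Server("a"), Server("b")

pool.attach(a, 3)   # run a job on server a with 3 GPUs
pool.release(a)     # job done: GPUs go back to the pool
pool.attach(b, 4)   # reallocate all 4 GPUs to server b
```

The point of the pattern is that the accelerators outlive any single server assignment, so a fixed hardware budget can serve workloads whose GPU/FPGA needs keep shifting.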

Rugged COTS Platform Takes On Fast-Changing Needs of Self-Driving Trucks

This white paper by Advantech, “Rugged COTS Platform Takes On Fast-Changing Needs of Self-Driving Trucks,” discusses how the fast-changing needs of self-driving trucks are forcing compute platforms to evolve. Advantech and Crystal Group are teaming up to power that evolution, with AV trends, compute requirements, and a rugged COTS philosophy converging to enable breakthrough innovation in self-driving truck designs.

Intelligent Video Analytics Pushes Demand for High Performance Computing at the Edge

In this special guest feature, Tim Miller, VP of Product Marketing at One Stop Systems (OSS), writes that his company is addressing the common requirements for video analytic applications with its AI on the Fly® building blocks. AI on the Fly is defined as moving datacenter levels of HPC and AI compute capabilities to the edge.

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that accelerators are often required for new, highly compute-intensive workloads. Accelerators speed up computation and allow AI and ML algorithms to run in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.

Optimizing in a Heterogeneous World is (Algorithms x Devices)

In this guest article, our friends at Intel discuss why CPUs prove better for some important deep learning workloads (keep your GPUs handy, though!). Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device will win all the time, so we need to constantly reassess our choices and assumptions.
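The "algorithms x devices" idea above can be sketched as a tiny selection loop. The runtimes below are hypothetical placeholders, not Intel's measurements; in practice each (algorithm, device) pair would be profiled on real hardware.

```python
import itertools

# Hypothetical cost table: runtime in seconds for each (algorithm, device)
# pair. Real numbers would come from profiling actual workloads.
runtime = {
    ("dense_gemm", "gpu"): 0.8,
    ("dense_gemm", "cpu"): 3.1,
    ("sparse_solve", "gpu"): 2.4,
    ("sparse_solve", "cpu"): 1.2,  # sparse, branchy code can favor the CPU
}

def best_platform(algorithms, devices):
    """Return the (algorithm, device) pair with the lowest runtime."""
    return min(itertools.product(algorithms, devices), key=lambda p: runtime[p])

print(best_platform(["dense_gemm"], ["cpu", "gpu"]))    # ('dense_gemm', 'gpu')
print(best_platform(["sparse_solve"], ["cpu", "gpu"]))  # ('sparse_solve', 'cpu')
```

The dense kernel lands on the GPU while the sparse one lands on the CPU, which is the article's point: the winner depends on the algorithm-device pairing, not on the device alone.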

A Liquid Cooling Petascale Supercomputing Site and GROMACS Workload Optimization Benchmark

Accelerated computing has been viewed as a revolutionary breakthrough for AI and HPC workloads, with significant computing power from GPUs paired with CPUs as the major contributor. Our friends over at Quanta Cloud Technology (QCT) provide QuantaGrid D52G-4U servers with eight NVLink GPUs and a liquid cooling platform, successfully adopted by the National Center of High-performance Computing (NCHC) in Taiwan for its Taiwania-II project, which ranked 23rd on the TOP500 as of June 2019.

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation, we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. Focus is given to the latest developments in neural networks and deep learning systems, in particular a neural network architecture called the transformer. Researchers have shown that transformer networks are particularly well suited for parallelization on GPU-based systems.
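The parallelization claim rests on the fact that a transformer's self-attention is dominated by dense matrix multiplications over all tokens at once, exactly the work GPUs excel at, unlike the token-by-token recurrence of an RNN. Below is a minimal sketch of scaled dot-product attention using NumPy for illustration; the shapes and names are ours, not from the whitepaper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Every step is a dense matrix multiply or an elementwise op over the
    whole sequence, so the work maps naturally onto GPU hardware.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (seq_len, seq_len)
    return softmax(scores) @ V                       # (seq_len, d_k)

rng = np.random.default_rng(0)
seq_len, d_k = 6, 4
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because the whole `(seq_len, seq_len)` score matrix is computed in one shot rather than sequentially, longer sequences simply mean bigger matrix multiplies, which is why GPU choice (memory and matmul throughput) matters so much for transformer workloads.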