Intel, NSF Name Winners of Wireless Machine Learning Research Funding

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects into ultra-dense wireless systems that meet the throughput, latency and reliability requirements of future applications – including distributed machine learning computations over wireless edge networks.

Lenovo to Deploy 17-Petaflop Supercomputer at KIT in Germany

Today Lenovo announced a contract for a 17-petaflop supercomputer at Karlsruhe Institute of Technology (KIT) in Germany. Called HoreKa, the system will come online this fall and will be handed over to the scientific communities by summer 2021. The procurement contract is reportedly on the order of EUR 15 million. “The result is an innovative hybrid system with almost 60,000 next-generation Intel Xeon Scalable processor cores and 220 terabytes of main memory, as well as 740 NVIDIA A100 Tensor Core GPUs. A non-blocking NVIDIA Mellanox InfiniBand HDR network with 200 Gbit/s per port is used for communication between the nodes. Two Spectrum Scale parallel file systems offer a total storage capacity of more than 15 petabytes.”

MemVerge Introduces Big Memory Computing

Today MemVerge introduced Big Memory Computing, a new category of data center architecture in which all applications run in memory. Big Memory Computing combines DRAM, persistent memory and Memory Machine software technologies to make memory abundant, persistent and highly available. “With MemVerge’s Memory Machine technology and Intel’s Optane DC persistent memory, enterprises will be able to more efficiently and quickly gain insights from enormous amounts of data in near-real time.”

Novel Liquid Cooling Technologies for HPC

In this special guest feature, Robert Roe from Scientific Computing World writes that increasingly power-hungry and high-density processors are driving the growth of liquid and immersion cooling technology. “We know that CPUs and GPUs are going to get denser and we have developed technologies that are available today which support a 500-watt chip the size of a V100 and we are working on the development of boiling enhancements that would allow us to go beyond that.”

Agenda Posted for OpenFabrics Virtual Workshop

The OpenFabrics Alliance (OFA) has opened registration for its OFA Virtual Workshop, taking place June 8-12, 2020. This virtual event will provide fabric developers and users an opportunity to discuss emerging fabric technologies, collaborate on future industry requirements, and address today’s challenges. “The OpenFabrics Alliance is committed to accelerating the development of high performance fabrics.”

TYAN Launches AI-Optimized Servers Powered by NVIDIA V100S GPUs

Today TYAN launched its latest GPU server platforms, which support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications. “AI is increasingly making its way into data centers, and more organizations plan to invest in AI infrastructure that supports rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN’s GPU server platforms with NVIDIA V100S GPUs as the compute building block enable enterprises to power their AI infrastructure deployments and help solve the most computationally intensive problems.”

Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative, and DPC++, bring much-needed portable performance to today’s developers.”

Video: New Dell EMC PowerStore Platform is Performance-Optimized with Intel Optane SSDs

Today Dell Technologies launched Dell EMC PowerStore, a modern infrastructure platform built from the ground up with superior technology and expertise to address the challenges of the data era. “PowerStore is seven times faster and three times more responsive than previous Dell EMC midrange storage arrays, because of its end-to-end NVMe design and support for Storage Class Memory as persistent storage powered by dual port Intel Optane SSDs.”

How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented.”
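The data-parallel pattern mentioned above can be sketched in a few lines: each worker computes gradients on its own shard of the data, and the gradients are averaged before the update, which is the role an MPI allreduce plays in MPI-driven deep learning. This is a hypothetical NumPy illustration of the concept only, not the MPI-based implementations discussed in the talk:

```python
import numpy as np

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w
    return 2 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, shards, lr=0.1):
    # Each (X, y) shard stands in for one worker's local batch;
    # averaging the per-worker gradients mimics an allreduce.
    grads = [loss_grad(w, X, y) for X, y in shards]
    avg_grad = np.mean(grads, axis=0)
    return w - lr * avg_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = [(X[i::4], y[i::4]) for i in range(4)]  # 4 "workers"

w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, shards)
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, so the workers converge to the same solution a single process would; in a real distributed run the averaging step is where interconnect performance (and hence MPI tuning) matters.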

Video: Ayar Labs Pushes Moore’s Law Through Optical I/O Technology

In this video, Mark Wade from Ayar Labs explains how the company’s optical I/O solution will address the critical computing challenges of efficiency, density, and distance for next-gen system architectures. “Our patented approach uses industry standard, cost-effective silicon processing techniques to develop high speed, high density, low power optical based interconnect ‘chiplets’ and multi-wavelength lasers to replace traditional electrical based I/O.”