Supermicro steps up with NVIDIA A100 GPU-Powered Systems

Today Supermicro announced two new AI systems based on NVIDIA A100 GPUs. NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics. “Optimized for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs. The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand. The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards.”
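
As a rough illustration of the kind of all-to-all GPU communication such an NVSwitch-connected board is built for, the short PyTorch sketch below runs an NCCL all-reduce across every local GPU. It is a generic multi-GPU example, not Supermicro or NVIDIA sample code, and the rendezvous address, port, and tensor size are arbitrary placeholders.

```python
# Minimal PyTorch/NCCL sketch: an all-reduce across all local GPUs is the
# kind of all-to-all collective that an NVSwitch-connected HGX A100 board
# accelerates. Illustrative only; assumes PyTorch with CUDA and NCCL.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")   # placeholder rendezvous
    os.environ.setdefault("MASTER_PORT", "29500")       # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each GPU holds its own tensor; all_reduce sums it across every GPU,
    # exercising the GPU-to-GPU links (NVLink/NVSwitch when present).
    t = torch.full((1024, 1024), float(rank), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # e.g. 8 on an HGX A100 board
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```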

Video: NVIDIA Launches Ampere Data Center GPU

In this video, NVIDIA CEO Jensen Huang announces the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100. The company’s fastest GPU ever is now in full production and shipping to customers worldwide. “NVIDIA A100 GPU is a 20X AI performance leap and an end-to-end machine learning accelerator – from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.”

New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics

Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. “DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects.”

AMD Rolls out Radeon Pro VII Workstation Graphics Card

Today AMD announced the Radeon Pro VII workstation graphics card for broadcast and engineering professionals, delivering exceptional graphics and computational performance, as well as innovative features. The new graphics card is designed to power today’s most demanding broadcast and media projects, complex computer-aided engineering (CAE) simulations, and the development of HPC applications that enable scientific discovery on AMD-powered supercomputers.

Podcast: Streamlined Data Science through Jupyter Lab and Jupyter Enterprise Gateway

“Jupyter is a free, open-source, interactive web tool known as a computational notebook, which researchers can use to combine software code, computational output, explanatory text and multimedia resources in a single document. This podcast looks at how the Bright Jupyter integration makes it easy for customers to use Bright for Data Science through JupyterLab notebooks, and allows users to run their notebooks through a supported HPC scheduler, Kubernetes, or on the server running JupyterHub.”
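
For a concrete picture of the Gateway side of such a setup, the snippet below is a minimal sketch of a Jupyter client configuration that points a local JupyterLab front end at a remote Enterprise Gateway, so kernels start on the cluster rather than on the user’s machine. The gateway URL is a placeholder, and this shows only the generic Gateway client settings, not Bright’s specific integration.

```python
# jupyter_server_config.py -- minimal sketch (placeholder hostname) that
# points a local JupyterLab front end at a remote Jupyter Enterprise Gateway,
# so kernels are launched on the cluster (e.g. through an HPC scheduler or
# Kubernetes) instead of on the local machine.
c = get_config()  # injected by Jupyter when it loads this config file

# Route kernel start/stop requests to the Enterprise Gateway.
c.GatewayClient.url = "http://gateway.example.com:8888"

# Allow extra time for kernels that wait in a batch queue before starting.
c.GatewayClient.request_timeout = 120.0
```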

Novel Liquid Cooling Technologies for HPC

In this special guest feature, Robert Roe from Scientific Computing World writes that increasingly power-hungry and high-density processors are driving the growth of liquid and immersion cooling technology. “We know that CPUs and GPUs are going to get denser and we have developed technologies that are available today which support a 500-watt chip the size of a V100 and we are working on the development of boiling enhancements that would allow us to go beyond that.”

TYAN Launches AI-Optimized Servers Powered by NVIDIA V100S GPUs

Today TYAN launched their latest GPU server platforms that support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications. “The use of AI is increasingly infusing into data centers. More organizations plan to invest in AI infrastructure that supports rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN’s GPU server platforms with NVIDIA V100S GPUs as the compute building block enable enterprises to power their AI infrastructure deployments and help solve the most computationally intensive problems.”

A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations

Alexandros Ziogas from ETH Zurich gave this talk at Supercomputing Frontiers Europe. “The computational efficiency of a state-of-the-art ab initio quantum transport (QT) solver, capable of revealing the coupled electro-thermal properties of atomically resolved nano-transistors, has been improved by up to two orders of magnitude through a data-centric reorganization of the application. The approach yields coarse- and fine-grained data-movement characteristics that can be used for performance and communication modeling, communication avoidance, and dataflow transformations.”
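
The toy NumPy sketch below is only meant to convey the flavor of a dataflow transformation in miniature, restructuring a computation so that a large intermediate array is not written out and read back. It is not the quantum transport solver or the authors’ framework, and the array size and arithmetic are invented for illustration.

```python
# Toy sketch of a dataflow transformation (illustrative only, not the QT
# solver): reuse a buffer across element-wise stages so a large intermediate
# array is never materialized, reducing traffic between compute and memory.
import numpy as np

N = 4_000_000
x = np.random.rand(N)

def staged(x):
    # Two passes: the intermediate 'a' is written to memory and read back.
    a = np.sin(x) * 2.0
    return a + np.cos(x)

def fused(x):
    # Reuse one output buffer in place of the stored intermediate.
    out = np.empty_like(x)
    np.sin(x, out=out)
    out *= 2.0
    out += np.cos(x)
    return out

# Same result, with less data movement for the intermediate stage.
assert np.allclose(staged(x), fused(x))
```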

NERSC Finalizes Contract for Perlmutter Supercomputer

NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]

The Incorporation of Machine Learning into Scientific Simulations at LLNL

Katie Lewis from Lawrence Livermore National Laboratory gave this talk at the Stanford HPC Conference. “Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness.”
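
One common pattern in this space is the surrogate model: a regressor trained on pairs of simulation inputs and outputs so that cheap predictions can stand in for expensive runs. The scikit-learn sketch below is a generic illustration of that idea; the simulate function, parameter ranges, and network size are made-up stand-ins, not LLNL codes or results.

```python
# Generic surrogate-model sketch (illustrative only; not an LLNL code):
# fit a regressor to (input parameters -> simulation output) pairs so that
# cheap predictions can stand in for expensive simulation runs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(params: np.ndarray) -> np.ndarray:
    # Toy stand-in for an expensive simulation: a smooth nonlinear response.
    return np.sin(params[:, 0]) * np.exp(-params[:, 1]) + 0.1 * params[:, 2]

# "Run" the simulator over a modest sample of the input space.
X = rng.uniform(0.0, 2.0, size=(500, 3))
y = simulate(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
surrogate.fit(X_train, y_train)

# The surrogate now answers "what would the simulation return?" cheaply;
# the held-out score gives a rough sense of where it can be trusted.
print("R^2 on held-out simulation runs:", surrogate.score(X_test, y_test))
```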