
Video: NVIDIA Launches Ampere Data Center GPU

In this video, NVIDIA CEO Jensen Huang announces the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100. The company's fastest GPU ever is now in full production and shipping to customers worldwide. “NVIDIA A100 GPU is a 20X AI performance leap and an end-to-end machine learning accelerator – from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.”

New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics

Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. “DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects.”

Novel Liquid Cooling Technologies for HPC

In this special guest feature, Robert Roe from Scientific Computing World writes that increasingly power-hungry and high-density processors are driving the growth of liquid and immersion cooling technology. “We know that CPUs and GPUs are going to get denser and we have developed technologies that are available today which support a 500-watt chip the size of a V100 and we are working on the development of boiling enhancements that would allow us to go beyond that.”

TYAN Launches AI-Optimized Servers Powered by NVIDIA V100S GPUs

Today TYAN launched its latest GPU server platforms, which support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications. “An increase in the use of AI is infusing into data centers. More organizations plan to invest in AI infrastructure that supports the rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN’s GPU server platforms with NVIDIA V100S GPUs as the compute building block enables enterprise to power their AI infrastructure deployment and helps to solve the most computationally-intensive problems.”

Podcast: ColdQuanta Serves Up Some Bose-Einstein Condensate

“ColdQuanta is headed by an old pal of ours, Bo Ewald, and has just come out of stealth mode into the glaring spotlight of RadioFreeHPC. When you freeze a gas of bosons at low density to near absolute zero, you start to get macroscopic access to microscopic quantum mechanical effects, which is a pretty big deal. Once the quantum mechanics starts, you can control it, change it, and get computations out of it. The secret sauce for ColdQuanta is served cold, all the way down into the micro-kelvins and kept very locally, which makes it easier to get your condensate.”

NERSC Finalizes Contract for Perlmutter Supercomputer

NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]

Supercomputing the San Andreas Fault with CyberShake

With help from DOE supercomputers, a USC-led team expands models of the fault system beneath its feet, aiming to predict its outbursts. For their 2020 INCITE work, SCEC scientists and programmers will have access to 500,000 node hours on Argonne’s Theta supercomputer, which delivers as much as 11.69 petaflops. The team is using Theta “mostly for dynamic earthquake ruptures,” Goulet says. “That is using physics-based models to simulate and understand details of the earthquake as it ruptures along a fault, including how the rupture speed and the stress along the fault plane change.”

NVIDIA Completes Acquisition of Mellanox

NVIDIA today announced the completion of its acquisition of Mellanox for a transaction value of $7 billion. “With Mellanox, the new NVIDIA has end-to-end technologies from AI computing to networking, full-stack offerings from processors to software, and significant scale to advance next-generation data centers. Our combined expertise, supported by a rich ecosystem of partners, will meet the challenge of surging global demand for consumer internet services, and the application of AI and accelerated data science from cloud to edge to robotics.”

New Weka AI Framework to Accelerate Edge-to-Core-to-Cloud Data Pipelines

Today WekaIO introduced Weka AI, a transformative storage solution framework underpinned by the Weka File System (WekaFS) that enables accelerated edge-to-core-to-cloud data pipelines. Weka AI is a framework of customizable reference architectures (RAs) and software development kits (SDKs), built with leading technology alliances like NVIDIA, Mellanox, and others in the Weka Innovation Network (WIN). “GPUDirect Storage eliminates IO bottlenecks and dramatically reduces latency, delivering full bandwidth to data-hungry applications,” said Liran Zvibel, CEO and Co-Founder, WekaIO.

NVIDIA Receives Approval to Proceed with Mellanox Acquisition

Today NVIDIA announced that it has received approval from all necessary authorities to proceed with its planned acquisition of Mellanox, as announced in March 2019. “This exciting transaction would unite two HPC industry leaders and strengthen the combined company’s ability to create data-centric system architectures for the convergence of the HPC and hyperscale markets around AI and other HPDA tasks,” said Steve Conway from Hyperion Research.