

Microway Deploys NVIDIA DGX POD-based AI Supercomputer at MSOE

Microway recently deployed an NVIDIA DGX POD-based supercomputer for education and applied research at the Milwaukee School of Engineering (MSOE). Called “Rosie,” the supercomputer forms the centerpiece of the university’s new computer science program and will support an expansion of deep learning and AI education designed to permeate the institution. “It features three racks of NVIDIA DGX-1 AI systems with NVIDIA V100 Tensor Core GPU accelerators; twenty Microway NumberSmasher Xeon + NVIDIA T4 GPU teaching compute nodes; and access to NGC.”

NVIDIA Powers Rosie Supercomputer at MSOE

An NVIDIA GPU-powered supercomputer named “Rosie” is at the heart of a new computational science facility at the Milwaukee School of Engineering. “Housed in a glass-walled area within the newly constructed four-story Diercks Hall, the new NVIDIA-powered AI supercomputer includes three NVIDIA DGX-1 pods, each with eight NVIDIA V100 Tensor Core GPUs, and 20 servers each with four NVIDIA T4 GPUs. The nodes are joined together by Mellanox networking fabric and share 200TB of network-attached storage. Rare among supercomputers in higher education, the system —which provides 8.2 petaflops of deep learning performance — will be used for teaching undergrad classes.”
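The quoted 8.2 petaflops figure can be reproduced from the hardware described above. The per-GPU numbers below are NVIDIA’s published half-precision peaks (not stated in the article): roughly 125 TFLOPS per V100 with Tensor Cores and 65 TFLOPS per T4. A minimal sketch of the arithmetic, under those assumptions:

```python
# Sanity-check Rosie's quoted 8.2 PF of deep learning performance.
# Assumed per-GPU peaks (NVIDIA-published, not from the article):
V100_TFLOPS = 125   # V100 Tensor Core half-precision peak
T4_TFLOPS = 65      # T4 half-precision peak

dgx1_systems = 3     # three DGX-1 pods
v100_per_dgx1 = 8    # eight V100 GPUs per DGX-1
teaching_nodes = 20  # twenty servers
t4_per_node = 4      # four T4 GPUs each

total_tflops = (dgx1_systems * v100_per_dgx1 * V100_TFLOPS
                + teaching_nodes * t4_per_node * T4_TFLOPS)
print(total_tflops / 1000, "petaflops")  # -> 8.2 petaflops
```

The DGX-1 nodes contribute 3.0 PF and the eighty T4 GPUs the remaining 5.2 PF, matching the article’s figure.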

Video: Unboxing the NVIDIA DGX-1 Supercomputer at Georgia Tech

In this video, Oded Green from NVIDIA unboxes a DGX-1 supercomputer at the College of Computing Data Center at Georgia Tech. “And while the DGX-1 arriving at Georgia Tech for student-use is exciting enough, there is cause for more celebration as a DGX Station also arrived this year as part of a new NVIDIA Artificial Intelligence Lab (NVAIL) grant awarded to CSE. The NVAIL grant focuses on developing multi-GPU graph analytics and the DGX station is constructed specifically for data science and artificial intelligence development.”

Red Hat Teams with NVIDIA to Accelerate Machine Learning in the Cloud

Today Red Hat announced it has deepened its alliance with NVIDIA to accelerate the enterprise adoption of AI, machine learning and data analytics workloads in production environments. To move things along, Red Hat is launching an early access program for prospective customers. “High-performance technologies are moving at a brisk rate into enterprise data centers to accelerate product development and business operations – including financial services, ERP and sales analysis, fraud detection and cybersecurity, and machine learning-AI,” said Steve Conway, senior vice president of research, Hyperion Research. “The hybrid cloud solutions from Red Hat and NVIDIA are designed to make accelerated computing use easier for enterprises on-premises and in the cloud.”

Video: DDN Accelerates AI, Analytics, and Deep Learning at GTC

In this video from the 2019 GPU Technology Conference, James Coomer from DDN describes the company’s high-speed storage solutions for AI, machine learning, and HPC. “This week at GTC, DDN is showcasing its high speed storage solutions, including its A³I architecture and new customer use cases in autonomous driving, life sciences, healthcare, retail, and financial services. DDN’s next generation of A³I reference architectures includes NVIDIA’s DGX POD, DGX-2, and DDN’s AI400 parallel storage appliance.”

Lenovo HPC Clusters come to the Nimbix Cloud

Today HPC cloud provider Nimbix announced a new strategic partnership with Lenovo Data Center Group (DCG). “Lenovo DCG and Nimbix have teamed up to deliver flexible, powerful solutions based on Lenovo HPC clusters and the Nimbix Cloud. By bringing together Lenovo’s supercomputing expertise with JARVICE, the purpose-built, container-based, bare metal HPC Cloud platform from Nimbix, customers can tailor their hardware and software resources to meet their business requirements, no matter how demanding.”

Red Hat Powers NVIDIA DGX-1 for AI Workloads

Today Red Hat announced it is collaborating with NVIDIA to bring a new wave of open innovation around emerging workloads like artificial intelligence, deep learning and data science to enterprise datacenters around the world. “Red Hat Enterprise Linux’s enablement of NVIDIA GPUs on our Sierra supercomputer provides commonality across our systems, greatly facilitating our users’ ability to exploit the world’s third fastest computer,” said Bronis Supinski, CTO of Livermore Computing.

Using AI to Detect Gravitational Waves with the Blue Waters Supercomputer

NCSA researchers are using AI technologies to detect gravitational waves. The work is described in a new article in Physical Review D this month. “This article shows that we can automatically detect and group together noise anomalies in data from the LIGO detectors by using artificial intelligence algorithms based on neural networks that were already pre-trained to classify images of real-world objects,” said research scientist Eliu Huerta.

NVIDIA GPUs Power Fujitsu AI Supercomputer at RIKEN in Japan

Fujitsu has posted news that its new AI supercomputer at RIKEN in Japan is already being used for AI research. Called RAIDEN (Riken AIp Deep learning ENvironment), the GPU-accelerated Fujitsu system sports 4 petaflops of processing power. “The RAIDEN supercomputer is built around Fujitsu PRIMERGY RX2530 M2 servers and 24 NVIDIA DGX-1 systems. With 8 NVIDIA Tesla GPUs per chassis, the DGX-1 includes access to today’s most popular deep learning frameworks.”

Exxact Corporation offers NVIDIA DGX Station and DGX-1 for Deep Learning

Today Exxact Corporation announced that it will offer the new NVIDIA DGX Station and DGX-1 systems featuring the NVIDIA Tesla V100 data center GPUs based on the NVIDIA Volta architecture. “NVIDIA’s DGX portfolio is paving the way for a new era of computing,” said Jason Chen, Vice President of Exxact Corporation. “The performance of the new DGX Station and DGX-1 systems for AI and advanced analytics is unmatched, providing data scientists a complete hardware and software package for compute-intensive AI exploration.”