Brookhaven’s Advanced Computing Lab Expands with DDN A3I Storage and Nvidia DGX-2

High-performance storage vendor DDN, a provider of AI and data-management software and hardware solutions, today announced that Brookhaven National Laboratory has selected DDN's A3I AI400X all-NVMe flash storage appliance to accelerate optimal experimental design for its Computational Science Initiative (CSI). Brookhaven National Laboratory plans to use DDN products along with the NVIDIA DGX-2 AI supercomputer to expand its project […]

vScaler Launches AI Reference Architecture

A new AI reference architecture from vScaler describes how to simplify the configuration and management of software and storage in a cost-effective and easy-to-use environment. “vScaler – an optimized cloud platform built with AI and Deep Learning workloads in mind – provides you with a production-ready environment with integrated Deep Learning application stacks, RDMA-accelerated fabric and optimized NVMe storage, eliminating the administrative burden of setting up these complex AI environments manually.”

Microway Deploys NVIDIA DGX-2 Supercomputers at Oregon State University

Microway has deployed six NVIDIA DGX-2 supercomputer systems at Oregon State University. As an NVIDIA Partner Network HPC Partner of the Year, Microway installed the DGX-2 systems, integrated software, and transferred their extensive AI operational knowledge to the University team. “The University selected the NVIDIA DGX-2 platform for its immense power, technical support services, and the Docker images with NVIDIA’s NGC containerized software. Each DGX-2 system delivers an unparalleled 2 petaFLOPS of AI performance.”

OSU Invests $2.6 million in AI Computing Resources

Oregon State University’s College of Engineering is accelerating its work in artificial intelligence, robotics, driverless vehicles and other areas by acquiring six advanced NVIDIA systems that give the college some of the most powerful computing resources among universities worldwide. “The computing power we now possess will accelerate our research in artificial intelligence and machine learning, while exposing our computer science students to the most advanced technology available in higher education.”

Microway Deploys NVIDIA DGX-2 Supercomputer at Clemson University

Today Microway announced the company has shipped an NVIDIA DGX-2 supercomputer to Clemson University. “The NVIDIA DGX-2 delivers industry-leading 2 petaFLOPS of AI deep learning performance. The system harnesses the power of 16 NVIDIA Tesla V100 GPUs, fully interconnected with the enhanced-bandwidth NVIDIA NVLink interface to boost the speed of deep learning training.”

Red Hat Teams with NVIDIA to Accelerate Machine Learning in the Cloud

Today Red Hat announced it has deepened its alliance with NVIDIA to accelerate the enterprise adoption of AI, machine learning and data analytics workloads in production environments. To move things along, Red Hat is launching an early access program for prospective customers. “High-performance technologies are moving at a brisk rate into enterprise data centers to accelerate product development and business operations – including financial services, ERP and sales analysis, fraud detection and cybersecurity, and machine learning-AI,” said Steve Conway, senior vice president of research, Hyperion Research. “The hybrid cloud solutions from Red Hat and NVIDIA are designed to make accelerated computing use easier for enterprises on-premise and in the cloud.”

Video: DDN Accelerates AI, Analytics, and Deep Learning at GTC

In this video from the 2019 GPU Technology Conference, James Coomer from DDN describes the company’s high-speed storage solutions for AI, machine learning, and HPC. “This week at GTC, DDN is showcasing its high speed storage solutions, including its A³I architecture and new customer use cases in autonomous driving, life sciences, healthcare, retail, and financial services. DDN’s next generation of A³I reference architectures includes NVIDIA’s DGX POD, DGX-2, and DDN’s AI400 parallel storage appliance.”

Nvidia Certifies Colovore As DGX-Ready Data Center Partner

NVIDIA has gained traction in datacenter Machine Learning with their DGX platforms. Now Bay Area provider Colovore has signed up as a colocation partner supporting NVIDIA DGX deployments. “NVIDIA’s DGX-1 and DGX-2 platforms are leading the way in solving complex AI challenges and we are proud to partner with NVIDIA and their customers to provide the most cost-effective, flexible, and scalable data center home for these servers. With close to 1,000 DGX platforms already deployed and operating at Colovore, we have tremendous experience providing the optimal footprint for DGX and HPC infrastructure success.”

NVIDIA Powers New Performance Records on TOP500 List

Today NVIDIA showcased its HPC leadership in the TOP500 list of the world’s fastest supercomputers. The closely watched list shows a 48 percent jump in one year in the number of systems using NVIDIA GPU accelerators. The total climbed to 127 from 86 a year ago, and is three times greater than five years ago. “With the end of Moore’s Law, a new HPC market has emerged, fueled by new AI and machine learning workloads. These rely as never before on our high performance, highly efficient GPU platform to provide the power required to address the most challenging problems in science and society.”

vScaler Cloud Adopts RAPIDS Open Source Software for Accelerated Data Science

vScaler has incorporated NVIDIA’s new RAPIDS open source software into its cloud platform for on-premise, hybrid, and multi-cloud environments. Deployable via its own Docker container in the vScaler Cloud management portal, the RAPIDS suite of software libraries gives users the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. “The new RAPIDS library offers Python interfaces which will leverage the NVIDIA CUDA platform for acceleration across one or multiple GPUs. RAPIDS also focuses on common data preparation tasks for analytics and data science. This includes a familiar DataFrame API that integrates with a variety of machine learning algorithms for end-to-end pipeline accelerations without paying typical serialization costs. RAPIDS also includes support for multi-node, multi-GPU deployments, enabling vastly accelerated processing and training on much larger dataset sizes.”
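The pandas-mirroring design described above can be sketched in a few lines. This is only an illustrative sketch, not vScaler's deployment: the column names and the `prepare` step are invented for the example, and since cuDF requires an NVIDIA GPU plus a RAPIDS install, the sketch falls back to plain pandas when cuDF is unavailable — the point being that the same pipeline code runs in either case.

```python
# Illustrative RAPIDS-style dataframe step. cuDF deliberately mirrors the
# pandas DataFrame API, so identical pipeline code can execute on GPU
# (cudf) or CPU (pandas fallback, used here when no GPU stack is present).
try:
    import cudf as xdf      # GPU path: requires an NVIDIA GPU + RAPIDS install
except ImportError:
    import pandas as xdf    # CPU fallback, purely for illustration

def prepare(df):
    """Hypothetical data-prep step: filter invalid rows, derive a feature."""
    df = df[df["amount"] > 0]                       # drop invalid records
    return df.assign(amount_x2=df["amount"] * 2)    # derived column; with cudf this stays on-GPU

raw = xdf.DataFrame({"amount": [4.0, -1.0, 9.0]})
clean = prepare(raw)
print(len(clean))  # 2 rows survive the filter
```

Because intermediate results stay in GPU memory under cuDF, chaining several such steps avoids the host-device serialization costs the announcement refers to.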