UMass Dartmouth Speeds Research with Hybrid Supercomputer from Microway

UMass Dartmouth’s powerful new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously, with over 85% more total memory and over four times the aggregate memory bandwidth. “The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads. Over 50 nodes include Intel Xeon Scalable processors, DDR4 memory, SSDs and Mellanox ConnectX-5 EDR 100Gb InfiniBand. A subset of systems also feature NVIDIA V100 GPU accelerators. Equally important is a second subset of IBM Power Systems AC922 compute nodes with POWER9 CPUs and 2nd Generation NVLink.”

Time-lapse Video of Big Red 200 Cray Supercomputer at Indiana University

In this video, technicians install the Big Red 200 supercomputer at Indiana University. IU is the first university to deploy a Cray Shasta system, the Cray Slingshot interconnect and Cray Urika AI Suite for Shasta, providing its engineers, researchers and scientists powerful resources for the next era of computing. The new supercomputer will be instrumental in the University’s exploration and advancement of AI in education, cybersecurity, medicine, environmental science and more.

Podcast: Spell startup looks to bring AI to the people

In this AI Podcast, Serkan Piantino from Spell describes how his company is making machine learning easier. “We want to empower and transform the global workforce by making deep learning and artificial intelligence accessible to everyone. We believe that as organizations and individuals can harness the power of machine learning, our world will change quickly. Our mission is to make sure the technology driving this change is not mysterious and locked away but open and available for everyone.”

OSS PCI Express 4.0 Expansion System does AI on the Fly with Eight GPUs

Today One Stop Systems (OSS) announced the availability of a new OSS PCIe 4.0 value expansion system incorporating up to eight of the latest NVIDIA V100S Tensor Core GPUs. As the newest member of the company’s AI on the Fly product portfolio, the system delivers data center capabilities to HPC and AI edge deployments in the field or for mobile applications. “The 4U value expansion system adds massive compute capability to any Gen 3 or Gen 4 server via two OSS PCIe x16 Gen 4 links. The links can support an unprecedented 512 Gbps of aggregate bandwidth to the GPU complex.”
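The quoted 512 Gbps figure is consistent with PCIe 4.0 raw signaling rates: 16 GT/s per lane across two x16 links. A minimal sketch of that arithmetic (raw rate, before 128b/130b encoding overhead; the constants are from the PCIe 4.0 spec, not from OSS):

```python
# Sanity-check the quoted aggregate bandwidth (raw signaling rate).
GT_PER_S_PER_LANE = 16   # PCIe 4.0 raw transfer rate per lane (GT/s)
LANES_PER_LINK = 16      # each OSS link is x16
LINKS = 2                # two OSS PCIe x16 Gen 4 links

raw_gbps = GT_PER_S_PER_LANE * LANES_PER_LINK * LINKS
print(raw_gbps)  # 512
```

Usable throughput after 128b/130b encoding is slightly lower (about 63 GB/s per x16 link), but vendors commonly quote the raw aggregate.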

NVIDIA and Arm look to accelerate HPC Worldwide

In this video, NVIDIA’s Duncan Poole and Arm’s David Lecomber explain how the two companies accelerate the world’s fastest supercomputers. “At SC19, NVIDIA introduced a reference design platform that enables companies to quickly build GPU-accelerated Arm-based servers, driving a new era of high performance computing for a growing range of applications in science and industry. The reference design platform — consisting of hardware and software building blocks — responds to growing demand in the HPC community to harness a broader range of CPU architectures.”

Podcast: AI4Good Lab Empowers Women in Computer Science

In this AI Podcast, Doina Precup describes why there doesn’t need to be a gender gap in computer science education. An associate professor at McGill University and research team lead at DeepMind, Precup shares her personal experiences, along with the AI4Good Lab she co-founded to give women more access to machine learning training.

ORNL Tests Arm-based Wombat Platform with NVIDIA GPUs

Researchers at ORNL are trying out their HPC codes on Wombat, a test bed cluster based on production Marvell ThunderX2 CPUs and NVIDIA V100 GPUs. The small cluster provides a platform for testing NVIDIA’s new CUDA software stack purpose-built for Arm CPU systems. “Eight teams successfully ported their codes to the new system in the days leading up to SC19. In less than 2 weeks, eight codes in a variety of scientific domains were running smoothly on Wombat.”

NVIDIA DGX SuperPOD: Instant Infrastructure for AI Leadership

Darrin Johnson from NVIDIA gave this talk at the DDN User Group. “The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure that delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world’s most challenging AI problems. When combined with DDN’s A3I data management solutions, NVIDIA DGX SuperPOD creates a real competitive advantage for customers looking to deploy AI at scale.”

Microway Deploys NVIDIA DGX-2 supercomputers at Oregon State University

Microway has deployed six NVIDIA DGX-2 supercomputer systems at Oregon State University. As an NVIDIA Partner Network HPC Partner of the Year, Microway installed the DGX-2 systems, integrated software, and transferred its extensive AI operational knowledge to the University team. “The University selected the NVIDIA DGX-2 platform for its immense power, technical support services, and the Docker images with NVIDIA’s NGC containerized software. Each DGX-2 system delivers an unparalleled 2 petaFLOPS of AI performance.”

Joe Landman on How the Cloud is Changing HPC

In this special guest feature, Joe Landman from Scalability.org writes that the move to cloud-based HPC is having some unexpected effects on the industry. “When you purchase a cloud HPC product, you can achieve productivity in time scales measurable in hours to days, where previously weeks to months was common. It cannot be overstated how important this is.”