
insideHPC Special Report Optimize Your WRF Applications – Part 3

A popular application for simulating weather and regional climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT can work with leading research and commercial organizations to lower the Total Cost of Ownership by supplying highly tuned applications that are optimized to work on leading-edge infrastructure.

Dell Technologies HPC Community Interview: Bob Wisniewski, Intel’s Chief HPC Architect, Talks Aurora and Getting to Exascale

We’re recognizing that HPC is expanding to include AI. But it’s not just AI, it is big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments – it’s all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge and Intel realizes that a single hardware solution is not going to be right for everybody.

insideHPC Special Report: HPC and AI for the Era of Genomics – Part 3

This special report, sponsored by Dell Technologies, takes a deep dive into HPC and AI for life sciences in the era of genomics. The report also highlights a lineup of Ready Solutions created by Dell Technologies: highly optimized and tuned hardware and software stacks for a variety of industries. The Ready Solutions for HPC Life Sciences have been designed to speed time to production, improve performance with purpose-built solutions, and scale more easily with modular building blocks for capacity and performance.

San Diego Supercomputer Center Leverages Bright Cluster Manager in New Expanse Supercomputer

Bright Computing, a global leader in Linux cluster automation and management software for HPC and machine learning, announced that the San Diego Supercomputer Center (SDSC) at the University of California San Diego will be using Bright Cluster Manager to manage the facility’s newest supercomputer, called ‘Expanse’. The Bright Cluster Manager software platform will enable Expanse to balance and manage resource diversity across virtually all domains of its science and engineering users, maximizing resource utilization and increasing workload efficiency for research scientists across the country and beyond.

Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges Artificial Intelligence and high performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data.”

AMD Wins Slot in Latest NVIDIA A100 Machine Learning System

Today AMD demonstrated continued momentum in HPC with NVIDIA’s announcement that 2nd Generation AMD EPYC 7742 processors will power its new DGX A100 dedicated AI and Machine Learning system. AMD has racked up an impressive set of HPC wins over the past year, and has been chosen by the DOE to power two pending exascale-class supercomputers, Frontier and El Capitan. “2nd Gen AMD EPYC processors are the first and only current x86-architecture server processors supporting PCIe 4.0, providing up to 128 lanes of I/O per processor for high performance computing and connections to other devices like GPUs.”

New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics

Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. “DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects.”

Video: The Future of Quantum Computing with IBM

In this video, Dario Gil from IBM shares results from the IBM Quantum Challenge and describes how you can access and program quantum computers on the IBM Cloud today. “Those working in the Challenge joined all those who regularly make use of the 18 quantum computing systems that IBM has on the cloud, including the 10 open systems and the advanced machines available within the IBM Q Network. During the 96 hours of the Challenge, the total use of the 18 IBM Quantum systems on the IBM Cloud exceeded 1 billion circuits a day.”
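The “circuits” counted here are small quantum programs: sequences of gates applied to qubits, then measured. As a rough illustration of what such a circuit does, here is a minimal NumPy statevector simulation of a 2-qubit Bell-state circuit (a Hadamard followed by a CNOT), the canonical first program on any quantum system. This is a hedged stand-in for pedagogy, not the IBM Cloud or Qiskit API itself.

```python
import numpy as np

# Single-qubit Hadamard gate and 2x2 identity.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

# CNOT with the first qubit as control: |10> -> |11>, |11> -> |10>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                       # start in |00>
state = np.kron(H, I2) @ state       # Hadamard on the first qubit
state = CNOT @ state                 # entangle the pair

probs = np.abs(state) ** 2
print(probs.round(2))                # [0.5 0.  0.  0.5]
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10, the entanglement signature that real hardware runs (on IBM's systems or elsewhere) aim to reproduce.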

New Gordon Bell Special Prize announced for HPC-Based COVID-19 Research

Today ACM announced the inception of the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. The new award will be presented in 2020 and 2021 and will recognize outstanding research achievements that use high performance computing applications to understand the COVID-19 pandemic, including the understanding of its spread. Nominations will be selected based on performance and innovation in their computational methods, in addition to their contributions toward understanding the nature, spread and/or treatment of the disease.

Using AI to Identify Brain Tumors with Federated Learning

Researchers at Intel Labs and the Perelman School of Medicine are using a privacy-preserving technique called federated learning to train AI models that identify brain tumors. With federated learning, research institutions can collaborate on deep learning projects without sharing patient data. “AI shows great promise for the early detection of brain tumors, but it will require more data than any single medical center holds to reach its full potential,” said Jason Martin, principal engineer at Intel Labs.
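The mechanism described here, each site training on its own data and sharing only model weights with an aggregator, can be sketched in a few lines. Below is a minimal federated-averaging (FedAvg) toy for a one-parameter linear model; the hospital datasets, learning rate, and function names are illustrative assumptions, not the Intel/Penn Medicine implementation.

```python
def local_update(w, data, lr=0.1):
    """One pass of gradient descent on a site's private data
    for a simple linear model y = w * x. The raw data never
    leaves the site; only the updated weight is returned."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(site_weights):
    """Aggregator step: average the locally trained weights."""
    return sum(site_weights) / len(site_weights)

# Two hypothetical hospitals with private datasets, both roughly
# consistent with a true slope of w = 2.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]
hospital_b = [(1.0, 2.2), (2.0, 3.8)]

w_global = 0.0
for round_num in range(20):
    # Each site starts from the current global model and trains locally.
    w_a = local_update(w_global, hospital_a)
    w_b = local_update(w_global, hospital_b)
    # Only the weights travel back to the aggregator.
    w_global = federated_average([w_a, w_b])

print(round(w_global, 2))  # ~1.96, near the true slope of 2
```

Real systems add secure aggregation, many participants, and deep networks, but the privacy property is the same: the aggregator sees model parameters, never patient records.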