insideHPC Special Report: HPC and AI for the Era of Genomics

This special report, sponsored by Dell Technologies, takes a deep dive into HPC and AI for life sciences in the era of genomics. It also highlights Dell Technologies’ lineup of Ready Solutions: highly optimized and tuned hardware and software stacks for a variety of industries. The Ready Solutions for HPC Life Sciences are designed to speed time to production, improve performance with purpose-built solutions, and scale more easily with modular building blocks for capacity and performance.

Supercomputing Drug Screening for Deadly Heart Arrhythmias

Using XSEDE supercomputers, scientists have for the first time developed a way to screen drugs for induced arrhythmias based on their chemical structures. Sudden cardiac arrest is the leading natural cause of death in the U.S., with an estimated 325,000 deaths per year. “Stampede 2 offered a large array of powerful multi-core CPU nodes, which we were able to efficiently use for the dozens of molecular dynamics runs we had to do in parallel. Such efficiency and scalability rivaled and even exceeded other resources we used for those simulations, including GPU-equipped nodes,” Vorobyov added.
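
The pattern Vorobyov describes is embarrassingly parallel: each molecular dynamics run is independent, so dozens can execute side by side across CPU cores. Below is a minimal sketch of that pattern; the md_engine binary and input-file names are hypothetical placeholders, not the team’s actual tooling.

```python
# Launch many independent MD runs in parallel; each run is its own process.
# "md_engine" and the replica_*.inp files are hypothetical placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_md(replica: int) -> int:
    cmd = ["md_engine", "-in", f"replica_{replica}.inp",
           "-out", f"replica_{replica}.log"]
    return subprocess.run(cmd).returncode  # 0 means the run finished cleanly

with ProcessPoolExecutor(max_workers=48) as pool:  # e.g., one worker per core
    codes = list(pool.map(run_md, range(48)))

print("failed replicas:", [i for i, c in enumerate(codes) if c != 0])
```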

Liqid, Dell, and AMD power Industry’s Fastest Single-socket Storage Server

Today Liqid announced that it has worked with industry leaders AMD and Dell Technologies to deliver one of the fastest one-socket storage rack servers on the market. “Liqid’s composable Gen-4 PCI-Express (PCIe) fabric technology, the LQD4500, is coupled with the AMD EPYC 7002 Series Processors, and enclosed in Dell Technologies’ industry-leading Dell EMC PowerEdge R7515 Rack Server to deliver an architecture designed for the most demanding next-generation, AI-driven HPC application environments.”

GCS Centres in Germany support COVID-19 research with HPC

Epidemiologists have turned to the power of supercomputers to model and predict how the disease spreads at local and regional levels, in hopes of forecasting potential new hot spots and guiding policy makers’ decisions on containment. GCS is supporting several projects focused on these goals. “Our workflows are perfectly scalable in the sense that the number of calculations we can perform is directly proportional to the number of cores available.”
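
That proportionality claim is the signature of an embarrassingly parallel workload: doubling the cores doubles the calculations completed per unit time. A toy illustration, with a purely hypothetical per-core rate:

```python
# "Perfectly scalable": throughput grows linearly with core count.
# The per-core rate below is a made-up number for illustration only.
calcs_per_core_hour = 10  # hypothetical calculations one core finishes per hour

for cores in (1_000, 10_000, 100_000):
    print(f"{cores:>7} cores -> {cores * calcs_per_core_hour:>9} calculations/hour")
```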

Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges Artificial Intelligence and high performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data.”

How HPC is aiding the fight against COVID-19

In this special guest feature, Dr. Rosemary Francis writes that HPC is playing a massive part in the fight against COVID-19 through modeling, genomics, and drug discovery. “Thanks to the work in labs and HPC centres around the world, we now know that the molecular mechanism of SARS-CoV-2 entry is via a lock-and-key effect; a spike on the outside of the virus acts as a key to unlock an ACE2 receptor protein on the human cell.”

New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics

Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. “DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects.”
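
Those headline numbers decompose cleanly across the eight GPUs. The back-of-envelope check below assumes the launch A100’s 40GB of HBM2 per GPU and its 624 TFLOPS peak for FP16 Tensor Core math with structured sparsity, the figures NVIDIA quoted at announcement:

```python
# Back-of-envelope check of the DGX A100 headline numbers.
gpus = 8
hbm_per_gpu_gb = 40        # launch A100 SXM4: 40 GB HBM2 per GPU
fp16_sparse_tflops = 624   # A100 peak FP16 Tensor Core TFLOPS with sparsity

print(gpus * hbm_per_gpu_gb, "GB total GPU memory")        # 320 GB
print(gpus * fp16_sparse_tflops / 1000, "PFLOPS peak AI")  # ~5 PFLOPS
```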

MemVerge Introduces Big Memory Computing

Today MemVerge introduced Big Memory Computing, a new category that the company says will spark a revolution in data center architecture, with all applications running in memory. Big Memory Computing combines DRAM, persistent memory, and Memory Machine software, so that memory is abundant, persistent, and highly available. “With MemVerge’s Memory Machine technology and Intel’s Optane DC persistent memory, enterprises will be able to more efficiently and quickly gain insights from enormous amounts of data in near-real time.”
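
As a generic illustration of the underlying idea, persistent data reached through ordinary memory operations, the sketch below uses a plain memory-mapped file. This is not MemVerge’s Memory Machine API; real persistent memory would typically be mapped from a DAX-enabled device rather than a file in /tmp.

```python
# Generic memory-mapped persistence demo (not MemVerge's actual API).
import mmap

path = "/tmp/pmem_demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # reserve one page of backing storage

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mm:
        mm[0:5] = b"hello"             # looks like a plain in-memory write
        mm.flush()                     # push dirty pages to the backing store

with open(path, "rb") as f:
    print(f.read(5))                   # b'hello' survives after the mapping is gone
```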

Quantum StorNext Makes Cloud Content More Accessible, Speeds Data Retrieval

Today Quantum Corp. announced new advancements for its StorNext file system and data management software designed to make cloud content more accessible, with significantly improved read and write speeds for any cloud- or object-store-based storage solution. “We are working closely with our customers to innovate and enhance the capabilities of our StorNext file system,” said Ed Fiore, Vice President and General Manager, Primary Storage, Quantum. “At this time when customers are forced to work remotely, the flexibility to move content between locations, both on-premise and cloud datacenters, is critical. This latest version of StorNext software adds new ways to archive content and access it in the cloud and is another step toward providing a seamless bridge between on-premise and the cloud.”

Video: The Future of Quantum Computing with IBM

In this video, Dario Gil from IBM shares results from the IBM Quantum Challenge and describes how you can access and program quantum computers on the IBM Cloud today. “Those working in the Challenge joined all those who regularly make use of the 18 quantum computing systems that IBM has on the cloud, including the 10 open systems and the advanced machines available within the IBM Q Network. During the 96 hours of the Challenge, the total use of the 18 IBM Quantum systems on the IBM Cloud exceeded 1 billion circuits a day.”
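
For readers who want to try this themselves, IBM’s cloud-hosted quantum systems are programmed with Qiskit, IBM’s open-source SDK. Below is a minimal sketch that builds a two-qubit Bell-state circuit and runs it on a local simulator (assuming the qiskit and qiskit-aer packages are installed); submitting to real hardware over the IBM Cloud uses the same circuit-building steps.

```python
# Minimal Bell-state circuit in Qiskit, executed on a local simulator.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator  # local simulator; real devices are on IBM's cloud

qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits out

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)               # roughly half '00' and half '11'
```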