Liqid Partners With ScaleMP to Introduce Liqid Memory

Today ScaleMP announced that it has partnered with Liqid Inc. to enable users of in-memory workloads to break beyond server memory limitations and expand total system memory by an order of magnitude over the installed DRAM, without any operating system or application modifications. “All types of businesses for which data performance translates directly to operational success can now leverage non-volatile memory at scale backed by strong economics.”

Exascale Computing Project Update

Doug Kothe from the Exascale Computing Project gave this talk at the HPC User Forum. “The Exascale Computing Project (ECP) is focused on accelerating the delivery of a capable exascale computing ecosystem that delivers 50 times more computational science and data analytic application power than possible with DOE HPC systems such as Titan (ORNL) and Sequoia (LLNL). With the goal to launch a US exascale ecosystem by 2021, the ECP will have profound effects on the American people and the world.”

IBM Brings World’s Largest Fleet of Quantum Computing Systems Online

Today, IBM announced the opening of the IBM Quantum Computation Center in New York State. “To meet growing demand for access to real quantum hardware, ten quantum computing systems are now online through IBM’s Quantum Computation Center. The fleet is now composed of five 20-qubit systems, one 14-qubit system, and four 5-qubit systems. Five of the systems now have a Quantum Volume of 16 – a measure of the power of a quantum computer – demonstrating a new sustained performance milestone.”
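
Quantum Volume is defined as QV = 2^n, where n is the size of the largest “square” circuit (n qubits, depth n) the machine can execute reliably, so a Quantum Volume of 16 corresponds to n = 4. As a rough illustration (ours, not IBM’s), the sketch below uses Qiskit to build and simulate a model Quantum Volume circuit; the package names and calls reflect current Qiskit releases and may differ by version.

# A minimal sketch, assuming current Qiskit and qiskit-aer packages.
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit_aer import AerSimulator

n = 4                                  # QV = 2**n = 16
qv = QuantumVolume(n, depth=n, seed=42)
qv.measure_all()

sim = AerSimulator()
counts = sim.run(transpile(qv, sim), shots=1024).result().get_counts()
print(counts)                          # heavy-output sampling underlies the QV test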

Aiden Lab Chooses WEKA to Accelerate Genomics Research

Today WekaIO announced that Aiden Lab at the Baylor College of Medicine, a leading genome research facility, has selected the Weka File System (WekaFS) to accelerate its genomics research. “WekaFS has delivered a 3x improvement in performance at Aiden Lab and is enabling it to use its cloud infrastructure more effectively. WekaFS will improve overall productivity and empower researchers to become more efficient at analyzing results.”

Exxact looks to BeeGFS Parallel Storage for HPC, AI, and Life Science Workloads

Today Exxact Corporation announced that it has become a North America Gold Partner of ThinkParQ, the company behind the BeeGFS parallel file system. “BeeGFS offers the usability, flexibility, and performance that our HPC and deep learning customers expect and depend on in a storage solution from us,” said Andrew Nelson, VP of Technology at Exxact Corporation. “Exxact HPC and deep learning clusters with BeeGFS will realize dramatic improvements in managing storage workloads and will handle I/O bandwidth issues with ease.”

New Liquid Cooled AMD EPYC 7H12 Processor Breaks Performance Records

Today AMD announced a new addition to the 2nd Generation AMD EPYC family, the AMD EPYC 7H12 processor. The 64-core/128-thread processor, with a 2.6GHz base frequency, 3.3GHz max boost frequency, and 280W TDP, is built specifically for HPC customers and workloads, using liquid cooling to deliver leadership supercomputing performance. “In Atos testing on its BullSequana XH2000, the new AMD EPYC 7H12 processor set four new world records in server performance. With a LINPACK score of ~4.2 teraFLOPS, the new chip performs ~11% better than the AMD EPYC 7742 processor.”
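
For context, here is a back-of-the-envelope peak-performance estimate (our arithmetic, not AMD’s): Zen 2 cores execute up to 16 double-precision FLOPs per cycle (two 256-bit FMA units), so a single 7H12 socket peaks at roughly 2.66 TFLOPS at base clock. The quoted ~4.2 TFLOPS LINPACK score would then be consistent with a dual-socket node running at roughly 79% of base-clock peak, though the announcement does not state the node configuration.

# A minimal sketch of the peak-FLOPS arithmetic; the 16 FLOPs/cycle
# figure for Zen 2 is our assumption, not part of AMD's announcement.
def peak_tflops(cores, base_ghz, flops_per_cycle=16, sockets=1):
    """Theoretical double-precision peak at base clock, in TFLOPS."""
    return sockets * cores * base_ghz * flops_per_cycle / 1000.0

print(peak_tflops(cores=64, base_ghz=2.6))                    # ~2.66 TFLOPS, one socket
print(peak_tflops(cores=64, base_ghz=2.6, sockets=2))         # ~5.32 TFLOPS, two sockets
print(4.2 / peak_tflops(cores=64, base_ghz=2.6, sockets=2))   # ~0.79 LINPACK efficiency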

DOE Funds Quantum Computing and Networking Research

Today, the U.S. Department of Energy (DOE) announced $60.7 million in funding to advance the development of quantum computing and networking. “We are on the threshold of a new era in Quantum Information Science and quantum computing and networking, with potentially great promise for science and society,” said Under Secretary of Science Paul Dabbar. “These projects will help ensure U.S. leadership in these important new areas of science and technology.”

Research Findings: HPC-Enabled AI

Steve Conway from Hyperion Research gave this talk at the HPC User Forum. “AI adds a fourth branch to the scientific method: inferencing. Inferencing complements theory, experiments, and established simulation methods. Essentially, inferencing is the ability to guess, based on incomplete information. At the same time, simulation is becoming much more data-intensive with the rise of iterative methods. When inferencing is applied to data-intensive simulation, the result is intelligent simulation.”
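
One way to read “intelligent simulation” in code (a sketch of our own, not from the talk) is a surrogate model: train a regressor on outputs of an expensive solver, then infer new results without rerunning it. The toy function below stands in for a real simulation.

# A minimal sketch, assuming scikit-learn; the "simulation" is a toy stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_simulation(x):
    # Stand-in for a costly numerical model.
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 5, size=(200, 1))
y_train = expensive_simulation(X_train).ravel()

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# Inference: guess results for new inputs without running the solver.
X_new = np.array([[1.0], [2.5], [4.0]])
print(surrogate.predict(X_new))             # inferred outputs
print(expensive_simulation(X_new).ravel())  # ground truth for comparison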

NSF Grant to help develop cyberinfrastructure across Midwest

The National Science Foundation has awarded a $1.4 million grant to a team of experts led by Timothy Middelkoop, assistant teaching professor of industrial and manufacturing systems engineering in the University of Missouri’s College of Engineering. The researchers said the grant will fill an emerging need by providing training and resources in high-performance computer systems. “There is a critical need for building cyberinfrastructure across the nation, including the Midwest region,” said Middelkoop, who also serves as the director of Research Computing Support Services in the Division of Information Technology at MU. “It is our job as cyberinfrastructure professionals to facilitate research and work with researchers as a team to identify the best practices.”

NVIDIA TensorRT 6 Breaks 10 millisecond barrier for BERT-Large

Today, NVIDIA released TensorRT 6, which includes new capabilities that dramatically accelerate conversational AI applications, speech recognition, 3D image segmentation for medical applications, and image-based applications in industrial automation. TensorRT is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for AI applications. “With today’s release, TensorRT continues to expand its set of optimized layers, adds highly requested capabilities for conversational AI applications, and delivers tighter integration with frameworks to provide an easy path to deploy your applications on NVIDIA GPUs. In TensorRT 6, we’re also releasing new optimizations that deliver inference for BERT-Large in only 5.8 ms on T4 GPUs, making it practical for enterprises to deploy this model in production for the first time.”
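
For readers who want to try it, here is a minimal sketch (our own, not NVIDIA’s published sample) of building a TensorRT engine from an ONNX model with FP16 enabled, the precision that T4 Tensor Cores accelerate. The model path is hypothetical, and Python API details vary somewhat across TensorRT versions.

# A minimal sketch; "bert_large.onnx" is a hypothetical exported model,
# and API details vary across TensorRT versions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("bert_large.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30    # 1 GiB scratch space for kernel tactics
config.set_flag(trt.BuilderFlag.FP16)  # FP16 kernels on T4 Tensor Cores

engine = builder.build_engine(network, config)
with open("bert_large.engine", "wb") as f:
    f.write(engine.serialize())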