Wave Computing Launches TritonAI 64 Platform for High-Speed Inferencing

Today AI startup Wave Computing announced its new TritonAI 64 platform, which integrates a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge now, with bfloat16 and 32-bit floating point-based support for edge training in the future.
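The bfloat16 format mentioned here keeps float32's 8-bit exponent (and thus its dynamic range) while cutting the mantissa to 7 bits, which is why it is attractive for training at the edge. As a rough illustration only (not Wave's implementation, and using simple truncation rather than the round-to-nearest behavior real hardware typically uses), the conversion can be sketched in Python:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping its top 16 bits."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 bits back to float32; low mantissa bits become zero."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# The exponent field survives intact, so range is preserved while
# precision drops to roughly 3 decimal digits.
approx_pi = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

Because the exponent is untouched, very large and very small values round-trip without overflow or underflow; only the trailing mantissa precision is lost.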

Job of the Week: Computational Scientist at UT Southwestern Medical Center

The UT Southwestern Medical Center in Dallas is seeking a Computational Scientist in our Job of the Week. “The Computational Scientist will support faculty and students in adapting computational strategies to the specific features of the HPC infrastructure. The successful candidate will work with a range of systems and technologies such as compute clusters, parallel file systems, high-speed interconnects, GPU-based computing, and database servers.”

Podcast: Enterprises go HPC at GPU Technology Conference

In this podcast, the Radio Free HPC team looks at news from the GPU Technology Conference. “Dan has been attending GTC since well before it became the big and important conference that it is today. We get a quick update on what was covered: the long keynote, automotive and robotics, the Mellanox acquisition, how a growing fraction of enterprise applications will be AI.”

AMD Powers Corona Cluster for HPC Analytics at Livermore

Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will be used to support the NNSA Advanced Simulation and Computing (ASC) program in an unclassified site dedicated to partnerships with American industry. “Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single core performance. That lines up well with AMD EPYC.”

Berkeley Engineers build World’s Fastest Optical Switch Arrays

Engineers at the University of California, Berkeley have built a new photonic switch that can control the direction of light passing through optical fibers faster and more efficiently than ever. This optical “traffic cop” could one day revolutionize how information travels through data centers and high-performance supercomputers that are used for artificial intelligence and other data-intensive applications.

Arm A64fx and Post-K: A Game-Changing CPU & Supercomputer

Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe. “Post-K is the flagship next-generation national supercomputer being developed by RIKEN and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64fx many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community.”

GPUs Address Growing Data Needs for Finance & Insurance Sectors

A new whitepaper from Penguin Computing contends “a new era of supercomputing” has arrived — driven primarily by the emergence of graphics processing units or GPUs. The tools once specific to gaming are now being used by investment and financial services to gain greater insights and generate actionable data. Learn how GPUs are spurring innovation and changing how today’s finance companies address their data processing needs. 

Video: HPC Networking in the Real World

Jesse Martinez from Los Alamos National Laboratory gave this talk at the OpenFabrics Workshop in Austin. “High-speed networking has become extremely important in the world of HPC. As parallel processing capabilities increase and storage solutions grow in capacity, the network must be designed and implemented in a way that keeps up with these trends. LANL makes very diverse use of high-speed fabrics within its environment, from the compute clusters to the storage solutions. This keynote/introduction session to the Sys Admin theme at the workshop will focus on how LANL has made use of these diverse fabrics to optimize and simplify data movement and communication for scientists solving real-world problems.”

Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer

Jack Wells from ORNL gave this talk at the GPU Technology Conference. “HPC centers have traditionally been configured for simulation workloads, but deep learning is increasingly applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll share benchmarks comparing natively compiled versus containerized deployments on Power systems like Summit, as well as best practices for deploying deep learning models on HPC resources for scientific workflows.”

DOE Extending Quantum Networks for Long Distance Entanglement

Scientists from Brookhaven National Laboratory, Stony Brook University, and DOE’s Energy Sciences Network (ESnet) are collaborating on an experiment that puts U.S. quantum networking research on the international map. Researchers have built a quantum network testbed that connects several buildings on the Brookhaven Lab campus using unique portable quantum entanglement sources and an existing DOE ESnet communications fiber network—a significant step in building a large-scale quantum network that can transmit information over long distances.