DOE Funds Quantum Computing and Networking Research

Today, the U.S. Department of Energy (DOE) announced $60.7 million in funding to advance the development of quantum computing and networking. “We are on the threshold of a new era in Quantum Information Science and quantum computing and networking, with potentially great promise for science and society,” said Under Secretary of Science Paul Dabbar. “These projects will help ensure U.S. leadership in these important new areas of science and technology.”

NSF Grant to Help Develop Cyberinfrastructure Across Midwest

The National Science Foundation has awarded a $1.4 million grant to a team of experts led by Timothy Middelkoop, assistant teaching professor of industrial and manufacturing systems engineering in the University of Missouri’s College of Engineering. The researchers said the grant will fill an emerging need by providing training and resources in high-performance computing systems. “There is a critical need for building cyberinfrastructure across the nation, including the Midwest region,” said Middelkoop, who also serves as the director of Research Computing Support Services in the Division of Information Technology at MU. “It is our job as cyberinfrastructure professionals to facilitate research and work with researchers as a team to identify the best practices.”

Gadi – Australia’s Newest Supercomputer

Allan Williams from NCI gave this talk at the Perth HPC Conference. “With 3,200 nodes, Gadi will power some of Australia’s most crucial research, seeking to solve some of the most complex and pressing challenges currently facing the world. Researchers from organizations including the CSIRO, Geosciences Australia, and the Bureau of Meteorology will benefit from faster speeds and higher capacity compared to the existing supercomputer.”

Mellanox to Ship 1 Million ConnectX Adapters in Q3 2019

Today Mellanox announced it is on track to ship over one million ConnectX and BlueField Ethernet network adapters in Q3 2019, a new quarterly record. “Leading data centers worldwide select the award-winning ConnectX and BlueField SmartNICs to leverage networking speeds of 25, 50, 100, and 200 Gb/s, and take advantage of advanced offload capabilities to accelerate networking, virtualization, storage, and security tasks alike — freeing up server CPUs for money-making applications.”

Video: The Cray Shasta Architecture

In this video from the HPC User Forum at Argonne, Steve Scott from Cray presents: The Cray Shasta Architecture. The DOE has selected the Shasta architecture to power all three of its planned exascale systems coming to Argonne, ORNL, and LLNL. “Shasta allows for multiple processor and accelerator architectures and a choice of system interconnect technologies, including our new Cray-designed and developed interconnect we call Slingshot.”

Guardicore and Mellanox to Deliver Agentless and High-Performance Micro-Segmentation in Data Centers

Today Guardicore announced that it has partnered with Mellanox to deliver the first agentless, high-performance, low-latency micro-segmentation solution for high-speed 10G-100G networks. The solution leverages both the Guardicore Centra security platform and Mellanox BlueField SmartNIC solutions to provide customers with hardware-embedded micro-segmentation security. This integration allows customers using BlueField SmartNICs to meet micro-segmentation requirements on high-speed networks, or in environments where agent-based solutions cannot be used. The new solution is fully integrated and managed centrally by Guardicore Centra.

Mellanox Rolls Out New LinkX 200G & 400G Cables & Transceivers

Today Mellanox announced new LinkX 100/200/400G cables and transceivers at the China International Optoelectronic Expo (CIOE) September 4th in Shenzhen, China and the European Convention for Optical Communications (ECOC) September 21st in Dublin, Ireland. “We’ve had tremendous adoption of our full line of LinkX 25/50/100G cables and transceivers with web-scale, cloud computing, and OEM customers in China and worldwide,” said Steen Gundersen, vice president of LinkX interconnects at Mellanox Technologies. “We are just at the beginning of the transition to 200G, and 400G will soon follow. Customers select Mellanox because of our expertise in high-speed interconnects, our capacity to ship in volume, and the high quality of our products.”

NVIDIA Powers GRC Immersion Cooled System at TACC

Today GRC announced its joint project with NVIDIA to help power a GPU-intensive computing subsystem for TACC’s Frontera Supercomputer, the world’s largest academic supercomputer. “GRC is proud of its long history with TACC and we’re delighted to have been able to collaborate once again with NVIDIA to help power the next generation of academic research,” […]

IBTA Celebrates 20 Years of Growth and Industry Success

“This year, the IBTA is celebrating 20 years of growth and success in delivering these widely used and valued technologies to the high-performance networking industry. Over the past two decades, the IBTA has provided the industry with technical specifications and educational resources that have advanced a wide range of high-performance platforms. InfiniBand and RoCE interconnects are deployed in the world’s fastest supercomputers and continue to significantly impact future-facing applications such as Machine Learning and AI.”
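
To make the interconnects concrete: below is a minimal sketch of how a Linux application discovers InfiniBand or RoCE devices through libibverbs, the standard user-space verbs API that implements the IBTA specifications. This is our own illustration, not IBTA sample code; it assumes a host with the rdma-core package installed, and the file name list_devices.c is hypothetical.

/* Minimal sketch: enumerate RDMA-capable devices via libibverbs.
   Assumes Linux with rdma-core; build: cc list_devices.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    /* Returns a NULL-terminated array of available RDMA devices. */
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }
    printf("Found %d RDMA device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));
    ibv_free_device_list(devices);
    return 0;
}

On a host with an InfiniBand or RoCE adapter installed, this prints one device name per adapter port pair exposed by the driver; both interconnect families surface through this same API.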

Video: Mellanox Rolls Out SmartNICs

In this video, Mellanox CTO Michael Kagan talks about the next step for SmartNICs and the company’s newly released ConnectX-6 Dx product driven by its own silicon. “The BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/s.”
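
For readers curious how the dual Ethernet/InfiniBand connectivity mentioned above looks from software, here is a minimal sketch (our illustration, not Mellanox sample code) that opens the first RDMA device on a host and reports the link layer and state of port 1 via libibverbs. It assumes Linux with rdma-core installed, and the file name query_port.c is hypothetical.

/* Minimal sketch: report the link layer (InfiniBand vs. Ethernet/RoCE)
   and state of port 1 on the first RDMA device found.
   Assumes Linux with rdma-core; build: cc query_port.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (!ctx) {
        perror("ibv_open_device");
        return 1;
    }
    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port)) {  /* ports are numbered from 1 */
        perror("ibv_query_port");
        return 1;
    }
    printf("%s port 1: %s link, state %s\n",
           ibv_get_device_name(devices[0]),
           port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)"
                                                      : "InfiniBand",
           ibv_port_state_str(port.state));
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}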