Mellanox Ethernet Accelerates Baidu Machine Learning

Today Mellanox announced that its Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese-language Internet search provider, for Baidu's machine learning platforms. The need for higher data speeds and more efficient data movement made Spectrum switches and RDMA-enabled ConnectX-4 adapters key components in enabling world-leading machine learning […]

HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms

“The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security.”

Penguin Computing Releases Scyld ClusterWare 7

“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”

HPC Advisory Council Announces Global Conference Series for 2017

Today the HPC Advisory Council announced key dates for its 2017 international conference series in the USA and Switzerland. The conferences are designed to attract community-wide participation, industry leading sponsors and subject matter experts. “HPC is constantly evolving and reflects the driving force behind many medical, industrial and scientific breakthroughs using research that harnesses the power of HPC and yet, we’ve only scratched the surface with respect to exploiting the endless opportunities that HPC, modeling, and simulation present,” said Gilad Shainer, chairman of the HPC Advisory Council. “The HPCAC conference series presents a unique opportunity for the global HPC community to come together in an unprecedented fashion to share, collaborate, and innovate our way into the future.”

Mellanox 25G/100G Ethernet Speeds Speech Recognition at iFLYTEK

Today Mellanox announced that iFLYTEK, one of China's leading intelligent speech and language technology companies, has chosen Mellanox's end-to-end 25G and 100G Ethernet solutions, based on ConnectX adapters and Spectrum switches, for its next-generation machine learning center. The partnership between Mellanox and iFLYTEK will enable iFLYTEK to achieve a speech recognition rate as high as 97 percent.

NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand

Today Mellanox announced that the U.S. National Institutes of Health's Center for Information Technology has selected Mellanox 100G EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is the result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. "The Biowulf cluster is NIH's core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research."

Video: Building the Owens Cluster at OSC

In this time-lapse video, engineers build the Owens cluster at the Ohio Supercomputer Center. "Named after Olympic track star Jesse Owens, the new Owens Cluster is powered by Dell PowerEdge servers featuring the new Intel Xeon processor E5-2600 v4 product family, includes storage components manufactured by DDN, and uses an EDR interconnect provided by Mellanox. The center had earlier acquired NetApp software and hardware for home directory storage."

What’s Next for HPC? A Q&A with Michael Kagan, CTO of Mellanox

As an HPC technology vendor, Mellanox is in the business of providing the leading-edge interconnects that drive many of the world's fastest supercomputers. To learn more about what's new for SC16, we caught up with Michael Kagan, CTO of Mellanox. "Moving InfiniBand beyond EDR to HDR is critical not only for HPC, but also for the numerous industries that are adopting AI and Big Data to make real business sense out of the amount of data available and that we continue to collect on a daily basis."

Mellanox Brings HDR to SC16 while Dominating Today’s TOP500

“InfiniBand’s advantages of highest performance, scalability and robustness enable users to maximize their data center return on investment. InfiniBand was chosen by far more end-users compared to a proprietary offering, resulting in a more than 85 percent market share. We are happy to see our open Ethernet adapter and switch solutions enable all of the 40G and the first 100G Ethernet systems on the TOP500 list, resulting in overall 194 systems using Mellanox for their compute and storage connectivity.”

Radio Free HPC Reviews the New TOP500

The new TOP500 list is out, and Radio Free HPC is here podcasting the scoop in their own special way. With two new systems in the TOP 10, there are many different perspectives to share. "The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops."