Today TYAN launched its latest GPU server platforms, which support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads, including AI training, inference, and supercomputing applications. “AI is increasingly being infused into data centers, and more organizations plan to invest in AI infrastructure that supports rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN’s GPU server platforms, with NVIDIA V100S GPUs as the compute building block, enable enterprises to power their AI infrastructure deployments and help solve the most computationally intensive problems.”
HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms
“The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game-changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web 2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security.”
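In-Network Computing moves parts of collective operations, such as MPI reductions, from the hosts into the switch fabric. As a rough illustration of the class of operation this targets, the sketch below shows a plain MPI_Allreduce; it is a minimal, generic MPI example that assumes any standard MPI library running over InfiniBand, not a specific Mellanox offload API.

```c
/*
 * Minimal MPI_Allreduce sketch. Collectives like this are the kind of
 * operation that in-network computing can offload to the switch fabric;
 * the call itself is plain, portable MPI.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one value; the reduction sums them across all ranks. */
    int local = rank;
    int global_sum = 0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, global_sum);

    MPI_Finalize();
    return 0;
}
```

Because the MPI interface stays the same, applications written this way can benefit from in-network acceleration without source changes when the underlying library and fabric support it.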
NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand
Today Mellanox announced that the Center for Information Technology at the U.S. National Institutes of Health (NIH) has selected Mellanox 100Gb/s EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is the result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”
New InfiniBand Architecture Specifications Extend Virtualization Support
“As performance demands continue to evolve in both HPC and enterprise cloud applications, the IBTA saw an increasing need for new enhancements to InfiniBand’s network capabilities, support features and overall interoperability,” said Bill Magro, co-chair of the IBTA Technical Working Group. “Our two new InfiniBand Architecture Specification updates satisfy these demands by delivering interoperability and testing upgrades for EDR and FDR, flexible management capabilities for optimal low-latency and low-power functionality, and virtualization support for better network scalability.”
University of Tokyo Selects Mellanox EDR InfiniBand
Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand Switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.
Mellanox Rolls Out EDR InfiniBand Routers
Today Mellanox announced a new line of InfiniBand router systems. The new EDR 100Gb/s InfiniBand routers enable a new level of scalability critical for the next generation of mega data-center deployments, as well as expanded capabilities for data center isolation between different users and applications. The new routers deliver a consistent, high-performance, low-latency routing solution that is mission critical for high performance computing, cloud, Web 2.0, machine learning and enterprise applications.
IBTA Plugfest Expands EDR InfiniBand & RoCE Ecosystem
“IBTA’s world-class compliance and interoperability program ensures the dependability of the evolving InfiniBand specification, which in turn broadens industry adoption and user confidence,” said Rupert Dance, co-chair of the IBTA Compliance and Interoperability Working Group (CIWG). “With the continued support of our members and partners, the IBTA is able to offer the industry invaluable resources to help guide critical decision making during deployment of InfiniBand or RoCE solutions.”
Rich Graham Presents: The Exascale Architecture
Rich Graham presented this talk at the Stanford HPC Conference. “Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is progressing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements that significantly improve InfiniBand’s scalability, performance, and ease of use.”
VSC in Belgium Moves to EDR InfiniBand for Medical Research
Today Mellanox announced that the Flemish Supercomputer Center (VSC) in Belgium has selected Mellanox’s end-to-end 100Gb/s EDR InfiniBand interconnect solutions for integration into a new LX-series supercomputer to be supplied by NEC Corporation. The system, which will be the fastest supercomputer (Tier-1) and the first complete end-to-end EDR 100Gb/s InfiniBand system in Belgium, is another example of the increasing global adoption of EDR InfiniBand technology.
Comparing FDR and EDR InfiniBand
Over at the Dell HPC Blog, Olumide Olusanya and Munira Hussain have posted an interesting comparison of FDR and EDR InfiniBand. “In the first post, we shared OSU Micro-Benchmarks (latency and bandwidth) and HPL performance between FDR and EDR InfiniBand. In this part, we will further compare performance using additional real-world applications such as ANSYS Fluent, WRF, and NAS Parallel Benchmarks. In both blogs, we have shown several micro-benchmark and real-world application results to compare FDR with EDR InfiniBand.”
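For readers unfamiliar with the latency numbers cited above, the sketch below shows a simplified MPI ping-pong loop of the sort micro-benchmarks such as osu_latency use to measure point-to-point latency between two ranks. It is an illustrative stand-in, not the OSU benchmark code itself, and assumes a standard MPI installation; the binary name in the usage note is hypothetical.

```c
/*
 * Simplified ping-pong latency sketch: rank 0 and rank 1 bounce a single
 * byte back and forth, and the average one-way time is reported.
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "This sketch needs at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* Send to rank 1, then wait for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Echo whatever rank 0 sends. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("Average one-way latency: %.2f us\n",
               elapsed / ITERATIONS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different nodes (for example, mpirun -np 2 ./pingpong with an appropriate host list) so that the measurement exercises the interconnect rather than shared memory.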