Mellanox Ethernet Accelerates Baidu Machine Learning

Today Mellanox announced that its Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese-language Internet search provider, for Baidu's machine learning platforms. The need for higher data speeds and more efficient data movement made Spectrum switches and RDMA-enabled ConnectX-4 adapters key components in enabling world-leading machine learning […]

Video: Azure High Performance Computing

“Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs.”

Call for Papers: International Workshop on High-Performance Big Data Computing (HPBDC)

The 3rd annual International Workshop on High-Performance Big Data Computing (HPBDC) has issued its Call for Papers. Featuring a keynote by Prof. Satoshi Matsuoka of the Tokyo Institute of Technology, the event takes place on May 29, 2017, in Orlando, FL.

HIP and CAFFE Porting and Profiling with AMD’s ROCm

In this video from SC16, Ben Sander from AMD presents: HIP and CAFFE Porting and Profiling with AMD’s ROCm. “We are excited to present ROCm, the first open-source HPC/Hyperscale-class platform for GPU computing that’s also programming-language independent. We are bringing the UNIX philosophy of choice, minimalism and modular software development to GPU computing. The new ROCm foundation lets you choose or even develop tools and a language run time for your application. ROCm is built for scale; it supports multi-GPU computing in and out of server-node communication through RDMA.”

InfiniBand: When State-of-the-Art becomes State-of-the-Smart

Scot Schultz from Mellanox writes that the company is moving the industry forward to a world-class offload network architecture that will pave the way to Exascale. “Mellanox, alongside many industry thought-leaders, is a leader in advancing the Co-Design approach. The key value and core goal is to strive for more CPU offload capabilities and acceleration techniques while maintaining forward and backward compatibility of new and existing infrastructures; and the result is nothing less than the world’s most advanced interconnect, which continues to yield the most powerful and efficient supercomputers ever deployed.”

Mellanox Shipping ConnectX-5 Adapter

“We are pleased to start shipping the ConnectX-5, the industry’s most advanced network adapter, to our key partners and customers, allowing them to leverage our smart network architecture to overcome performance limitations and to gain a competitive advantage,” said Eyal Waldman, Mellanox president and CEO. “ConnectX-5 enables our customers and partners to achieve higher performance, scalability and efficiency of their InfiniBand or Ethernet server and storage platforms. Our interconnect solutions, when combined with Intel, IBM, NVIDIA or ARM CPUs, allow users across the world to achieve significantly better return on investment from their IT infrastructure.”

Supermicro Rolls Out New Servers with Tesla P100 GPUs

“Our high-performance computing solutions enable deep learning, engineering, and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve the fastest time-to-results with maximum performance per watt, per square foot, and per dollar,” said Charles Liang, President and CEO of Supermicro. “With our latest innovations incorporating the new NVIDIA P100 processors in performance- and density-optimized 1U and 4U architectures with NVLink, our customers can accelerate their applications and innovations to address the most complex real-world problems.”

New Mellanox Networking Solutions Accelerate NVMe Over Fabrics

“We’ve seen the rapid evolution of SSDs and have been contributing to the NVMe over Fabrics standard and community drivers,” said Michael Kagan, CTO at Mellanox Technologies. “Because faster storage requires faster networks, we designed the highest speeds and most intelligent offloads into both our ConnectX-5 and BlueField families. This lets us connect many SSDs directly to the network at full speed, without the need to dedicate many CPU cores to managing data movement, and we provide a complete end-to-end networking solution with the highest-performing 25, 50, and 100GbE switches and cables as well.”

Mellanox Enhances RoCE Software to Simplify RDMA Deployments

Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and to deliver high-end performance over RoCE without requiring the network to be configured for lossless operation. This enables cloud, storage, and enterprise customers to deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency, and reducing cost.

HPC Advisory Council Announces 4th Annual RDMA Programming Competition

Today the HPC Advisory Council announced its Fourth Annual RDMA Programming Competition in China. Designed to support undergraduate curriculum and talent development, this unique hands-on competition furthers students' study, experience, and mastery of RDMA programming.