Slidecast: Announcing Mellanox ConnectX-5 100G InfiniBand Adapter

“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU, and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”

HPE and Mellanox: Advanced Technology Solutions for HPC

In this special guest feature, Scot Schultz from Mellanox and Terry Myers from HPE write that the two companies are collaborating to push the boundaries of high performance computing. “So while every company must weigh the cost and commitment of upgrading its data center or HPC cluster to EDR, the benefits of such an upgrade go well beyond the increase in bandwidth. Only HPE solutions that include Mellanox end-to-end 100Gb/s EDR deliver efficiency, scalability, and overall system performance that results in maximum performance per TCO dollar.”

Mellanox Introduces BlueField SoC Programmable Processors

Today Mellanox announced the BlueField family of programmable processors for networking and storage applications. “As a networking offload co-processor, BlueField will complement the host processor by performing wire-speed packet processing in-line with the network I/O, freeing the host processor to deliver more virtual network functions (VNFs),” said Linley Gwennap, principal analyst at the Linley Group. “Network offload results in better rack density, lower overall power consumption, and deterministic networking performance.”

Cowboy Supercomputer Powers Research at Oklahoma State

In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous, or too costly. Another thing it’s used for involves data: you may remember the Human Genome Project took nearly a decade and cost a billion dollars; these sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy. It was funded by a 2011 National Science Foundation grant, and it has been serving us very well.”

Mellanox Powers OpenStack Cloud at University of Cambridge

Today Mellanox announced that the University of Cambridge has selected the Mellanox end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs, and LinkX cables, for its OpenStack-based scientific research cloud. The win expands Mellanox’s existing InfiniBand footprint at the university and empowers it to realize its vision of HPC and cloud convergence through high-speed cloud networks at 25/50/100Gb/s throughput.

Job of the Week: HPC Application Performance Engineer at Mellanox

Mellanox is seeking an HPC Application Performance Engineer in our Job of the Week. “Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will work primarily with marketing and engineering to execute low-level and application-level benchmarks focused on High Performance Computing (HPC) open source and ISV applications, in addition to providing software and hardware optimization recommendations. This individual will also work closely with hardware and software partners and customers to benchmark Mellanox products under different system configurations and workloads.”

Video: UPC++ Parallel Programming Extension

In this video from the 2016 OpenFabrics Workshop, Yili Zheng from LBNL presents: UPC++. “UPC++ is a parallel programming extension for developing C++ applications with the partitioned global address space (PGAS) model. UPC++ has demonstrated excellent performance and scalability with applications and benchmarks such as global seismic tomography, Hartree-Fock, the BoxLib AMR framework, and more. In this talk, we will give an overview of UPC++ and discuss the opportunities and challenges of leveraging modern network features.”
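
To make the PGAS model concrete, here is a minimal sketch written against the current UPC++ API (which differs in detail from the 2016-era prototype shown in the talk): each rank allocates an integer in its globally addressable shared segment, rank 0’s global pointer is broadcast to everyone, and each rank then reads rank 0’s value with a one-sided get.

    // Minimal UPC++ PGAS sketch: shared-segment allocation, pointer
    // broadcast, and a one-sided remote read. Illustrative only.
    #include <upcxx/upcxx.hpp>
    #include <iostream>

    int main() {
        upcxx::init();

        // Each rank allocates one int in its shared segment,
        // initialized to its own rank number.
        upcxx::global_ptr<int> mine = upcxx::new_<int>(upcxx::rank_me());

        // Collectively share rank 0's global pointer with all ranks.
        upcxx::global_ptr<int> root = upcxx::broadcast(mine, 0).wait();

        // One-sided remote get: no matching receive code runs on rank 0.
        int value = upcxx::rget(root).wait();
        std::cout << "rank " << upcxx::rank_me() << " of " << upcxx::rank_n()
                  << " read " << value << " from rank 0\n";

        upcxx::barrier();        // ensure all gets complete before freeing
        upcxx::delete_(mine);    // each rank frees its own allocation

        upcxx::finalize();
        return 0;
    }

Built with the upcxx compiler wrapper and launched with upcxx-run, every rank prints the 0 it fetched from rank 0, with no receive-side code on rank 0 at all; that asymmetry is the essence of the one-sided PGAS style the talk describes.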

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
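
The payoff of offloading is easiest to see in the classic overlap pattern. The minimal sketch below uses only standard MPI non-blocking calls, nothing vendor-specific: the ranks post a ring exchange and keep computing; on an offload-capable interconnect the NIC can progress the transfer in hardware while that compute loop runs, rather than consuming host CPU cycles.

    // Minimal MPI sketch of communication/computation overlap: the
    // pattern a smart offloading NIC accelerates. Run with 2+ ranks.
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const int N = 1 << 20;
        std::vector<double> sendbuf(N, rank), recvbuf(N, 0.0);
        int peer = (rank + 1) % nranks;            // send to next rank
        int src  = (rank - 1 + nranks) % nranks;   // receive from previous

        // Post the ring exchange, then return immediately to useful work.
        MPI_Request reqs[2];
        MPI_Irecv(recvbuf.data(), N, MPI_DOUBLE, src, 0,
                  MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf.data(), N, MPI_DOUBLE, peer, 0,
                  MPI_COMM_WORLD, &reqs[1]);

        // Computation that does not touch the buffers overlaps the
        // transfer; with hardware offload the NIC moves the bytes while
        // this loop occupies the CPU.
        double local = 0.0;
        for (int i = 0; i < N; ++i) local += static_cast<double>(i) * 1e-9;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        std::printf("rank %d: local=%f, recv[0]=%f\n",
                    rank, local, recvbuf[0]);

        MPI_Finalize();
        return 0;
    }

The same code runs on any MPI implementation; what an offloading architecture changes is that the data actually moves during the compute loop instead of inside MPI_Waitall, which is the CPU-overhead reduction the slidecast argues for.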

Call for Participation: hpc-ch Forum on Intra- and Inter-Site Networking

The hpc-ch Forum on Intra- and Inter-Site Networking has posted its Call for Participation. Hosted by the University of Zurich, the event will take place Thursday, May 19, 2016.

Using High Performance Interconnects in Dynamic Environments

“Over the last few years, the OFA community has shown the potential of using high performance networks (InfiniBand) to boost the performance of virtualized cloud environments; however, network reconfiguration challenges remain. In this session we present the work we have been doing on InfiniBand subnet management and routing in the context of dynamic cloud environments. This work includes, but is not limited to, techniques to provide better management scalability when virtual machines are live-migrated, tenant network isolation in multi-tenant environments, and fast performance-driven network reconfiguration.”