2019: The Year of PCI Express 4.0

Computer systems are about to get a whole lot faster. This year, starting at the high end of the market, a transition will begin toward systems based on PCI Express 4.0, which doubles the interconnect speed to 64 GB/sec over a 16-lane connection. Tim Miller, Vice President of Strategic Development for One Stop Systems, explores the speed gains and innovation expected to stem from the introduction of PCI Express 4.0.
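
As a rough sanity check on that 64 GB/sec figure, the sketch below (illustrative arithmetic, not from the article) shows how PCIe 4.0's 16 GT/s per-lane signaling rate and 128b/130b encoding combine across 16 lanes; the headline number counts both directions of the full-duplex link.

```python
# Rough PCIe bandwidth arithmetic (illustrative sketch, not from the article).
# PCIe 4.0 signals at 16 GT/s per lane; PCIe 3.0 at 8 GT/s per lane.
# Both use 128b/130b encoding, so usable payload is 128/130 of the raw rate.

def pcie_bandwidth_gbytes(gt_per_s, lanes=16, encoding=128 / 130, duplex=True):
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    per_direction = gt_per_s * encoding * lanes / 8  # bits per second -> bytes per second
    return per_direction * (2 if duplex else 1)

print(f"PCIe 3.0 x16: ~{pcie_bandwidth_gbytes(8):.0f} GB/s (both directions)")
print(f"PCIe 4.0 x16: ~{pcie_bandwidth_gbytes(16):.0f} GB/s (both directions)")
```

Counting both directions, PCIe 3.0 x16 works out to roughly 32 GB/sec and PCIe 4.0 x16 to roughly 63 GB/sec, which is the doubling (commonly rounded to 64 GB/sec) cited above.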

Lenovo’s Niagara Cluster Upgrade Makes It the Fastest Supercomputer in Canada

Today, Lenovo unveiled the addition of 1,500 ultra-dense Lenovo ThinkSystem SD530 high-performance compute nodes to Niagara, Canada’s most powerful research supercomputer. As demand for high-performance computing in quantitative research grows rapidly, the 4.6-petaflop supercomputer will help Canadian researchers achieve meaningful results in artificial intelligence, astrophysics, climate change, oceanic research, and other disciplines that rely on big data.

Supermicro Rolls Out New SuperBlade with EDR InfiniBand and Omni-Path

“Our new SuperBlade optimizes not just TCO but also initial acquisition cost, with industry-leading server density and maximum performance per watt, per square foot, and per dollar,” said Charles Liang, President and CEO of Supermicro. “Our 8U SuperBlade is also the first and only blade system that supports up to 205W Xeon CPUs, NVMe drives, and 100G EDR InfiniBand or Omni-Path switches, ensuring that this architecture is optimized for today and future-proofed for the next generation of technology advancements, including next-generation Intel Skylake processors.”

Video: Matching the Speed of SGI UV with Multi-rail LNet for Lustre

Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR InfiniBand driving to 200Gb/s, or 25GB/s, per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
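
As a back-of-the-envelope illustration of how multi-rail aggregation reaches that figure (a minimal sketch; the rail counts are assumptions, not from the talk), striping Lustre traffic across several 25 GB/s links scales the per-node bandwidth roughly linearly:

```python
# Illustrative multi-rail bandwidth arithmetic (assumed rail counts, not from the talk).
# Each rail is a 200 Gb/s (~25 GB/s) Omni-Path or InfiniBand link; LNet multi-rail
# lets a single Lustre client spread traffic across several such rails.

RAIL_GBITS = 200                 # link speed per rail in Gb/s
RAIL_GBYTES = RAIL_GBITS / 8     # ~25 GB/s per rail

for rails in (1, 2, 4, 8):
    print(f"{rails} rail(s): ~{rails * RAIL_GBYTES:.0f} GB/s aggregate")
```

Under these assumptions, four or more rails per UV node would be enough to cross the 100 GB/s mark quoted above, before accounting for protocol overhead.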

Slidecast: Announcing Mellanox ConnectX-5 100G InfiniBand Adapter

“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development, and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU, and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”

HPE and Mellanox: Advanced Technology Solutions for HPC

In this special guest feature, Scot Schultz from Mellanox and Terry Myers from HPE write that the two companies are collaborating to push the boundaries of high-performance computing. “So while every company must weigh the cost and commitment of upgrading its data center or HPC cluster to EDR, the benefits of such an upgrade go well beyond the increase in bandwidth. Only HPE solutions that include Mellanox end-to-end 100Gb/s EDR deliver efficiency, scalability, and overall system performance that result in maximum performance per TCO dollar.”