Today Mellanox announced it has received the Award for Technology Innovation from Baidu, Inc. The award recognizes Mellanox’s achievements in designing and delivering a high-performance, low-latency interconnect solution that positively impacts Baidu’s business. Mellanox Technologies received the award at the 2016 Baidu Datacenter Partner Conference, Baidu’s annual gathering of key datacenter partners, and was the only interconnect provider recognized in this category.
“We are pioneering the area of virtualized clusters, specifically with SR-IOV,” said Philip Papadopoulos, SDSC’s chief technical officer. “This will allow virtual sub-clusters to run applications over InfiniBand at near-native speeds – and that marks a huge step forward in HPC virtualization. In fact, a key part of this is virtualization for customized software stacks, which will lower the entry barrier for a wide range of researchers by letting them project an environment they already know onto Comet.”
Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.
Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and enable high-end performance using RoCE, without requiring the network to be configured for lossless operation. This enables cloud, storage, and enterprise customers to deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency and reducing cost.
The University of Melbourne has launched a new HPC service called Spartan that combines traditional HPC with a flexible cloud computing component. “Many research projects demand high-speed interconnect,” said Bernard Meade, Head of Research Computer Services at the University of Melbourne. “Spartan can quickly scale into cloud-based virtual machines as needed, and expand the HPC system as user needs evolve. Traditional HPC systems are typically tailored for a few specific use cases, but in practice are used for a much wider variety of applications, resulting in less than optimal usage.”
Today Russia’s RSC Group announced, on the day of the product’s global launch, a new generation of its high-performance, scalable, and energy-efficient RSC Tornado solution with direct liquid cooling, based on the newest many-core Intel Xeon Phi processor (previously code-named Knights Landing). The new RSC solution delivers improved physical and computing density and high energy efficiency, and operates reliably in “hot water” mode with a cooling-agent temperature of +63°C.
In this lively panel discussion from ISC 2016, moderator Addison Snell asks visionary leaders from the supercomputing community to comment on forward-looking trends that will shape the industry this year and beyond.
Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer at the supercomputing center in Wuxi, China. The new number-one supercomputer delivers 93 Petaflops (three times the performance of the previous top system), connecting nearly 41 thousand nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to providing world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.
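A quick back-of-the-envelope check puts those headline figures in perspective. The sketch below uses the publicly reported node and core counts for the Wuxi system (40,960 nodes and 10,649,600 cores, consistent with the article’s “nearly 41 thousand nodes and more than ten million CPU cores”) to estimate per-node and per-core throughput:

```python
# Rough per-node and per-core throughput from the quoted figures.
# Node/core counts are the publicly reported specs, assumed here
# to match the article's rounded numbers.
PFLOPS = 93
NODES = 40_960           # "nearly 41 thousand nodes"
CORES = 10_649_600       # "more than ten million CPU cores"

flops_total = PFLOPS * 1e15
per_node_tflops = flops_total / NODES / 1e12
per_core_gflops = flops_total / CORES / 1e9

print(f"~{per_node_tflops:.2f} TFLOPS per node")   # ~2.27
print(f"~{per_core_gflops:.2f} GFLOPS per core")   # ~8.73
```

At under 10 GFLOPS per core, the system’s performance comes from sheer core count and interconnect scalability rather than fast individual cores, which is why the offload-capable fabric matters at this scale.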
FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open storage technologies including Lustre, Ceph, GlusterFS, and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand, or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in 5 racks. Multiple Scalable Units can be combined to scale up to 100s of petabytes and 10s of terabytes/sec of aggregate storage bandwidth.
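The Scalable Unit figures above lend themselves to simple capacity planning. This is a minimal sizing sketch, assuming (as the article implies but does not state) that capacity and bandwidth scale linearly as Scalable Units are added; the function name and example targets are illustrative, not from Penguin Computing documentation:

```python
from math import ceil

# FrostByte figures quoted in the article
SU_RACKS = 5      # racks per Scalable Unit
SU_PB = 15        # usable capacity per Scalable Unit, PB
SU_GBPS = 500     # aggregate bandwidth per Scalable Unit, GB/s

def scalable_units_for(capacity_pb: float, bandwidth_gbps: float) -> int:
    """Scalable Units needed to meet both a capacity and a bandwidth
    target, assuming linear scaling across units (an assumption)."""
    return max(ceil(capacity_pb / SU_PB), ceil(bandwidth_gbps / SU_GBPS))

# Hypothetical target: 100 PB of storage at 3 TB/s aggregate bandwidth
n = scalable_units_for(100, 3000)
print(f"{n} Scalable Units = {n * SU_RACKS} racks")  # 7 Scalable Units = 35 racks
```

Note that whichever resource is scarcer drives the unit count; in the example the 100 PB capacity target (7 units) dominates the 3 TB/s bandwidth target (6 units).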
“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU, and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”