Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR Infiniband driving to 200Gb/s or 25GB/s per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
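Multi-rail LNet of this kind is configured per network interface. A minimal sketch of what such a setup might look like with the `lnetctl` utility (the `o2ib` network name and the `ib0`/`ib1` interface names are illustrative assumptions, not from the talk):

```shell
# Load LNet and enable runtime configuration (assumes a Lustre client
# with two InfiniBand HCAs, ib0 and ib1 -- hypothetical interface names).
modprobe lnet
lnetctl lnet configure

# Attach both interfaces to the same LNet network so traffic can be
# spread across both rails.
lnetctl net add --net o2ib --if ib0,ib1

# Verify that both NIDs are now active.
lnetctl net show --verbose
```

With both rails attached, LNet can stripe traffic between a single node and the Lustre servers across the two links, which is the aggregation effect the talk describes.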
Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand Switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.
Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and enable high-end performance using RoCE, without requiring the network to be configured for lossless operation. This enables cloud, storage, and enterprise customers to deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency, and reducing cost.
“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market presence perspective, but with Intel entering the HPC market, and new server architectures based on ARM and Power making a new claim on high performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”
The University of Melbourne has launched a new HPC service called Spartan that combines traditional HPC with a flexible cloud computing component. “Many research projects demand high speed interconnect,” said Bernard Meade, Head of Research Computer Services at the University of Melbourne. “Spartan can quickly scale into cloud-based virtual machines as needed, and expand the HPC system as user needs evolve. Traditional HPC systems are typically tailored for a few specific use cases, but in practice are used for a much wider variety of applications, resulting in less than optimal usage.”
This visualization from David Ellsworth and Tim Sandstrom at NASA/Ames shows the evolution of a giant molecular cloud over 700,000 years. It ran on the Pleiades supercomputer using the ORION2 code developed at the University of California, Berkeley. It depicts how gravitational collapse leads to the formation of an infrared dark cloud (IRDC) filament in which protostars begin to develop, shown by the bright orange luminosity along the main and surrounding filaments.
Today the RSC Group out of Russia announced a new generation of its high-performance, scalable, and energy-efficient RSC Tornado solution with direct liquid cooling, based on the newest multi-core Intel Xeon Phi processor (previously code-named Knights Landing), on the day of the product’s global launch. The new RSC solution offers improved physical and computing density and high energy efficiency, and provides stable operation in “hot water” mode with a +63 °C cooling agent temperature.
The University of Tokyo has chosen SGI to perform advanced data analysis and simulation within its Information Technology Center. The center is one of Japan’s major research and educational institutions for building, applying, and utilizing large computer systems. The new SGI system will begin operation July 1, 2016. “The SGI integrated supercomputer system for data analysis and simulation will support the needs of scientists in new fields such as genome analysis and deep learning in addition to scientists in traditional areas of computational science,” said Professor Hiroshi Nakamura, director of Information Technology Center, the University of Tokyo. “The new system will further ongoing research and contribute to the development of new academic fields that combine data analysis and computational science.”
Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer at the supercomputing center in Wuxi, China. The new number one supercomputer delivers 93 Petaflops (three times the performance of the previous top system), connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is the key to providing world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.
FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open software storage technologies including Lustre, Ceph, GlusterFS, and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand, or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in 5 racks. Multiple Scalable Units can be combined to scale up to 100s of petabytes and 10s of terabytes/sec of aggregate storage bandwidth.
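As a rough illustration of how those per-unit figures aggregate (using only the numbers quoted above; the linear-scaling assumption is mine, not Penguin’s):

```python
# Published per-Scalable-Unit figures for FrostByte (from the announcement).
UNIT_CAPACITY_PB = 15      # usable capacity per 5-rack Scalable Unit, PB
UNIT_BANDWIDTH_GBS = 500   # aggregate bandwidth per Scalable Unit, GB/s

def aggregate(units):
    """Capacity (PB) and bandwidth (GB/s) for N units, assuming linear scaling."""
    return units * UNIT_CAPACITY_PB, units * UNIT_BANDWIDTH_GBS

# 20 units would give 300 PB and 10,000 GB/s (10 TB/s), which lands in the
# "100s of petabytes and 10s of terabytes/sec" range quoted above.
capacity_pb, bandwidth_gbs = aggregate(20)
print(capacity_pb, bandwidth_gbs)
```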