Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand Switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.
Do you have new technology that could disrupt HPC in the near future? There’s still time to get free exhibit space at SC16 in November. “At the SC16 Emerging Technologies Showcase, we invite submissions from industry, academia, and government researchers.”
Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and enable high-end performance using RoCE, without requiring the network to be configured for lossless operation. This enables cloud, storage, and enterprise customers to deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency and reducing cost.
The Department of Energy’s Energy Sciences Network (ESnet) has published a 3D timeline celebrating thirty years of service. With the launch of an interactive timeline, viewers can explore ESnet’s history and contributions.
“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market presence perspective. But with Intel entering the HPC market, and new server architectures based on ARM and Power making a new claim on high performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”
Today the InfiniBand Trade Association (IBTA) and the OpenFabrics Alliance (OFA) announced that 204 of the world’s most powerful supercomputers accelerate performance through InfiniBand and OpenFabrics Software (OFS). At 41 percent of the TOP500 List, the InfiniBand fabric, together with OFS open source software, continues to be the interconnect of choice for the leading supercomputing systems. Furthermore, InfiniBand and OFS systems outperform competing technologies in overall efficiency, scoring an 85 percent list average for compute efficiency—with one system achieving a remarkable 99.8 percent.
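For context, TOP500 compute efficiency is the ratio of sustained LINPACK performance (Rmax) to theoretical peak performance (Rpeak). A minimal sketch of that calculation, using hypothetical performance values (not figures from the list itself):

```python
def compute_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Return TOP500-style compute efficiency as a percentage.

    Efficiency = Rmax / Rpeak, where Rmax is sustained LINPACK
    performance and Rpeak is theoretical peak performance.
    """
    return 100.0 * rmax_tflops / rpeak_tflops

# Hypothetical system: 850 TFlops sustained out of 1000 TFlops peak
print(round(compute_efficiency(850.0, 1000.0), 1))  # 85.0
```

A system scoring 99.8 percent, as cited above, sustains nearly its entire theoretical peak on the LINPACK benchmark.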
The University of Melbourne has launched a new HPC service called Spartan that combines traditional HPC with a flexible cloud computing component. “Many research projects demand high speed interconnect,” said Bernard Meade, Head of Research Computer Services at the University of Melbourne. “Spartan can quickly scale into cloud based virtual machines as needed, and expand the HPC system as user needs evolve. Traditional HPC systems are typically tailored for a few specific use cases, but in practice are used for a much wider variety of applications, resulting in less than optimal usage.”
This visualization from David Ellsworth and Tim Sandstrom at NASA/Ames shows the evolution of a giant molecular cloud over 700,000 years. It ran on the Pleiades supercomputer using the ORION2 code developed at the University of California, Berkeley. It depicts how gravitational collapse leads to the formation of an infrared dark cloud (IRDC) filament in which protostars begin to develop, shown by the bright orange luminosity along the main and surrounding filaments.
In this special guest feature, Jane Glasser writes that Saudi Arabia has moved into the global supercomputing top ten with Shaheen II, a 200,000-core behemoth that’s taming global warming, earthquakes, and more. “With 5.536 Pflops of sustained LINPACK performance, Shaheen II is the largest and most powerful supercomputer in the Middle East and the tenth fastest supercomputer in the world, according to the June 2016 TOP500 list.”
In this lively panel discussion from ISC 2016, moderator Addison Snell asks visionary leaders from the supercomputing community to comment on forward-looking trends that will shape the industry this year and beyond.