Over at the VMware CTO Office, Josh Simons writes that HPC efforts are stepping up at the company, with new staffing and some exciting InfiniBand performance improvements that could help make virtualization a widespread technology for high performance computing.
Today Mellanox announced that the National Supercomputer Centre at Linköping University (NSC Sweden) and Meteo France have selected the company’s FDR 56Gb/s InfiniBand solutions for mission critical, high-performance computing applications.
“Achieving good scalability and performance on HPC scientific applications typically involves a good understanding of the workload through profile analysis, and comparing behaviors across different hardware to pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be shown to demonstrate various methods of profiling and analysis to determine the bottlenecks, and the effectiveness of tuning to improve application performance.”
“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox, as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”
“Major cloud providers and Web 2.0 companies have converged on RoCE to solve the challenges of running compute intensive applications and processing massive amounts of data in hyperscale networking environments,” said Barry Barnett, co-chair of the InfiniBand Trade Association. “The RoCEv2 standard enables multi-vendor, interoperable solutions delivering RDMA that spans hyperscale network environments. This in turn paves the way for broader adoption within enterprise environments in order to improve infrastructure efficiency and lower total cost of ownership.”
“Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X software libraries, developed by his research group, are currently being used by more than 2,150 organizations worldwide (in 72 countries).”
“MPI is in the national interest. The U.S. government tasks Lawrence Livermore National Laboratory with solving the nation’s and the world’s most difficult problems, ranging from global security, disaster response and planning, and drug discovery to energy production and climate change, to name a few. To meet this challenge, LLNL scientists utilize large-scale computer simulations on Linux clusters with InfiniBand networks. As such, MVAPICH serves a critical role in this effort. In this talk, I will highlight some of the recent work that MVAPICH has enabled.”