“With our latest innovations incorporating Intel Xeon Phi processors in a performance- and density-optimized Twin architecture and a 100Gbps OPA switch for high-bandwidth connectivity, our customers can accelerate their applications and innovations to address the most complex real-world problems.”
Today Mellanox announced that it has received the Award for Technology Innovation from Baidu, Inc. The award recognizes Mellanox’s achievements in designing and delivering a high-performance, low-latency interconnect solution that positively impacts Baidu’s business. Mellanox received the award at the 2016 Baidu Datacenter Partner Conference, Baidu’s annual gathering of key datacenter partners, and was the only interconnect provider recognized in this category.
In this Intel Chip Chat podcast, Alyson Klein and Charlie Wuischpard describe Intel’s investment in breaking down barriers to HPC adoption and moving innovation forward by thinking at the system level. “Charlie discusses the announcement of the Intel Xeon Phi processor, which is a foundational element of Intel Scalable System Framework (Intel SSF), as well as Intel Omni-Path Fabric. Charlie also explains that these enhancements will make supercomputing faster, more reliable, and more power efficient; Intel has achieved this by combining the capabilities of various technologies and optimizing the ways they work together.”
“We are pioneering the area of virtualized clusters, specifically with SR-IOV,” said Philip Papadopoulos, SDSC’s chief technical officer. “This will allow virtual sub-clusters to run applications over InfiniBand at near-native speeds – and that marks a huge step forward in HPC virtualization. In fact, a key part of this is virtualization for customized software stacks, which will lower the entry barrier for a wide range of researchers by letting them project an environment they already know onto Comet.”
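As a rough illustration of what SR-IOV buys here: inside a guest VM, the virtual function appears as an ordinary InfiniBand device, so unmodified verbs applications run as-is. The following minimal sketch (our illustration, not Comet’s actual software stack) uses the standard libibverbs API to list the devices a guest would see:

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;

    /* With SR-IOV, each guest is assigned a virtual function (VF) that
       shows up in this list like any physical InfiniBand adapter. */
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++)
        printf("device %d: %s\n", i, ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```

Compile with `-libverbs`; if the VF is correctly passed through, the guest sees a device name such as `mlx4_0` and can run RDMA workloads against it directly.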
Today Mellanox announced that the University of Tokyo has selected the company’s Switch-IB 2 EDR 100Gb/s InfiniBand switches and ConnectX-4 adapters to accelerate its new supercomputer for computational science.
Do you have new technology that could disrupt HPC in the near future? There’s still time to get free exhibit space at SC16 in November. “At the SC16 Emerging Technologies Showcase, we invite submissions from industry, academia, and government researchers.”
Today Mellanox announced the availability of new software drivers for RoCE (RDMA over Converged Ethernet). The new drivers are designed to simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and to deliver high-end performance with RoCE without requiring the network to be configured for lossless operation. This lets cloud, storage, and enterprise customers deploy RoCE more quickly and easily while accelerating application performance, improving infrastructure efficiency, and reducing cost.
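For readers unfamiliar with the programming model, RoCE is driven through the same rdma_cm/verbs APIs as InfiniBand; only the transport underneath changes. Below is a minimal sketch of the first steps of client-side connection setup; the address and port are placeholders, and error handling is abbreviated:

```c
#include <stdio.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    /* RoCE and InfiniBand share the rdma_cm connection-manager API,
       so this code is transport-agnostic. */
    struct rdma_event_channel *ec = rdma_create_event_channel();
    if (!ec) {
        perror("rdma_create_event_channel");
        return 1;
    }

    struct rdma_cm_id *id;
    if (rdma_create_id(ec, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    /* Placeholder destination; not a real service. */
    struct addrinfo *addr;
    if (getaddrinfo("192.0.2.10", "7471", NULL, &addr)) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }

    /* Initiates asynchronous address resolution; a full client would
       next wait on the event channel for RDMA_CM_EVENT_ADDR_RESOLVED,
       then resolve the route and connect. */
    if (rdma_resolve_addr(id, NULL, addr->ai_addr, 2000)) {
        perror("rdma_resolve_addr");
        return 1;
    }

    freeaddrinfo(addr);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}
```

Compile with `-lrdmacm`. Whether the fabric is lossless or, as with these new drivers, lossy Ethernet, is invisible at this layer.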
The Department of Energy’s Energy Sciences Network (ESnet) has published an interactive 3D timeline celebrating thirty years of service, letting viewers explore ESnet’s history and contributions.
“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market-presence perspective, but with Intel entering the HPI market, and new server architectures based on ARM and Power making a new claim on high-performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”
Today the InfiniBand Trade Association (IBTA) and the OpenFabrics Alliance (OFA) announced that 204 of the world’s most powerful supercomputers accelerate performance through InfiniBand and OpenFabrics Software (OFS). At 41 percent of the TOP500 list, the InfiniBand fabric, together with OFS open source software, continues to be the interconnect of choice for the leading supercomputing systems. Furthermore, InfiniBand and OFS systems outperform competing technologies in overall efficiency, scoring an 85 percent list average for compute efficiency, with one system achieving a remarkable 99.8 percent.
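As a quick check of the figures, the list share and the TOP500 notion of compute efficiency (measured performance over theoretical peak) work out as:

```latex
\frac{204}{500} = 40.8\% \approx 41\%,
\qquad
\text{efficiency} = \frac{R_{\mathrm{max}}}{R_{\mathrm{peak}}}
\;\Rightarrow\; 0.998 = 99.8\%.
```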