The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (particularly deep learning) technologies. Indeed, machine learning has been drastically accelerated through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.
Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before any analysis can take place. The ability to analyze data everywhere means that every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
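The idea of the interconnect acting as a “CPU” can be made concrete with a toy model of in-network reduction, in the spirit of switch-based aggregation protocols. This is a minimal illustrative sketch, not any vendor’s implementation: nested lists stand in for switch levels, and each interior “switch” combines partial results in flight so the root receives one aggregate instead of one message per compute node.

```python
# Toy model of in-network reduction. Leaves are compute nodes holding
# partial sums; nested lists model levels of switches in the fabric.
def in_network_reduce(tree):
    if isinstance(tree, int):
        # A leaf: one compute node's partial result.
        return tree
    # A "switch": combine the values arriving from its children,
    # forwarding a single aggregate upward instead of every message.
    return sum(in_network_reduce(child) for child in tree)

# Four compute nodes behind two leaf switches, plus one more level.
fabric = [[1, 2], [3, [4, 5]]]
print(in_network_reduce(fabric))  # → 15
```

With reduction happening at every switch hop, the data volume reaching the root shrinks at each level of the tree, which is precisely why in-network computing scales better than funneling all raw data to one CPU.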
The move to network offloading is the first step toward co-designed systems. A large amount of CPU overhead is required to service the huge number of packets generated at modern data rates, and this overhead can significantly reduce network performance. Offloading network processing to the network interface card removes this bottleneck, along with several others.
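The benefit of offloading is overlap: the host keeps computing while another engine handles per-packet work. The sketch below is a loose analogy only, assuming nothing about real NIC firmware — a background thread plays the role of a smart NIC that checksums incoming packets while the main thread continues with application work.

```python
import queue
import threading
import zlib

def nic_offload_worker(rx_queue, results):
    """Stands in for a smart NIC: processes packets off the host's
    critical path until it receives a None sentinel."""
    while True:
        pkt = rx_queue.get()
        if pkt is None:
            break
        results.append(zlib.crc32(pkt))

rx_queue = queue.Queue()
checksums = []
nic = threading.Thread(target=nic_offload_worker, args=(rx_queue, checksums))
nic.start()

# The "host CPU" hands packets off and keeps computing instead of
# stalling on per-packet protocol processing.
packets = [bytes([i]) * 1500 for i in range(8)]
for pkt in packets:
    rx_queue.put(pkt)

host_result = sum(range(1_000_000))  # application work proceeds concurrently

rx_queue.put(None)
nic.join()
print(len(checksums))  # → 8
```

In an onload design, the checksum loop and the application sum would compete for the same core; with the work offloaded, the host’s time goes entirely to the application.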
Co-design and offloading are important tools for achieving Exascale computing. Application developers and system designers can take advantage of network offload and emerging co-design protocols to accelerate their current applications. Applying basic co-design and offloading methods to smaller-scale systems can deliver more performance from less hardware, resulting in lower cost and higher throughput. Learn more by downloading this guide.
“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-designed systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day.” A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have actually been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods go deeper into cluster components than was previously possible, extending from the local cluster nodes into the “computing network.”