“The University of Colorado, Boulder supports researchers’ large-scale computational needs with their newly optimized high performance computing system, Summit. Summit is designed with advanced computation, network, and storage architectures to deliver accelerated results for a large range of HPC and big data applications. Summit is built on Dell EMC PowerEdge Servers, Intel Omni-Path Architecture Fabric and Intel Xeon Phi Knights Landing processors.”
“With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount.”
In this video, Maurizio Davini from the University of Pisa describes how the University works with Dell EMC and Intel to test new technologies and to integrate and optimize HPC systems with Intel HPC Orchestrator software. “We believe these two companies are at the forefront of innovation in high performance computing,” said University CTO Davini. “We also share a common goal of simplifying HPC to support a broader range of users.”
The University of Connecticut has partnered with Dell EMC and Intel to create a high performance computing cluster that students and faculty can use in their research. With this HPC Cluster, UConn researchers can solve problems that are computationally intensive or involve massive amounts of data in a matter of days or hours, instead of weeks. The HPC cluster operated on the Storrs campus features 6,000 CPU cores, a high-speed fabric interconnect, and a parallel file system. Since 2011, it has been used by over 500 researchers, from each of the university’s schools and colleges, for over 40 million hours of scientific computation.
The TOP500 list is a very good proxy for how different interconnect technologies are being adopted for the most demanding workloads, which is a useful leading indicator for enterprise adoption. The essential takeaway is that the world’s leading and most esoteric systems are currently dominated by vendor specific technologies. The Open Fabrics Alliance (OFA) will be increasingly important in the coming years as a forum to bring together the leading high performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack.
In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating Cryo-EM. “Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network, which is part of a complex communication system.”
Today, high performance interconnects can be divided into three categories: Ethernet, InfiniBand, and vendor-specific interconnects. Ethernet is established as the dominant low-level interconnect standard for mainstream commercial computing requirements. InfiniBand originated in 1999 specifically to address workload requirements that were not adequately served by Ethernet, while vendor-specific technologies frequently have a time-to-market (and therefore performance) advantage over standardized offerings.
In this video from SC16, Garima Kochhar from Dell EMC describes the CryoEM Demo on the Dell PowerEdge C6320 rack server powered by Intel Xeon and Intel Xeon Phi. “This demo presents performance results for the 2D alignment and 2D classification phases of the Cryo-electron microscopy (Cryo-EM) data processing workflow using the new Intel Knights Landing architecture, and compares these results to the performance of the Intel Xeon E5-2600 v4 family.”
A survey conducted by insideHPC and Gabriel Consulting in Q4 of 2015 indicated that nearly 45% of HPC and large enterprise customers would spend more on system interconnects and I/O in 2016, with 40% maintaining spending at the same level as the prior year. In manufacturing, the largest subset at approximately one third of respondents, over 60% planned to spend more and almost 30% planned to maintain the same level of spending going into 2016, underscoring the critical value of high performance interconnects.
“Learn how you can cost effectively accelerate innovation with a secure private cloud environment, hosted and managed by Dell and R Systems Bare Metal Solution. The HPC infrastructure consists of Dell processing, power, storage and memory capacity. R Systems provides white-glove HPC services with custom solutions in your choice of locations, including their company-owned data centers in Champaign, Illinois located at the University of Illinois’ Research Park. R Systems offers complete Dell hardware-based systems, as well as custom engagements/configurations based on specific business needs.”