Today Boston Limited and CoolIT Systems sent out congratulations to the student team from EPCC at The University of Edinburgh for taking 1st place with the highest LINPACK score in the history of the Student Cluster Challenge at ISC’14.
In this slidecast, Dan Olds from the Student Cluster Competition Blog and Brian Sparks from the HPC Advisory Council break down the field at the ISC’14 Student Cluster Competition. So how did it all turn out? Team South Africa won the overall competition for the second straight year and Team EPCC set a record with an amazing 10.14 Teraflops on LINPACK.
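LINPACK records like EPCC’s 10.14 Teraflops are usually read against the cluster’s theoretical peak (Rpeak), and the efficiency is simple arithmetic. A back-of-envelope sketch is below; all hardware numbers are hypothetical placeholders, not Team EPCC’s actual configuration:

```python
# Back-of-envelope LINPACK efficiency: Rmax / Rpeak.
# All hardware figures below are hypothetical, for illustration only.

def cpu_peak_tflops(cores, ghz, flops_per_cycle):
    """Theoretical CPU peak: cores * clock (GHz) * DP FLOPs per cycle."""
    return cores * ghz * flops_per_cycle / 1000.0  # GFLOPS -> TFLOPS

def cluster_peak_tflops(cpu_tflops, n_gpus, gpu_tflops_each):
    """Rpeak for a hybrid cluster: CPU peak plus accelerator peak."""
    return cpu_tflops + n_gpus * gpu_tflops_each

cpu = cpu_peak_tflops(cores=96, ghz=2.7, flops_per_cycle=16)
rpeak = cluster_peak_tflops(cpu, n_gpus=8, gpu_tflops_each=1.43)
rmax = 10.14  # the measured LINPACK score reported in the article (TFLOPS)
print(f"Rpeak ~ {rpeak:.2f} TFLOPS, efficiency ~ {rmax / rpeak:.0%}")
```

With those illustrative numbers the hypothetical cluster runs LINPACK at roughly two-thirds of its theoretical peak, which is the kind of ratio these student teams tune power and problem size to maximize.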
In the course of this talk, Intel’s Raj Hazra unveils details of the Knights Landing architecture, including the new Omni Scale Fabric, an integrated, high-performance interconnect designed for CPU-to-CPU communications. “The industry ecosystem needs to work together to tackle challenges in system architecture, programming models, and energy efficiency – all while lowering the thresholds for broader user access and usability.”
“The advances in NV-RAM promise exascale-level throughput; however, building and implementing full solutions remains expensive. While the requirements on performance increase linearly, the requirements on capacity are ramping exponentially. Given that HDDs are increasing in capacity and speed, are these new drives good enough to fulfill these essential areas? Many in the industry suggest a combination of both is the right path, but that requires a software stack capable of handling multi-level storage transparently, and such software does not actually exist in the HPC world today.”
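The “multi-level storage” the quote calls for can be pictured as a write-back tier: hot data lands in a fast NVRAM-like tier and is demoted to an HDD-like capacity tier when the fast tier fills. The class below is a toy sketch of that policy, not any real HPC storage stack:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: an LRU-ordered fast tier that demotes cold
    objects to an unbounded capacity tier when over budget."""

    def __init__(self, fast_capacity_bytes):
        self.fast_capacity = fast_capacity_bytes
        self.fast = OrderedDict()   # key -> bytes, ordered by recency
        self.capacity_tier = {}     # stand-in for the HDD tier

    def put(self, key, data):
        self.fast[key] = data
        self.fast.move_to_end(key)  # mark as most recently used
        self._demote_cold()

    def get(self, key):
        if key in self.fast:                 # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        data = self.capacity_tier[key]       # miss: fetch from capacity tier
        self.put(key, data)                  # promote back to the fast tier
        return data

    def _demote_cold(self):
        used = sum(len(v) for v in self.fast.values())
        while used > self.fast_capacity:
            key, data = self.fast.popitem(last=False)  # evict LRU entry
            self.capacity_tier[key] = data
            used -= len(data)
```

The caller only ever sees `put` and `get`; which tier holds the bytes is an internal detail, which is the “transparently” part of the quoted requirement.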
“Infinite Memory Engine (IME) is a next-generation storage system being designed by DDN to meet the needs of exascale supercomputers. IME employs innovative methods to obtain superior performance and efficiency from non-volatile memory technologies. These methods include optimistic non-blocking writes and mechanisms for dynamically load-balancing output streams.”
In this video, Barry Davis from Intel describes the company’s new Omni Scale Fabric, an integrated, high-performance interconnect designed for CPU-to-CPU communications. “Intel is re-architecting the fundamental building block of HPC systems by integrating the Intel Omni Scale Fabric into Knights Landing, marking a significant inflection and milestone for the HPC industry,” said Charles Wuischpard, vice president and general manager of Workstations and HPC at Intel. “Knights Landing will be the first true many-core processor to address today’s memory and I/O performance challenges. It will allow programmers to leverage existing code and standard programming models to achieve significant performance gains on a wide set of applications. Its platform design, programming model and balanced performance make it the first viable step towards exascale.”
In this slidecast, Mike Black from Micron describes the company’s Hybrid Memory Cube technology for the next-generation Xeon Phi processor, codenamed Knights Landing. “Delivering 5X the sustained memory bandwidth of DDR4 at one-third the energy per bit and in half the footprint, the Knights Landing high-performance, on-package memory combines high-speed logic and DRAM layers into one optimized package that will set a new industry benchmark for performance and energy efficiency.”
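The two quoted ratios compose directly: 5X the bandwidth at one-third the energy per bit is a 15X improvement in bandwidth per unit energy. A quick check of that arithmetic; the DDR4 baseline figures below are hypothetical placeholders, not Micron or Intel specifications:

```python
# Illustrative arithmetic on the quoted ratios: 5x bandwidth, 1/3 energy/bit.
# The DDR4 baseline figures are hypothetical, chosen only to show the math.
ddr4_bandwidth_gbs = 90.0        # hypothetical sustained bandwidth baseline
ddr4_energy_pj_per_bit = 20.0    # hypothetical energy-per-bit baseline

hmc_bandwidth_gbs = 5.0 * ddr4_bandwidth_gbs
hmc_energy_pj_per_bit = ddr4_energy_pj_per_bit / 3.0

# Bandwidth per unit energy improves by the product of the two ratios.
improvement = (hmc_bandwidth_gbs / hmc_energy_pj_per_bit) / \
              (ddr4_bandwidth_gbs / ddr4_energy_pj_per_bit)
print(f"{improvement:.0f}x bandwidth per unit energy")  # 5 * 3 = 15
```

Note the result is independent of the baseline values chosen; only the quoted ratios matter.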