Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
A new world record was set by the Huazhong University team at the Student Cluster Competition at ISC 2016. Using Nvidia Tesla K80 GPUs, the team recorded 12.56 teraflops on the LINPACK benchmark while staying within a 3 kW power consumption limit.
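For context, those figures work out to roughly 4.2 gigaflops per watt at the 3 kW cap. The numbers below come from the record run above; the efficiency metric itself is only an illustrative back-of-envelope calculation, not part of the competition scoring:

```python
# Power efficiency of the Huazhong University LINPACK run
# (12.56 TFLOPS within a 3 kW limit, per the article).
linpack_flops = 12.56e12   # sustained LINPACK performance in FLOPS
power_watts = 3_000        # competition power cap in watts

gflops_per_watt = linpack_flops / power_watts / 1e9
print(f"{gflops_per_watt:.2f} GFLOPS/W")  # -> 4.19 GFLOPS/W
```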
In this video from ISC 2016, Marc Hamilton from Nvidia describes the new DGX-1 Deep Learning Supercomputer. “The NVIDIA DGX-1 is the world’s first purpose-built system for deep learning with fully integrated hardware and software that can be deployed quickly and easily. Its revolutionary performance significantly accelerates training time, making the NVIDIA DGX-1 the world’s first deep learning supercomputer in a box.”
At ISC 2016, Supermicro debuted its latest innovations in HPC architectures and technologies, including: a 2U 4-node server supporting the new Intel Xeon Phi processors (formerly code-named Knights Landing) with an integrated or external Intel Omni-Path fabric option, together with an associated 4U/tower development workstation; a 1U SuperServer supporting up to four GPUs, including the next-generation P100 GPU; a Lustre high-performance file system; and a 1U 48-port top-of-rack network switch with 100 Gbps Intel Omni-Path Architecture (OPA). Together these form an HPC cluster solution offering excellent bandwidth, latency, and message rate that is highly scalable and easily serviceable.
In this video from ISC 2016, Steve Branton from Asetek describes the company’s innovative liquid cooling solutions for HPC. “Because liquid is 4,000 times better at storing and transferring heat than air, Asetek’s solutions provide immediate and measurable benefits to large and small data centers alike. RackCDU D2C is a ‘free cooling’ solution that captures between 60% and 80% of server heat, reducing data center cooling cost by over 50% and allowing 2.5x-5x increases in data center server density. D2C removes heat from CPUs, GPUs, and memory modules within servers using water as hot as 40°C (104°F), eliminating the need for chilling to cool these components.”
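To see what those capture rates mean in practice, the quick sketch below estimates how much heat is left for traditional air cooling at Asetek’s quoted 60–80% range. The 100 kW rack-row figure is a hypothetical example, not vendor data:

```python
# Back-of-envelope: server heat remaining for air cooling when
# direct-to-chip liquid cooling captures a given fraction of it.
def residual_air_load_kw(server_power_kw: float, capture_fraction: float) -> float:
    """Heat (kW) left for air cooling after liquid capture."""
    return server_power_kw * (1.0 - capture_fraction)

# A hypothetical 100 kW row of servers at the quoted capture range:
for f in (0.60, 0.80):
    print(f"{f:.0%} capture -> {residual_air_load_kw(100, f):.0f} kW air-cooled")
```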
Over at CSCS, Simone Ulmer writes that particle physicists using the Piz Daint supercomputer have determined what is known as the scalar quark content of the proton. The research will aid efforts to detect and study dark matter.
Today system integrator Nor-Tech disclosed that the company is working closely with some of the world’s top researchers and innovators to develop, build, deploy and support simulation clusters. “This has been an extremely exciting year for us that has allowed us to collaborate on innovations that promise to be groundbreaking and also discoveries that are changing the way we look at the universe,” said Nor-Tech President and CEO David Bollig.
Today Exxact Corporation announced its planned production of HPC solutions using the NVIDIA Tesla P100 GPU accelerator for PCIe. Exxact will be integrating the Tesla P100 into its Quantum family of servers, which are currently offered with either NVIDIA Tesla M40 or K80 GPUs. The NVIDIA Tesla P100 for PCIe-based servers was introduced at the recent 2016 International Supercomputing Conference and is anticipated to deliver massive leaps in performance and value compared with CPU-based systems. NVIDIA stated the new Tesla P100 will help meet the unprecedented computational demands placed on modern data centers.
The University of Tokyo has chosen SGI to perform advanced data analysis and simulation within its Information Technology Center. The center is one of Japan’s major research and educational institutions for building, applying, and utilizing large computer systems. The new SGI system will begin operation July 1, 2016. “The SGI integrated supercomputer system for data analysis and simulation will support the needs of scientists in new fields such as genome analysis and deep learning, in addition to scientists in traditional areas of computational science,” said Professor Hiroshi Nakamura, director of the Information Technology Center at the University of Tokyo. “The new system will further ongoing research and contribute to the development of new academic fields that combine data analysis and computational science.”