Today AliCloud signed a strategic partnership with Nvidia to provide the first GPU-based cloud HPC platform in China. Under the partnership, the two companies also plan to support emerging companies in HPC and deep learning with comprehensive GPU (Graphics Processing Unit) computing. “Innovative companies in deep learning are one of our most important user communities,” said Zhang Wensong, chief scientist of AliCloud. “Together with Nvidia, AliCloud will use its strength in public cloud computing and experiences accumulated in HPC to offer emerging companies in deep learning greater support in the future.”
Today the Brookhaven National Laboratory announced that it has expanded its Computational Science Initiative (CSI). The programs within this initiative leverage computational science, computer science, and mathematics expertise and investments across multiple research areas at the Laboratory, including the flagship facilities that attract thousands of scientific users each year, further establishing Brookhaven as a leader in tackling the “big data” challenges at experimental facilities and expanding the frontiers of scientific discovery.
In this video from SC15, Karl Schulz from Intel and Michael Miller from SUSE describe the all-new OpenHPC Community. “The use of open source software is central to HPC, but lack of a unified community across key stakeholders – academic institutions, workload management companies, software vendors, computing leaders – has caused duplication of effort and has increased the barrier to entry,” said Jim Zemlin, executive director, The Linux Foundation. “OpenHPC will provide a neutral forum to develop one open source framework that satisfies a diverse set of cluster environment use-cases.”
In this video from SC15, Intel’s Barry Davis and Scott Misage from Hewlett Packard Enterprise describe how their two companies are driving HPC innovation with the Intel Scalable System Framework and Intel Omni-Path interconnect technologies. “As a result of a new alliance with Intel, HPE is offering its HPC Solutions Framework based on HPE Apollo servers, which are specialized for HPC and now optimized to support industry-specific software applications from leading independent software vendors. These solutions will dramatically simplify the deployment of HPC for customers in industries such as oil and gas, life sciences and financial services.”
Today the Linux Foundation announced plans to form the OpenHPC Collaborative Project. This project will provide a new, open source framework to support the world’s most sophisticated HPC environments. “The use of open source software is central to HPC, but lack of a unified community across key stakeholders – academic institutions, workload management companies, software vendors, computing leaders – has caused duplication of effort and has increased the barrier to entry,” said Jim Zemlin, executive director, The Linux Foundation. “OpenHPC will provide a neutral forum to develop one open source framework that satisfies a diverse set of cluster environment use-cases.”
Today the Ethernet Alliance shared details of its recent 25Gb/s technical feasibility event in New Hampshire. With 25Gb/s technologies being driven in part by hyperscale data center and cloud services market needs, the productive event drew industry-wide support and participation. The event produced promising results, with a high percentage of tests exceeding expected requirements of the proposed IEEE 25Gb/s standard, and achieving a success rate of greater than 86 percent for all test cases performed.
“The UCX Unified Communication X project is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for data-centric and high-performance applications. At the core of the UCX project are the combined features, ideas, and concepts of industry-leading technologies including MXM, PAMI and UCCS. Mellanox Technologies has contributed its MXM technology, which provides enhancements to parallel communication.”
In this video (with transcript) from the 2015 HPC User Forum in Broomfield, Bob Sorenson from IDC moderates a User Agency panel discussion on the NSCI initiative. “You all have seen that usable statement inside the NSCI, and we are all about trying to figure out how to make usable machines. That is a key critical component as far as we’re concerned. But the thing that I think we’re really seeing, we talked about the fact that single-thread performance is not increasing, and so what we’re doing is we’re simply increasing the parallelism and then the physics limitations, if you will, of how you cool and distribute power among the parts that are there. That really is leading to a paradigm shift from something that’s based on how fast you can crunch the numbers to how fast you can feed the chips with data. It’s really that paradigm shift, I think, more than anything else that’s really going to change the way that we have to do our computing.”
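The shift the panel describes, from being limited by how fast chips crunch numbers to how fast data can be fed to them, is often summarized with the roofline model. Here is a minimal sketch of that idea; the peak-compute and memory-bandwidth figures are illustrative assumptions, not numbers from the panel:

```python
def attainable_gflops(arithmetic_intensity, peak_gflops, mem_bw_gbs):
    """Roofline model: delivered performance is capped either by raw
    compute (peak_gflops) or by the rate data can be fed to the chip
    (memory bandwidth times flops performed per byte moved)."""
    return min(peak_gflops, mem_bw_gbs * arithmetic_intensity)

# Hypothetical chip: 1000 GFLOP/s peak compute, 100 GB/s memory bandwidth.
# A kernel doing 1 flop per byte is memory-bound:
print(attainable_gflops(1.0, 1000, 100))   # 100 GFLOP/s (limited by data feed)
# Only at high flops-per-byte does compute become the limit:
print(attainable_gflops(16.0, 1000, 100))  # 1000 GFLOP/s (compute-bound)
```

Under these assumed numbers, any kernel below 10 flops per byte is bound by data movement rather than arithmetic, which is the paradigm shift the panelists are pointing to.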
Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute will combine decades of expertise to help American industry and businesses expand use of high performance computing under a recently signed memorandum of understanding.
Today Cray announced a world record by scaling ANSYS Fluent to 129,000 compute cores. “Less than a year ago, ANSYS announced Fluent had scaled to 36,000 cores with the help of NCSA. While the nearly 4x increase over the previous record is significant, it tells only part of the story. ANSYS has broadened the scope of simulations allowing for applicability to a much broader set of real-world problems and products than any other company offers.”
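The headline number is core count, but scaling records like this one are usually judged by parallel efficiency: how much of the ideal speedup from extra cores is actually realized. A minimal sketch of that calculation; the timings below are hypothetical, not figures from the Cray/ANSYS runs:

```python
def parallel_efficiency(t_base, cores_base, t_scaled, cores_scaled):
    """Strong-scaling efficiency: the measured speedup divided by the
    ideal speedup implied by the increase in core count."""
    speedup = t_base / t_scaled
    ideal = cores_scaled / cores_base
    return speedup / ideal

# Hypothetical timings for a job run at the two core counts in the
# article: 36,000 cores (previous record) vs. 129,000 cores.
eff = parallel_efficiency(400.0, 36_000, 125.0, 129_000)
print(f"{eff:.0%}")  # ~89% of ideal speedup under these assumed timings
```

An efficiency near 1.0 means the application is still scaling nearly linearly; values well below that signal communication or load-imbalance overheads dominating at scale.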