Today DDN announced the winners of the 2015 Pioneer User Awards. The awards recognize and celebrate visionary individuals and organizations that are embracing leading-edge high performance computing technologies to shatter long-standing technical limits and to accelerate business results and scientific insights.
Dr. Lewey Anton reports on who’s moving on up in High Performance Computing. Familiar names in this edition include: Sharan Kalwani, John Lee, Jay Muelhoefer, Brian Sparks, and Ed Turkel. And be sure to let us know of HPC folks in new positions!
Penguin Computing has renewed as a Platinum Member of Open Compute Project (OCP). Leading with the OCP-based Tundra Extreme Scale (ES) Series, Penguin was recently awarded the CTS-1 contract with the NNSA to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories.
NREL in Golden, Colorado is seeking an HPC Algorithm and Software Engineer for Energy Systems in our Job of the Week.
“In this presentation, we will discuss several important goals and requirements of portable standards in the context of OpenMP. We will also encourage audience participation as we discuss and formulate the current state-of-the-art in this area and our hopes and goals for the future. We will start by describing the current and next generation architectures at NERSC and OLCF and explain how the differences require different general programming paradigms to facilitate high-performance implementations.”
Today NetApp announced it has entered into a definitive agreement to acquire SolidFire for $870 million in cash. Founded in 2010, SolidFire is a market leader in all-flash storage systems built for the next-generation data center where simple scaling, set-and-forget management, assured performance and multi-tenancy, and cloud economic models are driving new market growth.
Thomas Schulthess from CSCS presented this talk at the Nvidia booth at SC15. “On October 1, 2015, ‘Piz Kesch,’ a Cray CS-Storm system with NVIDIA K80 GPUs, became operational at CSCS on behalf of MeteoSwiss. In this talk, we will discuss the hardware-software co-design project behind this most cost- and energy-efficient system for numerical weather prediction.”
In this video from SC15, Dr. Eng Lim Goh from SGI describes how the company is embracing new HPC technology trends such as new memory hierarchies. With the convergence of HPC and Big Data as a growing trend, SGI envisions a “Zero Copy Architecture” that would bring together a traditional supercomputer with a Big Data analytics machine in a way that would not require users to move their data between systems.
“This presentation will describe how OpenMP is used at NERSC. NERSC is the primary supercomputing facility for the Office of Science in the US Department of Energy (DOE). Our next production system will be an Intel Xeon Phi Knights Landing (KNL) system, with 60+ cores per node and 4 hardware threads per core. The recommended programming model is hybrid MPI/OpenMP, which also promotes portability across different system architectures.”
The High Performance Conjugate Gradients (HPCG) benchmark continues to gain traction in the high-performance computing community. “HPCG is designed to complement the traditional High Performance Linpack (HPL) benchmark used as the official metric for ranking the top 500 systems,” said Sandia National Laboratories researcher Mike Heroux, who developed the HPCG program in collaboration with Jack Dongarra and Piotr Luszczek from the University of Tennessee.