Sure, your code seems fast, but how do you know whether you are leaving performance on the table? Recognized HPC experts Georg Hager and Gerhard Wellein will teach a tutorial on Node-Level Performance Engineering at SC16. The session will take place from 8:30 a.m. to 5:00 p.m. on Sunday, Nov. 13 in Salt Lake City.
In this video from the 2016 Argonne Training Program on Extreme-Scale Computing, Mark Miller from LLNL leads a panel discussion on Experiences in eXtreme Scale in HPC with FASTMath team members. “The FASTMath SciDAC Institute is developing and deploying scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborating with U.S. Department of Energy (DOE) domain scientists to ensure the usefulness and applicability of our work. The focus of our work is strongly driven by the requirements of DOE application scientists who work extensively with mesh-based, continuum-level models or particle-based techniques.”
Nikos Trikoupis from the City University of New York gave this talk at the HPC User Forum in Austin. “We focus on measuring the aggregate throughput delivered by 12 Intel SSD DC P3700 NVMe cards installed on the SGI UV 300 scale-up system in the CUNY High Performance Computing Center. We first establish a performance baseline for a single SSD. The 12 SSDs are then assembled into a single RAID-0 volume using Linux Software RAID and the XVM Volume Manager, and the aggregate read and write throughput is measured across configurations that include the XFS and GPFS file systems.”
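A rough way to reproduce a single-volume baseline like this is to time a large streaming write against the mounted array. The sketch below is only a minimal illustration, with a hypothetical mount point and made-up sizes; a dedicated tool such as fio is what you would reach for to get rigorous numbers like those in the talk.

```python
import os
import time

# Minimal throughput sketch: stream a large file onto a mounted volume
# (e.g., an mdadm RAID-0 of NVMe SSDs) and report MB/s. The mount point
# and sizes below are placeholder assumptions, not from the talk.
PATH = "/mnt/raid0/throughput_test.bin"  # hypothetical mount point
BLOCK = 4 * 1024 * 1024                  # 4 MiB per write
COUNT = 1024                             # 4 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(COUNT):
        f.write(buf)
    os.fsync(f.fileno())  # include the flush to the device in the timing
elapsed = time.time() - start

print(f"sequential write: {BLOCK * COUNT / elapsed / 1e6:.0f} MB/s")
os.remove(PATH)
```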
Today Nvidia announced the general availability of the CUDA 8 toolkit for GPU developers. “A crucial goal for CUDA 8 is to provide support for the powerful new Pascal architecture, the first incarnation of which was launched at GTC 2016: Tesla P100,” said Nvidia’s Mark Harris in a blog post. “One of NVIDIA’s goals is to support CUDA across the entire NVIDIA platform, so CUDA 8 supports all new Pascal GPUs, including Tesla P100, P40, and P4, as well as NVIDIA Titan X and Pascal-based GeForce, Quadro, and DrivePX GPUs.”
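As a quick sanity check of which generation a system’s GPUs belong to, you can query each device’s compute capability; Pascal parts report 6.x. This is an illustration only, not part of NVIDIA’s announcement, and it assumes PyCUDA is installed on top of the CUDA toolkit.

```python
# Query each GPU's compute capability; Pascal-generation parts (the
# architecture CUDA 8 adds support for) report major version 6.
# Assumes PyCUDA is installed on top of the CUDA toolkit.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    label = "Pascal" if major == 6 else f"sm_{major}{minor}"
    print(f"GPU {i}: {dev.name()}, compute capability {major}.{minor} ({label})")
```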
Today at GTC Europe, Nvidia unveiled Xavier, an all-new SoC based on the company’s next-generation Volta GPU architecture and destined to power future self-driving cars. According to Nvidia CEO Jen-Hsun Huang, the ARM-based Xavier will deliver unprecedented performance and energy efficiency while supporting the deep-learning features important to the automotive market. A single Xavier-based AI car supercomputer will be able to replace today’s fully configured DRIVE PX 2, which pairs two Parker SoCs with two Pascal GPUs.
In this video from LUG 2016 in Australia, Chakravarthy Nagarajan from Intel presents: An Optimized Entry Level Lustre Solution in a Small Form Factor. “Our goal was to provide an entry-level Lustre storage solution in a high-density form factor, at low cost and with a small footprint, all integrated with Intel Enterprise Edition for Lustre* software.”
Today D-Wave Systems announced details of its most advanced quantum computing system, featuring a new 2000-qubit processor. The announcement was made at the company’s inaugural users group conference in Santa Fe, New Mexico. The new processor doubles the number of qubits over the previous-generation D-Wave 2X system, enabling larger problems to be solved and, the company says, extending its significant lead over quantum computing competitors. The new system also introduces control features that let users tune the quantum computational process to solve problems faster and find more diverse solutions when they exist. In early tests, D-Wave reports, these features have yielded performance improvements of up to 1000x over the D-Wave 2X.
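For context, D-Wave’s processors are quantum annealers, which search for low-energy assignments of QUBO (quadratic unconstrained binary optimization) problems. The toy solver below is a purely classical, brute-force illustration of that problem class, not D-Wave’s API, and the matrix values are invented for the example.

```python
import itertools

# Toy QUBO instance: minimize sum over (i, j) of Q[i, j] * x[i] * x[j]
# for binary x. Quantum annealers like D-Wave's search for low-energy
# assignments of exactly this form; here we brute-force a 3-variable
# example (coefficients invented for illustration).
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear (diagonal) terms
    (0, 1): 2.0, (1, 2): 2.0,                  # couplers penalize adjacent 1s
}

def energy(x):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

best = min(itertools.product((0, 1), repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```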
“Starting in 2015, Oak Ridge National Laboratory partnered with the University of Tennessee to offer a minor-degree program in data center technology and management, one of the first offerings of its kind in the country. ORNL staff members developed the senior-level course in collaboration with UT College of Engineering professor Mark Dean after an ORNL strategic partner identified a need for employees who could bridge both the facilities and operational aspects of running a data center. In addition to developing the course curriculum, ORNL staff members are also serving as guest lecturers.”
Maria Chan of Argonne’s Nanoscience and Technology (NST) Division presented this talk at Argonne Out Loud. “People eagerly anticipate environmental benefits from advances in clean energy technologies, such as advanced batteries for electric cars and thin-film solar cells. Optimizing these technologies for peak performance requires an atomic-level understanding of the designer materials used to make them. But how is that achieved? Maria Chan will explain how computer modeling is used to investigate and even predict how materials behave and change, and how researchers use this information to help improve the materials’ performance. She will also discuss the open questions, challenges, and future strategies for using computation to advance energy materials.”
Larry Smarr presented this talk as part of NCSA’s 30th Anniversary Celebration. “For the last thirty years, NCSA has played a critical role in bringing computational science and scientific visualization to the national user community. I will embed those three decades in the 50-year period from 1975 to 2025, beginning with my solving Einstein’s equations for colliding black holes on the megaflops-class CDC 6600 and ending with the exascale supercomputer. These 50 years span a period in which we will have seen a one trillion-fold increase in supercomputer speed.”
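The trillion-fold figure follows directly from the units: a megaflops machine sustains on the order of 10^6 floating-point operations per second, while an exascale system targets 10^18.

\[
\frac{10^{18}\ \mathrm{FLOP/s}\ \text{(exascale)}}{10^{6}\ \mathrm{FLOP/s}\ \text{(megaflops)}} = 10^{12} \quad \text{(one trillion)}
\]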