Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). “The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community,” said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin’s Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.
High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and greater amounts of data must be processed and analyzed on a daily basis, firms are increasingly turning to HPC solutions to exploit the latest technology performance improvements. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.
In his keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to support a major acceleration in the trajectory of U.S. High Performance Computing (HPC). The ECP's goals are to enable the delivery of capable exascale computers in 2022, and one early exascale system in 2021, fostering a rich exascale ecosystem and working toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.
DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
In this podcast, the Radio Free HPC team discusses a recent presentation by John Gustafson on Next Generation Computer Arithmetic. “A new data type called a ‘posit’ is designed as a direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable-size operands, and they round when an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including a simpler hardware implementation that scales from as few as two-bit operands to thousands of bits.”
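To make the format concrete, the sketch below decodes a posit bit pattern following the sign/regime/exponent/fraction layout Gustafson describes. This is an illustrative reading of the posit decoding rules, not production arithmetic; the function name and the 8-bit, es=0 defaults are our own choices for the example.

```python
def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.

    Illustrative sketch only -- not a hardware-accurate implementation.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):            # 100...0 encodes NaR (Not a Real)
        return float("nan")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask           # two's complement before decoding
    # Regime: run length of identical bits following the sign bit
    body = format(bits & ((1 << (n - 1)) - 1), f"0{n - 1}b")
    first = body[0]
    run = len(body) - len(body.lstrip(first))
    regime = run - 1 if first == "1" else -run
    rest = body[run + 1:]               # skip the regime terminator bit
    # Exponent: up to es bits; truncated bits count as zero
    ebits = rest[:es]
    e = int(ebits, 2) << (es - len(ebits)) if ebits else 0
    # Fraction: remaining bits with a hidden leading 1
    fbits = rest[es:]
    frac = 1.0 + (int(fbits, 2) / (1 << len(fbits)) if fbits else 0.0)
    return sign * 2.0 ** (regime * (1 << es) + e) * frac
```

With 8 bits and es=0, `0b01000000` decodes to 1.0 and `0b01100000` to 2.0; the run-length-encoded regime is what gives posits tapered precision, with more fraction bits near 1.0 and fewer at the extremes.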
The PEARC17 Conference has issued its Call for Participation. Formerly known as the Extreme Science and Engineering Discovery Environment (XSEDE) annual conference, PEARC17 will take place July 9-13 in New Orleans. “The Technical Program for PEARC17 includes four Paper tracks, Tutorials, Posters, a Visualization Showcase and Birds of a Feather (BoF) sessions. All submissions should emphasize experiences and lessons derived from operation and use of advanced research computing on campuses or provided for the academic and open science communities. Submissions aligned with the conference theme—Sustainability, Success, and Impact—are particularly encouraged.”
“Linux Containers continue to gain momentum across all IT ecosystems. This talk provides an overview of what happened in the container landscape (in particular Docker) over the course of the last year and how it impacts datacenter operations, HPC and High-Performance Big Data. Furthermore, Christian will update and extend the ‘things to explore’ list he presented at the last Lugano workshop, applying what he learned and encountered during 2016.”
Today Intel announced the open-source BigDL, a Distributed Deep Learning Library for the Apache Spark open-source cluster-computing framework. “BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project,” said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel.
Leaders in hybrid accelerated HPC in the United States, Japan, and Switzerland have signed a memorandum of understanding establishing an international institute dedicated to common goals, the sharing of HPC expertise, and forward-thinking evaluation of computing architecture. “Forecasting the future of leadership-class computing and managing the risk of architectural change is a shared interest among ORNL, Tokyo Tech, and ETH Zurich,” said Jeff Nichols, associate laboratory director of computing and computational sciences at ORNL. “What unites our three organizations is a willingness to embrace change, actively partner with HPC vendors, and devise solutions that advance the work of our scientific users. ADAC provides a framework for member organizations to pursue mutual interests such as accelerated node architectures as computing moves toward the exascale era and beyond.”
“We are very excited to be working closely with Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of our OpenHPEC Accelerator Suite software development toolset,” said Lynn Bamford, Senior Vice President and General Manager, Defense Solutions division. “Together, we are providing HPEC system integrators with proven and robust development tools from the Commercial HPC market to speed and ease the design of COTS-based highly scalable supercomputer-class solutions.”