Archives for July 2017

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship

Los Alamos National Laboratory has boosted the computational capacity of its Trinity supercomputer with a merger of two system partitions. “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos ASC program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

A Scalable Object Store for Meteorological and Climate Data

Simon D. Smart gave this talk at the PASC17 conference. “Numerical Weather Prediction (NWP) and Climate simulations sit at the intersection between classically understood HPC and the Big Data communities. Driven by ever more ambitious scientific goals, both the size and number of output data elements generated as part of NWP operations have grown by several orders of magnitude, and will continue to grow into the future. This poses significant scalability challenges for the data processing pipeline, and the movement of data through and between stages is one of the most significant factors in this.”

Survey: Training and Support #1 Concern for the HPC Community

Initial results of the Scientific Computing World (SCW) HPC readership survey show that training and support for HPC resources are the number one concern both for those who operate and manage HPC facilities and for the researchers who use them. “Several themes have emerged as a priority to both HPC managers and users/researchers. Respondents cite that training and support are essential parameters compared to performance, hardware or the availability of HPC resources.”

Video: ddR – Distributed Data Structures in R

“A few weeks ago, we revealed ddR (Distributed Data-structures in R), an exciting new project started by R-Core, Hewlett Packard Enterprise, and others that provides a fresh new set of computational primitives for distributed and parallel computing in R. The package sets the seed for what may become a standardized and easy way to write parallel algorithms in R, regardless of the computational engine of choice.”

Altair Showcases PBScloud.io at ISC 2017

In this video, Jérémie Bourdoncle from Altair describes the company’s new PBScloud.io offering for private clouds. “Altair is excited to announce the availability of PBScloud.io, its latest appliance solution to enable and expand cloud computing for organizations. PBScloud.io allows users to model, build and run High Performance Computing (HPC) appliances on both public and private clouds, as well as bare-metal infrastructures.”

Job of the Week: Software Engineer at NCAR

NCAR in Boulder is seeking a Software Engineer in our Job of the Week. “This position focuses primarily on the development of tools to meet the needs of the NCAR/IT community, and the design, writing, implementation, and support of systems monitoring tools necessary for the management of the computer infrastructure. Support will also be provided to the research community for the development of web-based analysis tools and general web programming.”

OpenACC Brings Directives to Accelerated Computing at ISC 2017

In this video from ISC 2017, Sunita Chandrasekaran and Michael Wolfe describe how OpenACC makes GPU-accelerated computing more accessible to scientists and engineers. “OpenACC is a user-driven, directive-based, performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model.”

Agenda Posted: August MVAPICH User Group Meeting in Ohio

The MVAPICH User Group Meeting (MUG) has posted its meeting agenda. The event takes place August 14-16, 2017 in Columbus, Ohio. “As the annual gathering of MVAPICH2 users, researchers, developers, and system administrators, the MUG event includes Keynote Talks, Invited Tutorials, Invited Talks, Contributed Presentations, Open MIC session, and hands-on sessions.”

Red Hat Ceph Storage Powers Research at the University of Alabama at Birmingham

On June 6, Red Hat announced that the University of Alabama at Birmingham (UAB) is using Red Hat Ceph Storage to support the growing needs of its research community. UAB selected Red Hat Ceph Storage because it offers researchers a flexible platform that can accommodate the vast amounts of data necessary to support future innovation and discovery. “UAB is a leader in computational research, with more than $500 million in annual research expenditures in areas including engineering, statistical genetics, genomics and next-generation gene sequencing,” said Curtis A. Carver Jr., VP and CIO at UAB. “Researchers and students aggregate, analyze, and store massive amounts of data, which is used to support groundbreaking medical discoveries from new cancer biomarkers to state-of-the-art magnetic resonance imaging techniques.”

Video: Towards Quantum High Performance Computing

“Following an introduction to the exceptional computational power of quantum computers using analogies with classical high performance computing systems, I will discuss real-world application problems that can be tackled on medium-scale quantum computers but not on post-exascale classical computers. I will motivate hardware/software co-design of quantum accelerators for classical supercomputers and the need to educate a new generation of quantum software engineers with knowledge of both quantum computing and high performance computing.”