How GPU Hackathons Bring HPC to More Users

“GPUs potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. Enter GPU Hackathons. Our mentors come from national laboratories, universities and vendors, and besides having extensive experience in programming GPUs, many of them develop the GPU-capable compilers and help define standards such as OpenACC and OpenMP.”

ADIOS 1.11 Middleware Moves I/O Framework from Research to Production

The Department of Energy’s Oak Ridge National Laboratory has announced the latest release of its Adaptable I/O System (ADIOS), a middleware that speeds up scientific simulations on parallel computing resources such as the laboratory’s Titan supercomputer by making input/output operations more efficient. “As we approach the exascale, there are many challenges for ADIOS and I/O in general,” said Scott Klasky, scientific data group leader in ORNL’s Computer Science and Mathematics Division. “We must reduce the amount of data being processed and program for new architectures. We also must make our I/O frameworks interoperable with one another, and version 1.11 is the first step in that direction.”

ORNL’s Al Geist to Keynote OpenFabrics Workshop in Austin

In his keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to support a huge lift in the trajectory of U.S. High Performance Computing (HPC). The ECP's goals are intended to enable delivery of one early exascale system in 2021 and capable exascale computers in 2022, fostering a rich exascale ecosystem and working toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.

Oak Ridge Steps Up to Active Archive Solutions

Today the Active Archive Alliance announced that Oak Ridge National Laboratory (ORNL) has upgraded its active archive solutions to enhance the integrity and accessibility of its vast amount of data. The new solutions allow ORNL to meet its increasing data demands and enable fast file recall for its users. “These active archive upgrades were crucial to ensuring our users’ data is both accessible and fault-tolerant so they can continue performing high-priority research at our facilities,” said Jack Wells, director of science for the National Center for Computational Sciences at ORNL. “Our storage-intensive users have been very pleased with our new data storage capabilities.”

Podcast: Supercomputing Cancer Research and the Human Brain

In this WUOT podcast, Jack Wells from ORNL describes how the Titan supercomputer helps advance science. “The world’s third-most powerful supercomputer is located in Oak Ridge, and though it bears the imposing name TITAN, its goals and capabilities are more quotidian than dystopian. After that, WUOT’s Megan Jamerson tells us about a project at ORNL that uses TITAN to help humans digest vast sums of information from medical reports. If successful, the project could create new understandings about the demographics of cancer.”

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods, depending only on choice of architecture and implementation. This raises the question of what baseline the performance improvements of exascale systems will be measured against. Furthermore, how close will these exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”

Job of the Week: Computational Scientist at ORNL

Oak Ridge National Laboratory is seeking a Computational Scientist in our Job of the Week. The National Center for Computational Sciences in the Computing and Computational Sciences Directorate at the Oak Ridge National Laboratory (ORNL) seeks to hire Computational Scientists. We are looking in the areas of Computational Climate Science, Computational Astrophysics, Computational Materials Science, […]

Supercomputing Subatomic Particle Research on Titan

By using multiple grids and separating the problem’s modes onto the grids where they are handled most efficiently, the researchers can get through their long line of calculations more quickly and easily. “GPUs provide a lot of memory bandwidth,” Clark said. “Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker.” In other words, memory bandwidth is like a roadway: having more lanes keeps vehicles moving and lessens the potential for traffic backups.

Beauty Meets HPC: An Overview of the Barcelona Supercomputing Center

“The multidisciplinary research team and computational facilities, including MareNostrum, make BSC an international centre of excellence in e-Science. Since its establishment in 2005, BSC has developed an active role in fostering HPC in Spain and Europe as an essential tool for international competitiveness in science and engineering. The center manages the Red Española de Supercomputación (RES), and is a hosting member of the Partnership for Advanced Computing in Europe (PRACE) initiative.”

2017 GPU Hackathons Coming to U.S. and Europe

Today ORNL announced the full schedule of 2017 GPU Hackathons at multiple locations around the world. “The goal of each hackathon is for current or prospective user groups of large hybrid CPU-GPU systems to send teams of at least 3 developers along with either (1) a (potentially) scalable application that could benefit from GPU accelerators, or (2) an application running on accelerators that needs optimization. There will be intensive mentoring during this 5-day hands-on workshop, with the goal that the teams leave with applications running on GPUs, or at least with a clear roadmap of how to get there.”