
DOE Helps Tackle Biology’s Big Data

Six proposals have been selected to participate in a new partnership between two U.S. Department of Energy (DOE) user facilities through the “Facilities Integrating Collaborations for User Science” (FICUS) initiative. The expertise and capabilities available at the DOE Joint Genome Institute (JGI) and the National Energy Research Scientific Computing Center (NERSC) – both at Lawrence Berkeley National Laboratory (Berkeley Lab) – will give researchers access to supercomputing resources and computational science experts, helping them explore the wealth of genomic and metagenomic data generated worldwide and accelerate discoveries.

Cryo-EM Moves Forward with $9.3M NIH Award

The National Institutes of Health (NIH) has awarded $9.3 million to the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) to support ongoing development of PHENIX, a software suite for solving three-dimensional macromolecular structures. “The impetus behind PHENIX is a desire to make the computational aspects of crystallography more automated, reducing human error and speeding solutions,” said PHENIX principal investigator Paul Adams, director of Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division.

Supercomputers Turn the Clock Back on Storms with “Hindcasting”

Researchers are using supercomputers at Berkeley Lab to determine how global climate change has affected the severity of storms and the resulting flooding. “The group used the publicly available model, which can be used to forecast future weather, to ‘hindcast’ the conditions that led to the Sept. 9-16, 2013 flooding around Boulder, Colorado.”

Surprising Stories from 17 National Labs in 17 Minutes

In this video, the U.S. Department of Energy gives a quick tour of all 17 National Labs. Each one comes with a surprising story on what these labs do for us as a Nation. “And they all do really different stuff. Think of a big scientific question or challenge, and one or more of the labs is probably working on it.”

Go with Intel® Data Analytics Acceleration Library and Go*

Use of the Go* programming language and its developer community has grown significantly since its official launch by Google in 2009. Like many popular programming languages (C and Java come to mind), Go started as an experiment to design a new programming language that would fix some of the common problems of other languages and yet stay true to the basic tenets of modern programming: be scalable, productive, readable, enable robust development environments, and support networking and multiprocessing.

ANSYS Scales to 200K Cores on Shaheen II Supercomputer

Today ANSYS, Saudi Aramco, and KAUST announced a new supercomputing milestone by scaling ANSYS Fluent to nearly 200,000 processor cores – enabling organizations to make critical and cost-effective decisions faster and increase the overall efficiency of oil and gas production facilities. This supercomputing record represents a more than 5x increase over the record set just three years ago, when Fluent first reached the 36,000-core scaling milestone. “Today’s regulatory requirements and market expectations mean that manufacturers must develop products that are cleaner, safer, more efficient and more reliable,” said Wim Slagter, director of HPC and cloud alliances at ANSYS. “To reach such targets, designers and engineers must understand product performance with higher accuracy than ever before – especially for separation technologies, where an improved separation performance can immediately increase the efficiency and profitability of an oil field. The supercomputing collaboration between ANSYS, Saudi Aramco and KSL enabled enhanced insight in complex gas, water and crude-oil flows inside a separation vessel, which include liquid free-surface, phase mixing and droplets settling phenomena.”

BSC Comparing Algorithms that Search for Cancer Mutations

Eduard Porta-Pardo from BSC has undertaken the first-ever comparative analysis of sub-gene algorithms that mine the genetic information in cancer databases. These powerful data-sifting tools are helping untangle the complexity of cancer and find previously unidentified mutations that are important in creating cancer cells. “Finding new cancer driver genes is an important goal of cancer genome analysis,” says Porta-Pardo. This study should help researchers understand the advantages and drawbacks of sub-gene algorithms used to find new potential drug targets for cancer treatment.

Supercomputing RNA Structure at Argonne

Over at ALCF, Joan Koka writes that researchers at the National Cancer Institute are using Argonne supercomputers to advance disease studies by enhancing our understanding of RNA, a biological polymer that is fundamentally involved in health and disease. “Getting the real functional structure, which is the 3-D structure, is very difficult to do experimentally, because the RNA polymer is too flexible,” one of the researchers said. “This is why we rely on computational simulation. Simulations can be used to explore hundreds or thousands of possible conformational states that would eventually lead us to the most likely 3-D structure.”

Video: Unlocking the Mysteries of the Universe with Supercomputers

Katrin Heitmann from the University of Chicago presented this talk at PASC17. “In this talk I will introduce HACC, the Hardware/Hybrid Accelerated Cosmology Code, which is being developed to combat the tremendous computational challenge to simulate our Universe.” After the talk, she discusses Dark Matter with Rich Brueckner from insideHPC.

Automotive Simulation Center Stuttgart teams with Rescale

Today Rescale announced that it has become a full member of the Automotive Simulation Center Stuttgart, or asc(s. The asc(s is a non-profit organization promoting high-performance simulation in virtual vehicle development. It consists of automotive OEMs and suppliers, software and hardware manufacturers, engineering service providers, and research institutes. “Rescale is delighted to be accepted as a member of asc(s,” said Wolfgang Dreyer, Rescale’s EMEA General Manager. “asc(s provides a forum for simulation innovation across the European automotive sector and Rescale is enabling scalable, turnkey on-demand high-performance computing, all pivotal for making automotive simulation cost-effective, fast, and efficient. We look forward to working with the members of the association to better understand industry requirements and trends and to push the boundaries of automotive simulation.”