Rice Oil & Gas Conference in March to Look for Ways to Meet HPC Demand

Rice University’s Ken Kennedy Institute will host the 12th annual Rice Oil and Gas High Performance Computing Conference (OGHPC) in Houston, Texas on March 4-6, 2019. “With the end of Moore’s law, challenges are mounting around a rapidly changing technology landscape,” said Jan E. Odegard, Executive Director of the Ken Kennedy Institute. “The end of one era is an opportunity for advancements and the beginning of a new era – a renaissance for system architectures that highlights the need for investments in workforce, algorithms, software, and hardware to support system scalability.”

Supercomputing Cleaner Power Plants

Researchers are looking to HPC to help engineer cost-effective carbon capture and storage technologies for tomorrow’s power plants. “By combining new algorithmic approaches and a new software infrastructure, MFiX-Exa will leverage future exascale machines to optimize CLRs. Exascale will provide 50 times more computational science and data analytic application power than is possible with DOE high-performance computing systems such as Titan at the Oak Ridge Leadership Computing Facility (OLCF) and Sequoia at Lawrence Livermore National Laboratory.”
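For rough context on that figure: “exascale” denotes on the order of 10^18 floating-point operations per second, and the peak ratings of Titan (roughly 27 petaflops) and Sequoia (roughly 20 petaflops) are public specifications rather than numbers from the article. A minimal back-of-envelope sketch under those assumptions:

```python
# Back-of-envelope check on the "50 times" claim.
# Peak figures below are public specs for each machine, not from the article.
EXAFLOP = 1e18          # 1 exaflop/s
titan_peak = 27e15      # Titan (OLCF), ~27 petaflops peak
sequoia_peak = 20e15    # Sequoia (LLNL), ~20 petaflops peak

for name, peak in [("Titan", titan_peak), ("Sequoia", sequoia_peak)]:
    print(f"Exascale vs {name}: ~{EXAFLOP / peak:.0f}x peak throughput")
```

The ratio against Sequoia lands at almost exactly 50x, which is likely where the quoted figure comes from.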

Supercomputing Sea Fog Development to Prevent Maritime Disasters

Over at the XSEDE blog, Kim Bruch from SDSC writes that an international team of researchers is using supercomputers to shed new light on how and why a particular type of sea fog forms. Through simulation, they hope to provide more accurate fog predictions that will help reduce the number of maritime mishaps. “The researchers have been using the Comet supercomputer based at the San Diego Supercomputer Center (SDSC) at UC San Diego. To date, the team has used about 2 million core hours.”
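To put 2 million core hours in perspective: Comet’s standard compute nodes have 24 cores each (that core count is SDSC’s published spec, not a figure from the article). A quick sketch of what the reported usage works out to:

```python
# Rough scale of the fog team's reported compute usage on Comet.
core_hours = 2_000_000
cores_per_node = 24  # Comet standard compute node (SDSC published spec)

node_hours = core_hours / cores_per_node
print(f"~{node_hours:,.0f} node-hours")

# Hypothetical scenario for intuition: how long 100 nodes would run nonstop.
nodes = 100
print(f"~{node_hours / nodes / 24:.0f} days of continuous use on {nodes} nodes")
```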

Industry Leaders Prepare for Rice University Oil and Gas Conference in March

The upcoming Rice University Oil and Gas HPC Conference will focus on the computational challenges and needs in the Energy industry. The event takes place March 4-6, 2019 in Houston. “High-end computing and information technology continues to stand out across the industry as a critical business enabler and differentiator with a relatively well understood return on investment. However, challenges remain, such as a constantly changing technology landscape, an increasing focus on software and software innovation, and escalating concerns around workforce development. The agenda for the conference includes invited keynote and plenary speakers, parallel sessions made up of at least four presentations each, and a student poster session.”

Researchers Gear Up for Exascale at ECP Meeting in Houston

Scientists and engineers at Berkeley Lab are busy preparing for exascale supercomputing this week at the ECP Annual Meeting in Houston. With a full agenda running five days, LBL researchers will contribute two plenaries, five tutorials, 15 breakouts, and 20 posters. “Sponsored by the Exascale Computing Project, the ECP Annual Meeting centers around the many technical accomplishments of our talented research teams, while providing a collaborative working forum that includes featured speakers, workshops, tutorials, and numerous planning and co-design meetings in support of integrated project understanding, team building and continued progress.”

IBM Weather System to Improve Forecasting Around the World

Earlier this week, IBM unveiled a powerful new global weather forecasting system that it says will provide the most accurate local weather forecasts ever seen worldwide. The new IBM Global High-Resolution Atmospheric Forecasting System (GRAF) will be the first hourly-updating commercial weather system able to predict something as small as thunderstorms globally. Compared to existing models, it promises a nearly 200% improvement in forecasting resolution for much of the globe (from 12 km to 3 km grid spacing). It will be available later this year.
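The resolution jump is easier to appreciate as grid arithmetic: cutting horizontal spacing from 12 km to 3 km quadruples the number of grid points along each axis, so every patch of the globe is covered by roughly 16 times as many cells. A minimal sketch using the spacing figures above (the cell-count math is standard grid reasoning, not something from the announcement):

```python
# What a 12 km -> 3 km grid refinement implies for model size.
old_spacing_km = 12.0
new_spacing_km = 3.0

linear_factor = old_spacing_km / new_spacing_km  # points per horizontal axis
cell_factor = linear_factor ** 2                 # cells per unit area
print(f"{linear_factor:.0f}x finer along each axis, "
      f"~{cell_factor:.0f}x more cells covering the same area")
```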

Supercomputing Dark Energy Survey Data through 2021

Scientists’ effort to map a portion of the sky in unprecedented detail is coming to an end, but their work to learn more about the expansion of the universe has just begun. “Using the Dark Energy Camera, a 520-megapixel digital camera mounted on the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory in Chile, scientists on DES took data for 758 nights over six years. The survey generated 50 terabytes (that’s 50 trillion bytes) of data over its six observation seasons. That data is stored and analyzed at NCSA. Compute power for the project comes from NCSA’s NSF-funded Blue Waters supercomputer, the University of Illinois Campus Cluster, and Fermilab.”
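For a sense of scale, those figures average out to roughly 66 gigabytes per observing night; a one-line check using only the numbers quoted above:

```python
# Average nightly data volume implied by the DES figures above.
total_tb = 50
nights = 758
print(f"~{total_tb * 1000 / nights:.0f} GB per observing night")
```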

Video: Flying through the Universe with Supercomputing Power

In this video from SC18, Mike Bernhardt from the Exascale Computing Project talked with Salman Habib of Argonne National Laboratory about cosmological computer modeling and simulation. Habib explained that the ExaSky project is focused on developing a caliber of simulation that will use the coming exascale systems at maximal power. “Clearly, there will be different types of exascale machines,” he said, “and so they [DOE] want a simulation code that can use not just one type of computer, but multiple types, and with equal efficiency.”

Machine Learning Award Powers Engine Design at Argonne

Over at Argonne, Jared Sagoff writes that automotive manufacturers are leveraging the power of DOE supercomputers to simulate the combustion engines of the future. “As part of a partnership between Argonne, Convergent Science, and Parallel Works, engine modelers are beginning to use machine learning algorithms and artificial intelligence to optimize their simulations. This alliance recently received a Technology Commercialization Fund award from the DOE to complete this important project.”

Video: Fusion Research on the Summit Supercomputer

In this video, C.S. Chang from the Princeton Plasma Physics Laboratory describes how his team is using the GPU-powered Summit supercomputer to simulate and predict plasma behavior for the next fusion reactor. “By using Summit, Chang’s team expects its highly scalable XGC code, a first-principles code that models the reactor and its magnetically confined plasma, to run simulations 10 times faster than current supercomputers allow. Such a speedup would give researchers an opportunity to model more complicated plasma edge phenomena, such as plasma turbulence and particle interactions with the reactor wall, at finer scales, leading to insights that could help ITER plan operations more effectively.”