
Supercomputing Black Hole Mergers with Blue Waters

Manuela Campanelli, Director of CCRG

“The mathematics involved in simulating these events is very sophisticated because one has to solve the equations of Einstein’s general relativity and magneto-hydrodynamics all together. The problem also requires very advanced supercomputers running programs on tens of thousands of CPUs simultaneously, and the use of sophisticated techniques for data extraction and visualization. Petascale numerical simulation is therefore the only tool available to accurately model these systems.”

Video: GPUs Power Simulation of the SpaceX Mars Rocket Engine

“SpaceX is designing a new, methane-fueled engine powerful enough to lift the equipment and personnel needed to colonize Mars. A vital aspect of this effort involves the creation of a multi-physics code to accurately model a running rocket engine. The scale and complexity of turbulent non-premixed combustion has so far made it impractical to simulate, even on today’s largest supercomputers. We present a novel approach using wavelets on GPUs, capable of capturing physics down to the finest turbulent scales.”
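For readers unfamiliar with the idea the talk builds on: a wavelet transform splits a field into a coarse part plus fine-scale detail coefficients, and adaptive solvers spend effort only where the detail is large. The sketch below uses the Haar wavelet, the simplest possible choice, purely as an illustration; the talk does not say which wavelet family the SpaceX code actually uses.

```python
import numpy as np

def haar_step(signal):
    """One level of a Haar wavelet transform: pairwise sums capture the
    coarse field, pairwise differences capture fine-scale detail.
    Illustrative only -- not the solver described in the talk."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    coarse = (s[:, 0] + s[:, 1]) / np.sqrt(2.0)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2.0)
    return coarse, detail

# A smooth region yields small detail coefficients; a sharp feature
# yields large ones, flagging where an adaptive method should refine.
coarse, detail = haar_step([1.0, 2.0, 3.0, 4.0])
```

An adaptive wavelet method would recurse on `coarse` and keep only detail coefficients above a threshold, concentrating resolution on the finest turbulent structures.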

Video: 2015 Argonne State of the Lab Address

“The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community. We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.”

Petascale Comet Supercomputer Enters Early Operations

“Comet is really all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said SDSC Director Michael Norman, the project’s principal investigator. “Comet has been specifically configured to meet the needs of researchers in domains that have not traditionally relied on supercomputers to solve their problems.”

Video: HPC Transforms Parkinson’s Disease

Christopher R. Johnson

“By using high performance visualization systems, researchers at the Scientific Computing and Imaging (SCI) Institute are using deep brain stimulation to treat several disabling neurological symptoms—most commonly the debilitating motor symptoms of Parkinson’s disease, such as tremor, rigidity, stiffness, slowed movement, and walking problems. The procedure reduces patient treatment time from four to five hours to less than 10 minutes. The result for the patient is restored movement and a more normal life.”

Video: Growth of Lustre Adoption and Intel’s Continued Commitment

Brent Gorda, Intel

“We are now working with over 100 channel partners globally. You can get access to Intel Lustre from almost everyone who sells storage or compute worldwide. We’re expanding this to include software partners, cloud partners. We want to create the best product possible out of this open source technology, and make it available economically to the channel partner, and enable you to go after these hugely expanding markets of cloud and big data, while not giving up on HPC.”

Interview: MEGWARE Gears up for ISC High Performance

Megware team. From left to right: A. Singer (Project Manager), S. Eckerscham (Managing Director), J. Gretzschel (Managing Director), J. Heydemüller (Representative).

MEGWARE in Germany celebrated its 25th anniversary in February. With the company in the midst of a big cluster deployment at CERN, we caught up with Jörg Heydemüller from MEGWARE to learn what they have in store for the upcoming ISC High Performance conference in July.

Video: Intelligent Cache Hinting in Lustre

“Data caching can provide increased performance when using a mix of high and low performance storage, but traditional replacement algorithms like LRU may evict important data in multi-tenant environments, or in situations where the cache is “cold”. By tagging and prioritizing data within the storage system, we can create a more intelligent mechanism that avoids many of the problems inherent to traditional caching. Methods for prioritizing data and passing this information through the filesystem will be discussed, as well as a performance analysis of small file IO in Lustre with cache hinting, and possible future enhancements.”
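The core idea of cache hinting, tagging data with a priority so the cache evicts low-value entries before hinted ones, can be sketched in a few lines. This is a toy illustration of priority-aware eviction with LRU ordering within a priority level; the `PriorityCache` class and its interface are invented for the example and are not Lustre's actual implementation.

```python
import heapq
import itertools

class PriorityCache:
    """Toy cache that evicts the lowest-priority entry first, breaking
    ties by least-recent use. Stale heap entries are skipped lazily."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}              # key -> (priority, tick, value)
        self.heap = []               # (priority, tick, key)
        self.counter = itertools.count()

    def put(self, key, value, priority=0):
        # Higher `priority` means "keep longer"; hinted data gets a
        # high value so a burst of cold, low-priority reads can't
        # flush it out (the multi-tenant problem described above).
        if key not in self.store and len(self.store) >= self.capacity:
            self._evict()
        tick = next(self.counter)
        self.store[key] = (priority, tick, value)
        heapq.heappush(self.heap, (priority, tick, key))

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None              # cache miss
        priority, _, value = entry
        tick = next(self.counter)    # refresh recency within the level
        self.store[key] = (priority, tick, value)
        heapq.heappush(self.heap, (priority, tick, key))
        return value

    def _evict(self):
        # Pop heap entries until one matches a live (priority, tick)
        # record; anything else is a stale leftover from a refresh.
        while self.heap:
            priority, tick, key = heapq.heappop(self.heap)
            live = self.store.get(key)
            if live is not None and live[0] == priority and live[1] == tick:
                del self.store[key]
                return

# Hinted ("hot") data survives even though it was inserted first.
c = PriorityCache(2)
c.put("hot", 1, priority=10)
c.put("cold", 2, priority=0)
c.put("new", 3, priority=0)   # evicts "cold", not the hinted entry
```

Plain LRU would have evicted `"hot"` here as the oldest entry; the priority tag is exactly the kind of information the talk proposes passing through the filesystem.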

Video: Understanding Hadoop Performance on Lustre

“In this talk, Seagate presents details on its efforts and achievements around improving Hadoop performance on Lustre, including a summary of why and how HDFS and Lustre differ and how those differences affect Hadoop performance on Lustre compared to HDFS; Hadoop ecosystem benchmarks and best practices on HDFS and Lustre; Seagate’s open-source efforts to enhance Lustre performance within ‘diskless’ compute nodes involving core Hadoop source code modification (and the unexpected results); and general takeaways on running Hadoop on Lustre more rapidly.”