Podcast: Michael Papka and Susan Coghlan on the 180 Petaflop Aurora Supercomputer

In this Tech Shift podcast, Michael Papka and Susan Coghlan from Argonne National Laboratory discuss the 180 Petaflop Aurora supercomputer scheduled for deployment in 2018.

HPC Helps Solve Challenges of Personalized Medicine

A number of challenges can impede personalized medicine workflows, both in the wider adoption of the underlying technologies and in the implementation of such systems. Learn more about how companies like Dell and Intel are delivering complete, integrated genomic processing infrastructure.

New OSC Supercomputer Named After Civil Rights Activist Ruby Dee

Today the Ohio Supercomputer Center unveiled its new 144 Teraflop Ruby supercomputer. Powered by Intel Xeon Phi coprocessors, the HP system is named after Cleveland-born civil rights activist Ruby Dee.

A Closer Look at Intel’s Coral Supercomputers Coming to Argonne

This morning Intel and the U.S. Department of Energy announced a $200 million supercomputing investment coming to Argonne National Laboratory. As the third of the three Coral supercomputer procurements, the deal will comprise an 8.5 Petaflop “Theta” system based on Knights Landing in 2016 and a much larger 180 Petaflop “Aurora” supercomputer in 2018. Intel will be the prime contractor on the deal, with subcontractor Cray building the actual supercomputers.

Intel to Deliver Nation’s Most Powerful Supercomputer at Argonne

Congressman Dan Lipinski

Today Intel announced that the company will deliver two next-generation supercomputers to Argonne National Laboratory. “The contract is part of the DOE’s multimillion dollar initiative to build state-of-the-art supercomputers at Argonne, Lawrence Livermore and Oak Ridge National Laboratories that will be five to seven times more powerful than today’s top supercomputers.”

Optimizing Chilled Water Systems at ORNL

Chillers at the OLCF help keep supercomputers running efficiently by cooling water to 42 degrees Fahrenheit. Recent findings indicate the OLCF can raise the temperature of the water running through Titan to 48 degrees.

Staff members at Oak Ridge National Laboratory are evaluating how supercomputers can be cooled more efficiently.

Celebrating Two Years of Blue Waters Supercomputing at NCSA

This week NCSA celebrated two years of Blue Waters supercomputing in an event convened by U.S. Senator Mark Kirk. The powerful Cray supercomputer is used by scientists and engineers across the country to tackle challenging research for the benefit of science and society.

Video: Applications Performance Optimizations – Best Practices

Pak Lui

“Achieving good scalability on HPC scientific applications typically involves a solid understanding of the workload, gained through profile analysis and by comparing behavior on different hardware to pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be shown to demonstrate various methods of profiling and analysis to determine the bottleneck, and the effectiveness of the tuning to improve application performance.”
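
As a rough illustration of the phase-level timing such profiling starts from, the sketch below times two candidate hot loops separately so their costs can be compared. It is a minimal sketch assuming a POSIX system; the array size, loop bodies, and the now_sec helper are illustrative, not taken from the session.

/* Minimal phase-timing sketch: measure two hot loops separately and
 * compare, as a first step before reaching for a full profiler. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)            /* 16M doubles per array, illustrative */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    double t0 = now_sec();
    for (long i = 0; i < N; i++)      /* phase 1: streaming update */
        a[i] += b[i];
    double t1 = now_sec();

    double s = 0.0;
    for (long i = 0; i < N; i++)      /* phase 2: reduction over a[] */
        s += a[i] * a[i];
    double t2 = now_sec();

    printf("phase 1 (update):    %.3f s\n", t1 - t0);
    printf("phase 2 (reduction): %.3f s  (checksum %g)\n", t2 - t1, s);
    free(a); free(b);
    return 0;
}

Compiling with cc -O2 and comparing the two printed times shows which phase dominates; that is where deeper, tool-based profiling and tuning effort is best spent.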

Open Compute Solutions

The Open Compute Project partners with leading CPU vendors, including Intel, AMD, and ARM-based suppliers, to create reference designs that may be used by board and system vendors. These designs are bare-bones systems, with expansion options designed in for other types of I/O and storage. The reference design from Intel is 6.5 inches wide and 20 inches deep. These dimensions allow three servers to be placed side by side in a newly designed Open Compute rack, increasing density.
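
As a quick sanity check on that side-by-side claim, the short sketch below does the width arithmetic. It is a minimal sketch assuming a 21-inch usable equipment bay for the Open Compute rack; that width is an assumption for illustration, not a figure from the article.

/* Back-of-the-envelope density check: how many 6.5-inch sleds fit
 * side by side in an assumed 21-inch Open Compute equipment bay. */
#include <stdio.h>

int main(void) {
    const double sled_width_in = 6.5;   /* Intel reference design width      */
    const double bay_width_in  = 21.0;  /* assumed usable rack width, inches */

    int sleds = (int)(bay_width_in / sled_width_in);
    printf("%d sleds of %.1f in fit in a %.1f in bay (%.1f in spare)\n",
           sleds, sled_width_in, bay_width_in,
           bay_width_in - sleds * sled_width_in);
    return 0;
}

Under that assumption the program prints 3 sleds with 1.5 inches to spare, which matches the three-servers-per-row density described above.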

Video: Datacenter Computers – Modern Challenges in CPU Design

Dick Sites, Senior Staff Engineer at Google

“Computers used as datacenter servers have usage patterns that differ substantially from those of desktop or laptop computers. We discuss four key differences in usage and their first-order implications for designing computers that are particularly well-suited as servers: data movement, thousands of transactions per second, program isolation, and measurement underpinnings. Maintaining high-bandwidth data movement requires coordinated design decisions throughout the memory system, instruction-issue system, and even instruction set.”
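
To make the data-movement and measurement points concrete, below is a minimal streaming-copy bandwidth sketch, assuming a POSIX system; the buffer size, repeat count, and use of memcpy are illustrative choices, not drawn from the talk.

/* Minimal streaming-copy bandwidth measurement: repeatedly memcpy a
 * large buffer and report GiB/s moved (read + write per copy). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N    (1L << 25)   /* 32M doubles ~ 256 MiB per buffer, illustrative */
#define REPS 10

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    double *src = malloc(N * sizeof *src);
    double *dst = malloc(N * sizeof *dst);
    if (!src || !dst) return 1;
    memset(src, 1, N * sizeof *src);   /* touch both buffers so the timed   */
    memset(dst, 0, N * sizeof *dst);   /* copies exclude first-touch faults */

    double t0 = now_sec();
    for (int r = 0; r < REPS; r++)
        memcpy(dst, src, N * sizeof *src);
    double t1 = now_sec();

    double gib = (2.0 * N * sizeof(double) * REPS) / (1024.0 * 1024.0 * 1024.0);
    printf("copy bandwidth: %.2f GiB/s\n", gib / (t1 - t0));
    free(src); free(dst);
    return 0;
}

The measured figure will vary with the memory system it runs on, which is exactly the coordinated-design point the talk makes about data movement.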