SDSC Trestles Supercomputer to Move to University of Arkansas


SDSC’s recently decommissioned Trestles supercomputer is moving to the Arkansas High Performance Computing Center.

Interview: Intel’s Alan Gara Discusses the 180 Petaflop Aurora Supercomputer

Alan Gara, Intel

In this interview, Intel’s Alan Gara describes the Aurora system, a 180 Petaflop supercomputer coming to Argonne. “The Aurora system is based on our Omni-Path second generation. This is an Intel interconnect that we’ve been developing for some time now, and we’re really excited about the capabilities that we expect and scalability that we expect it to bring to high performance computing.”

Video: 2015 Argonne State of the Lab Address


“The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community. We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.”

Seagate Combines Cloud, HPC, and Electronic Solutions Groups


Today Seagate announced that it is combining its Cloud Storage, High Performance Computing, and Electronic Solutions groups to further align the company’s full breadth of enterprise storage hardware capabilities.

Video: KAUST Prepares for Shaheen II Supercomputer


In this video, Mootaz Elnozahy from KAUST discusses the arrival of the Shaheen II supercomputer. Some 25 times more powerful than its predecessor, Shaheen II is a Cray XC40 system with DataWarp burst buffer technology, a Cray Sonexion 2000 storage system, and a Cray Tiered Adaptive Storage (TAS) system.

Petascale Comet Supercomputer Enters Early Operations


“Comet is really all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said SDSC Director Michael Norman, the project’s principal investigator. “Comet has been specifically configured to meet the needs of researchers in domains that have not traditionally relied on supercomputers to solve their problems.”

Numerical Optimization for Deep Learning


“With the advent of massively parallel computing coprocessors, numerical optimization for deep-learning disciplines is now possible. Complex real-time pattern recognition, for example, that can be used for self-driving cars and augmented reality can be developed and high performance achieved with the use of specialized, highly tuned libraries. By just using the Message Passing Interface (MPI) API, very high performance can be attained on hundreds to thousands of Intel Xeon Phi processors.”
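
As a rough illustration of the MPI approach described above, here is a minimal sketch of data-parallel gradient averaging with MPI_Allreduce, the pattern commonly used to distribute numerical optimization (such as SGD) across many nodes or coprocessors. It is not code from the article; the model size, gradient values, and learning rate are hypothetical placeholders.

/* Sketch: data-parallel gradient averaging with MPI_Allreduce.
 * Each rank computes a gradient on its own data shard; the gradients are
 * summed across ranks, averaged, and applied as an SGD update.
 * NPARAMS, the fake gradient, and the learning rate are placeholders. */
#include <mpi.h>
#include <stdio.h>

#define NPARAMS 4            /* hypothetical model size */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float weights[NPARAMS] = {0};
    float local_grad[NPARAMS], global_grad[NPARAMS];
    const float lr = 0.01f;  /* hypothetical learning rate */

    /* Stand-in for the per-rank gradient computation on a local data shard. */
    for (int i = 0; i < NPARAMS; i++)
        local_grad[i] = (float)(rank + 1) * 0.1f;

    /* Sum the gradients from all ranks, then average and apply the update. */
    MPI_Allreduce(local_grad, global_grad, NPARAMS, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);
    for (int i = 0; i < NPARAMS; i++)
        weights[i] -= lr * (global_grad[i] / (float)size);

    if (rank == 0)
        printf("Updated weight[0] = %f across %d ranks\n", weights[0], size);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, the same binary scales from a handful of ranks to the hundreds or thousands of processors the article mentions; the communication cost is a single collective operation per update step.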

Video: HPC Transforms Parkinson’s Disease Treatment

Christopher R. Johnson

“By using high performance visualization systems, researchers at the Scientific Computing and Imaging (SCI) Institute are improving deep brain stimulation, a treatment for several disabling neurological symptoms, most commonly the debilitating motor symptoms of Parkinson’s disease such as tremor, rigidity, stiffness, slowed movement, and walking problems. The approach reduces patient treatment time from four to five hours to less than 10 minutes. The result for the patient is restored movement and a more normal life.”

Benefits of RackCDU D2C for High Performance Computing

D2C Liquid Cooling

From bio-engineering and climate studies to big data and high-frequency trading, HPC is playing an ever greater role in today’s society. Without the power of HPC, the complex analysis and data-driven decisions these fields depend on would be impossible. But because these supercomputers and HPC clusters are so powerful, they are expensive to cool, use massive amounts of energy, and can require a great deal of space.

Multi-GPU Cluster to Power Deep Learning Research at NYU


Over at the Nvidia Blog, Kimberly Powell writes that New York University has just installed a new computing system for next-generation deep learning research. Called “ScaLeNet,” the eight-node Cirrascale cluster is powered by 64 Nvidia Tesla K80 dual-GPU accelerators.