
Video: GPU Acceleration – What’s Next?


GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small and medium-sized businesses around the world. GPUs are accelerating applications on platforms ranging from cars to mobile phones and tablets to drones and robots.
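The division of labor behind that model is simple: the CPU orchestrates while the GPU runs data-parallel kernels. A minimal CUDA C sketch of the offload pattern (the vadd kernel, sizes, and names are illustrative only, not drawn from any particular application; error checking is omitted for brevity):

    /* vadd.cu -- minimal CUDA offload pattern: copy in, run kernel, copy out */
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void vadd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one thread per element */
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a = malloc(bytes), *b = malloc(bytes), *c = malloc(bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        float *da, *db, *dc;  /* device copies */
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

        vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  /* launch on the GPU */
        cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", c[0]);  /* expect 3.0 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(a); free(b); free(c);
        return 0;
    }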

Video: Energy Secretary Moniz Announces 150 Petaflop Coral Supercomputers


In this video, U.S. Secretary of Energy Ernest Moniz announces two new High Performance Computing awards to put the nation on a fast-track to next generation exascale computing, which will help to advance U.S. leadership in scientific research and promote America’s economic and national security.

Looking at the Future of HPC in Australia


In this special guest feature from Scientific Computing World, Lindsay Botten and Neil Stringfellow explain how Australia has developed a national HPC strategy to address the country’s unique challenges in science, climate, and economic development.

Using the Titan Supercomputer to find Alternatives to Rare Earth Magnets

Simulations could uncover competitive substitutes for these super-strong magnets

Over at ORNL, Katie Elyce Jones writes that the US Department of Energy (DOE) is mining for alternatives to rare earth magnetic materials, a famously scarce resource. For manufacturers of electric motors and other devices, procuring these materials means contending with the environmental impact of rare earth mining, high costs, and an unpredictable supply chain.

Yet Another Mountain: CSCS Readies Piz Dora Cray XC Supercomputer

Piz Dora, the extension of the Cray XC system at CSCS

“This is an addition to our existing Cray XC platform, which we have called Piz Dora,” says CSCS media spokesperson Angela Detjen. “Piz Dora has a maximum capability of 1.258 petaflops; a petaflop is the equivalent of 1,000,000,000,000,000 (a quadrillion) calculations per second.”

Video: With AWS, HPC Now Means ‘High Personal Computing’


Since 2011, ONS (responsible for planning and operating the Brazilian Electric Sector) has been using AWS to run daily simulations using complex mathematical models. The use of the MIT StarCluster toolkit makes running HPC on AWS much less complex and lets ONS provision a high performance cluster in less than 5 minutes.
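To give a sense of how little ceremony is involved, StarCluster drives EC2 from one INI config file plus a couple of commands. A minimal sketch with placeholder values (the key name, AMI ID, and instance type below are illustrative, not ONS's actual setup):

    # ~/.starcluster/config -- illustrative values only
    [global]
    DEFAULT_TEMPLATE = smallcluster

    [aws info]
    AWS_ACCESS_KEY_ID = <your-access-key>
    AWS_SECRET_ACCESS_KEY = <your-secret-key>
    AWS_USER_ID = <your-aws-account-id>

    [key mykey]
    KEY_LOCATION = ~/.ssh/mykey.rsa

    [cluster smallcluster]
    KEYNAME = mykey
    CLUSTER_SIZE = 4
    NODE_IMAGE_ID = ami-xxxxxxxx        # a StarCluster-compatible AMI
    NODE_INSTANCE_TYPE = c3.xlarge

    $ starcluster start mycluster       # boots the nodes, sets up NFS and SGE
    $ starcluster terminate mycluster   # hands the hardware back when done

StarCluster configures a shared NFS filesystem and the Sun Grid Engine scheduler on the new nodes by default, which is what turns a pile of EC2 instances into a usable cluster in minutes.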

How the ‘C’ in HPC Can Now Stand for Cloud


Most IaaS (infrastructure as a service) vendors, such as Rackspace, Amazon, and Savvis, use various virtualization technologies to manage the underlying hardware on which they build their offerings. Unfortunately, the virtualization technologies in use vary from vendor to vendor and are sometimes kept secret. The question of virtual machines versus physical machines is therefore germane to any discussion of high performance computing (HPC) in the cloud.
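Given that opacity, a practical first question is whether a given Linux node is virtualized at all. A minimal sketch in C (an illustrative check, not a vendor tool; it relies on the synthetic "hypervisor" flag the Linux kernel adds to /proc/cpuinfo when the CPUID hypervisor-present bit is set):

    /* vmcheck.c -- report whether this Linux host advertises a hypervisor */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("fopen"); return 1; }
        char line[4096];
        int virt = 0;
        while (fgets(line, sizeof line, f)) {
            /* flags lines look like "flags\t\t: fpu vme ... hypervisor" */
            if (strncmp(line, "flags", 5) == 0 && strstr(line, " hypervisor")) {
                virt = 1;
                break;
            }
        }
        fclose(f);
        puts(virt ? "virtualized (hypervisor flag present)"
                  : "no hypervisor flag (likely bare metal)");
        return 0;
    }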

Free eBook: Optimizing HPC Applications with Intel Cluster Tools


“Optimizing HPC Applications with Intel Cluster Tools takes the reader on a tour of the fast-growing area of high performance computing and the optimization of hybrid programs. These programs typically combine distributed memory and shared memory programming models and use the Message Passing Interface (MPI) and OpenMP for multi-threading to achieve the ultimate goal of high performance at low power consumption on enterprise-class workstations and compute clusters. The book focuses on optimization for clusters consisting of the Intel Xeon processor, but the optimization methodologies also apply to the Intel Xeon Phi coprocessor and heterogeneous clusters mixing both architectures.”
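For readers unfamiliar with the hybrid model the book targets, it is easy to see in miniature: MPI ranks span nodes while OpenMP threads fill the cores within each rank. A minimal sketch (compile with something like mpicc -fopenmp; this is not an example from the book itself):

    /* hybrid.c -- MPI ranks across nodes, OpenMP threads inside each rank */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank, nranks;
        /* request thread support, since OpenMP threads live inside each rank */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            #pragma omp critical
            printf("rank %d/%d, thread %d/%d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }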

John Barr on the Power and the Processor


In this special guest feature from Scientific Computing World, John Barr surveys the technologies that will underpin the next generation of HPC processors and finds that software, not hardware, holds the key.

Slidecast: Cycle Computing Powers 70,000-core AWS Cluster for HGST


Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
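A workload like this is embarrassingly parallel: each design point is an independent simulation, which is why it spreads so cleanly across 70,000 cores. A hedged sketch of the static MPI work split (simulate() is a stand-in for HGST's proprietary simulator, and the scoring is invented for illustration):

    /* sweep.c -- statically partition N independent simulations across MPI ranks */
    #include <mpi.h>
    #include <stdio.h>

    #define NSIMS 1000000L  /* one simulation per design point */

    /* placeholder for the real (proprietary) drive-head simulator */
    static double simulate(long design_id) { return (double)design_id * 0.5; }

    int main(int argc, char **argv) {
        int rank, nranks;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* round-robin assignment: rank r runs designs r, r+nranks, r+2*nranks, ... */
        double local_best = -1.0;
        for (long id = rank; id < NSIMS; id += nranks) {
            double score = simulate(id);
            if (score > local_best) local_best = score;
        }

        /* one reduction at the end; no communication during the sweep itself */
        double best;
        MPI_Reduce(&local_best, &best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("best score: %f\n", best);

        MPI_Finalize();
        return 0;
    }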