GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, analytics, engineering, consumer, and enterprise applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small and medium businesses around the world. GPUs are accelerating applications on platforms ranging from cars to mobile phones and tablets to drones and robots.
Over at ORNL, Katie Elyce Jones writes that the US Department of Energy (DOE) is mining for alternatives to rare earth magnetic materials, a notably scarce resource. For manufacturers of electric motors and other devices, procuring these materials raises environmental concerns from rare earth mining, along with high costs and an unpredictable supply chain.
Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design, taking 30 days to complete on an in-house cluster. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
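To put those figures in perspective, here is a back-of-the-envelope sketch of the scale involved. The simulation count, core count, and runtimes come from the article; the even per-core split is a simplifying assumption for illustration only.

```python
# Rough scale of the HGST parameter-sweep workload described above.
# Figures from the article; the uniform per-core division is an assumption.

n_simulations = 1_000_000   # design-point simulations in the sweep
n_cores = 70_000            # cores in the AWS cluster
in_house_days = 30          # runtime on the internal cluster
cloud_hours = 8             # runtime on the cloud cluster

sims_per_core = n_simulations / n_cores
speedup = (in_house_days * 24) / cloud_hours

print(f"~{sims_per_core:.0f} simulations per core")     # ~14 simulations per core
print(f"~{speedup:.0f}x faster than the in-house run")  # ~90x faster than the in-house run
```

In other words, at this scale each core handles only about 14 simulations, which is how a month-long in-house run compresses to roughly an 8-hour cloud run, about a 90x reduction in wall-clock time.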
The Gauss Centre for Supercomputing (GCS) announced on 13 November 2014 that its member centre, the High Performance Computing Center Stuttgart (HLRS), has successfully completed the installation of the HPC system “Hornet”. The new HLRS supercomputer, a Cray XC40 system that delivers a peak performance of 3.8 petaflops, has been declared fully operational and will be available […]