
Time to Retire the Turing Test?


If you haven’t heard, there is a new film about Alan Turing, one of the first computer scientists to ponder the question: Can machines think? Over at Kill Screen, David Shimomura writes that it may be time to put the Turing Test to bed.

Video: Energy Secretary Moniz Announces 150 Petaflop CORAL Supercomputers


In this video, U.S. Secretary of Energy Ernest Moniz announces two new High Performance Computing awards to put the nation on a fast track to next-generation exascale computing, which will help to advance U.S. leadership in scientific research and promote America’s economic and national security.

Looking at the Future of HPC in Australia


In this special guest feature from Scientific Computing World, Lindsay Botten and Neil Stringfellow explain how Australia has developed a national HPC strategy to address the country’s unique challenges in science, climate, and economic development.

Using the Titan Supercomputer to find Alternatives to Rare Earth Magnets

Simulations could uncover competitive substitutes for these super-strong magnets

Over at ORNL, Katie Elyce Jones writes that the US Department of Energy (DOE) is mining for alternatives to rare earth magnetic materials, a notably scarce resource. For manufacturers of electric motors and other devices, these materials come with environmental concerns from mining, high costs, and an unpredictable supply chain.

Yet Another Mountain: CSCS Readies Piz Dora Cray XC Supercomputer

Piz Dora, the extension of the Cray XC system at CSCS

“This is an addition to our existing Cray XC platform, which we have called Piz Dora,” says CSCS media spokesperson Angela Detjen. Piz Dora has a maximum capability of 1.258 petaflops; a petaflop is the equivalent of 1,000,000,000,000,000 (a quadrillion) calculations per second.

Free eBook: Optimizing HPC Applications with Intel Cluster Tools


“Optimizing HPC Applications with Intel Cluster Tools takes the reader on a tour of the fast-growing area of high performance computing and the optimization of hybrid programs. These programs typically combine distributed memory and shared memory programming models and use the Message Passing Interface (MPI) and OpenMP for multi-threading to achieve the ultimate goal of high performance at low power consumption on enterprise-class workstations and compute clusters. The book focuses on optimization for clusters consisting of the Intel Xeon processor, but the optimization methodologies also apply to the Intel Xeon Phi coprocessor and heterogeneous clusters mixing both architectures.”
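The hybrid model the book targets pairs MPI message passing across nodes with OpenMP threading inside each node. As a rough illustration of that structure, here is a minimal sketch in C with a toy workload; it is not an example taken from the book, just the general shape such programs share.

/* Minimal hybrid MPI + OpenMP sketch: MPI distributes work across
 * ranks (nodes), OpenMP threads share memory within each rank.
 * Illustrative toy workload only; not taken from the book. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0;

    /* Each rank takes a round-robin slice of the iterations;
     * OpenMP threads split that slice and reduce into local_sum. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = rank; i < 1000000; i += nranks)
        local_sum += 1.0 / (1.0 + (double)i);

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (%d ranks, %d threads each)\n",
               global_sum, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

Built with an MPI wrapper compiler and an OpenMP flag (for example, mpicc -fopenmp), this is the kind of two-level program whose tuning the book walks through.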

John Barr on the Power and the Processor


In this special guest feature from Scientific Computing World, John Barr surveys the technologies that will underpin the next generation of HPC processors and finds that software, not hardware, holds the key.

Slidecast: Cycle Computing Powers 70,000-core AWS Cluster for HGST


Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”

PGI Steps up with Support for Jetson TK1 and Power8


At SC14, Nvidia announced that it is developing an enhanced version of the widely used PGI optimizing compilers which will allow developers to quickly develop new applications or run Linux x86-based GPU-accelerated applications on IBM POWER CPU systems with minimal effort.
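The PGI compilers’ GPU support centers on directive-based programming such as OpenACC, so the kind of application this announcement targets typically looks like the sketch below. This is an assumed, illustrative example rather than code from the announcement; the point is that the same annotated source can be recompiled for x86, POWER8, or ARM-based Jetson TK1 hosts.

/* Illustrative OpenACC kernel of the sort a directive-based compiler
 * like PGI's can offload to a GPU. Hypothetical example, not taken
 * from the Nvidia/PGI announcement. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The compiler generates GPU code for this loop and handles the
     * data movement implied by the copy clauses. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}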

Quanta Showcases Monster 6 Terabyte Memory Server for Big Data at SC14


In this video from SC14, Alan Chang from Quanta describes the company’s new QuantaGrid Q71L-4U four-socket system with 96 DIMM sockets for a capacity of 6 Terabytes of memory. The big-memory system is tailor-made for large analytics and HPC workloads.