
Storage for Achieving Performance at Scale

Geoffrey Noer, VP Product Management, Panasas

Storage and data management have arguably become the most important HPC “pain points,” with access density a particularly troubling issue. Many HPC sites are doubling their storage capacity every two to three years, but adding capacity does not address the access density, data movement, and related storage issues many HPC buyers face. When that happens, your investments in processing, networking, middleware, and applications are choked off by bottlenecks in your storage infrastructure. If you’re looking to maximize the throughput of your technical computing infrastructure, storage performance often holds the key.
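To make the access-density point concrete, here is a minimal sketch (not from the Panasas article; the per-drive IOPS figure and capacities are assumed round numbers) of what happens to IOPS per terabyte when drive capacity simply doubles:

```c
/* Hypothetical, round numbers: per-drive random IOPS and capacities
 * are assumptions chosen only to illustrate access density. */
#include <stdio.h>

int main(void) {
    const double iops_per_drive = 150.0;  /* one 7200 RPM spindle (assumed) */
    const double old_tb = 2.0;            /* previous-generation capacity (assumed) */
    const double new_tb = 4.0;            /* doubled capacity (assumed) */

    /* Access density = random IOPS available per terabyte stored. */
    printf("old drives: %.0f IOPS/TB\n", iops_per_drive / old_tb);
    printf("new drives: %.0f IOPS/TB\n", iops_per_drive / new_tb);
    /* Capacity doubled, but IOPS/TB halved: more data now competes for
     * the same spindle performance unless spindle count, or a faster
     * storage tier, grows along with capacity. */
    return 0;
}
```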

Inside the Sequoia Supercomputer at LLNL


IBM Sequoia is a petascale Blue Gene/Q supercomputer built by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing (ASC) Program. It was delivered to LLNL in 2011, was fully deployed in June 2012, and ranks #3 on the June 2014 TOP500 list.

Job of the Week: Storage Engineer at Cray

Cray in St. Paul, MN is seeking a Storage Engineer in our Job of the Week.

Intel Rolls Out First 8-Core Desktop Processor


Today Intel unveiled its first eight-core desktop processor, the Intel Core i7-5960X processor Extreme Edition, formerly code-named “Haswell-E,” targeted at power users who demand the most from their PCs.

Accelerating CFD with PyFr on GPUs

Flow over a spoiler deployed at 90 degrees to the oncoming flow, computed on a mesh with 1.3 billion degrees of freedom using 184 x Nvidia M2090 GPUs (Emerald HPC facility at the Centre for Innovation UK).

Over at TechEnablement, Dr. Peter Vincent writes that PyFR is an open-source, 5,000-line Python-based framework for solving fluid-flow problems that can exploit many-core computing hardware such as GPUs.

This Week in HPC: Cray Creates GPU Heavy Server Node and New Exascale Recommendations for the DOE


In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Cray CS-Storm supercomputer based on Nvidia GPUs. After that, the discussion turns to exascale investment recommendations coming out of a new report from a Department of Energy Task Force.

Video: Preparing Your Application for Advanced Manycore Architectures

Katie Antypas, Services Department Head, National Energy Research Scientific Computing Center, Lawrence Berkeley National Laboratory

“Confronting power limitations and the high cost of data movement, new supercomputing architectures within the DOE are requiring users make changes to application codes to achieve high performance. More specifically, users will need to exploit greater on-node parallelism and longer vector units, and restructure code to take advantage of memory locality. In this presentation you will learn about coming architectural trends and what you can do now to start preparing your application.”
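As a rough illustration of the kinds of changes Antypas describes, here is a minimal C sketch (the kernel and array sizes are assumptions, not taken from the presentation) that exposes on-node parallelism and the vector units with OpenMP while keeping memory access unit-stride for locality:

```c
/* Illustrative kernel only; sizes and coefficients are assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* "parallel for" spreads the loop across the node's cores, "simd"
     * asks the compiler to use the vector units, and the unit-stride
     * access pattern keeps the memory traffic cache-friendly. */
    #pragma omp parallel for simd
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %.1f\n", a[0]);   /* expect 7.0 */
    free(a); free(b); free(c);
    return 0;
}
```

Built with something like cc -O3 -fopenmp, the same loop that once ran serially can fill all of a node’s cores and vector lanes.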

Seagate Shipping 8TB Hard Drives for the Cloud


This week Seagate announced that it is shipping the world’s first 8TB hard disk drives. Positioned as a bulk storage drive for the cloud market, the new device is optimized for cost per gigabyte rather than performance.

DOE Task Force Releases Recommendations for Exascale Investment


A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Secretary of Energy Advisory Board’s Task Force on High Performance Computing. Commissioned by Secretary of Energy Ernest J. Moniz, the report includes recommendations on where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.

Bill Gropp on Engineering for Performance in HPC


Achieving good performance on any system requires balancing many competing factors. More than just minimizing communication (or floating-point work or memory motion), for high-end systems the goal is to achieve the lowest-cost solution. And while cost is typically considered in terms of time to solution, other metrics, including total energy consumed, are likely to be important in the future. Making effective use of the next generations of extreme-scale systems requires rethinking the algorithms, the programming models, and the development process. This talk will discuss these challenges and argue that performance modeling, combined with a more dynamic and adaptive style of programming, will be necessary for extreme-scale systems.
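As a rough illustration of the kind of analytic performance modeling Gropp advocates, here is a minimal C sketch (the operation counts and machine peaks are assumed round numbers, not figures from the talk) that bounds time to solution by the larger of compute time and data-movement time, roofline style:

```c
/* Hypothetical kernel characteristics and machine peaks; all four
 * numbers below are assumptions used only to show the model's shape. */
#include <stdio.h>

int main(void) {
    const double flops      = 2.0e12;  /* floating-point ops in the kernel (assumed) */
    const double bytes      = 4.0e11;  /* bytes moved to/from memory (assumed) */
    const double peak_flops = 1.0e12;  /* node peak, flop/s (assumed) */
    const double peak_bw    = 1.0e11;  /* memory bandwidth, bytes/s (assumed) */

    const double t_compute = flops / peak_flops;  /* lower bound if compute-limited */
    const double t_memory  = bytes / peak_bw;     /* lower bound if bandwidth-limited */

    /* Roofline-style bound: the kernel can run no faster than the larger
     * of its compute time and its data-movement time. Comparing this
     * bound with a measured run shows how much headroom remains. */
    const double t_model = (t_compute > t_memory) ? t_compute : t_memory;
    printf("compute bound: %.1f s, memory bound: %.1f s, model: %.1f s\n",
           t_compute, t_memory, t_model);
    return 0;
}
```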