Archives for January 2013

Video: Programming GPUs with Python

In this video from PyData NYC 2012, Andreas Klöckner from New York University presents a brief introduction to GPU programming with Python, including run-time code generation and use of high-level tools like PyCUDA, PyOpenCL, and […]
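A minimal sketch of the run-time code generation idea the talk covers: because the kernel source is an ordinary Python string, it can be specialized (here, on the element type) before compilation. The `make_saxpy_source` helper is hypothetical and for illustration only; in PyCUDA the generated string would be handed to `pycuda.compiler.SourceModule`, which requires a CUDA-capable GPU.

```python
# Sketch of run-time CUDA code generation from Python -- the idea behind
# PyCUDA's SourceModule and ElementwiseKernel. `make_saxpy_source` is a
# hypothetical helper, not part of PyCUDA's API.

KERNEL_TEMPLATE = """
__global__ void saxpy(%(ctype)s a, const %(ctype)s *x, %(ctype)s *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
"""

def make_saxpy_source(ctype="float"):
    """Generate CUDA C kernel source specialized for one element type."""
    return KERNEL_TEMPLATE % {"ctype": ctype}

# On a machine with a CUDA GPU and PyCUDA installed, the generated string
# would be compiled at run time, e.g.:
#   from pycuda.compiler import SourceModule
#   mod = SourceModule(make_saxpy_source("double"))
#   saxpy = mod.get_function("saxpy")
print(make_saxpy_source("double"))
```

The point of generating source at run time rather than shipping precompiled kernels is that the Python layer can tune types, block sizes, or unrolling to the data actually in hand.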

Cineca Super Resets the Bar on Energy Efficiency with Nvidia GPUs

Today Nvidia announced that the hybrid Eurora supercomputer at Cineca in Italy has set a new record for data center energy efficiency using Kepler GPUs. Built by Eurotech, the hot-water-cooled Eurora system reached 3,150 megaflops per watt of sustained performance, which is 26 percent better than the top system on the most recent Green500 […]

NCSA Announces Initial Apps Running at Petascale

The Blue Waters supercomputer was originally billed as a “sustained petaflops supercomputer,” and this week NCSA announced four application codes that are already running at petascale. Four large-scale science applications (VPIC, PPM, QMCPACK, and SPECFEM3D_GLOBE) have sustained performance of 1 petaflop or more on the Blue Waters supercomputer, and the Weather Research & Forecasting (WRF) […]

Hengeveld: Looking Back at HPC in 2012, Looking Ahead to 2013

In this guest feature, Intel’s John Hengeveld reviews the past year and looks ahead to the industry challenges HPC is facing in 2013. Happy New Year Everybody! For me, 2012 was very exciting and very stressful. On the one hand I had family engagements, graduations, the launch of Intel® Xeon® E5 processors, the launch of Intel® […]

Adaptive Computing Powers Fastest and Most Efficient Supers on the Planet

This week Adaptive Computing announced that its workload management software powers the #1 systems on the TOP500 and Green500 lists. At #1 on the TOP500, the 17.59 Petaflop Titan supercomputer at Oak Ridge is a hybrid system powered by 299,008 AMD Opteron cores and 18,688 Tesla K20X GPUs. ORNL uses Adaptive Computing’s Moab HPC Suite […]

Using Supercomputers to Model Aortic Aneurysms

How will HPC power personalized medicine in the future? With help from XSEDE consulting and computing resources, researchers have developed finite-element computational protocols to assess the risk of aortic rupture for individual patients, and thereby to help guide decisions about surgical intervention. We have software to make computational models from medical images of individual […]

How MIT's StarCluster Powers Virtualization for Cloud HPC

Over at Admin HPC, Gavin W. Burris writes that virtualization has become a viable option for researchers with a need for cluster computing power thanks in part to StarCluster, MIT’s open-source toolkit for launching, controlling, and orchestrating clusters of virtual servers within the Amazon Elastic Compute Cloud (EC2) service. StarCluster provides a number of images […]

Virtual Site Visit: HPC at RWTH Aachen University

In this video, Georg Schramm and Dieter an Mey describe supercomputing facilities at RWTH Aachen University in Germany. They also discuss the advantages of the Intel Cluster Ready program.

ATK Aerospace Accelerates Simulations with Panasas

Today Panasas announced that ATK (Alliant Techsystems, Inc.) has standardized on Panasas ActiveStor to help power its demanding research and product performance simulation processes. ATK is the world’s top producer of rocket propulsion systems and a leading supplier of military and commercial aircraft structures. It is crucial that ATK engineers use state-of-the-art systems in […]

Mark Harris on Using Shared Memory in CUDA C/C++

Over at the Parallel Forall blog, Mark Harris writes that shared memory is a powerful feature for writing well-optimized CUDA code. Access to shared memory is much faster than global memory access because it is located on-chip. Because shared memory is shared by the threads in a thread block, it provides a mechanism […]
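To make the staging pattern concrete, here is a hedged sketch in the PyCUDA style used elsewhere on this page: a CUDA kernel (as a source string) that uses a `__shared__` array to reverse each thread block's elements in fast on-chip memory, plus a pure-Python reference of the same per-block reversal. The kernel string is illustrative and assumes the array length is a multiple of the block size; compiling and launching it requires a CUDA-capable GPU.

```python
# Illustrative CUDA kernel source using block-level shared memory.
# Assumes n is a multiple of blockDim.x (64 here) for brevity.
KERNEL_SRC = """
__global__ void reverse_block(int *d, int n)
{
    __shared__ int tile[64];            // one tile per block, on chip
    int t = threadIdx.x;
    int i = blockIdx.x * blockDim.x + t;
    if (i < n) tile[t] = d[i];
    __syncthreads();                    // make all loads visible block-wide
    if (i < n) d[i] = tile[blockDim.x - 1 - t];
}
"""

def reverse_blocks(data, block_size):
    """Pure-Python reference for what the kernel computes:
    reverse each block_size-sized chunk of the input in place."""
    out = []
    for start in range(0, len(data), block_size):
        out.extend(reversed(data[start:start + block_size]))
    return out

# The __syncthreads() barrier is the key step: every thread must finish
# writing its element into the shared tile before any thread reads a
# (possibly different) element back out.
print(reverse_blocks([1, 2, 3, 4, 5, 6], 3))
```

The same staging idea underlies tiled matrix transpose and block reductions: threads cooperatively load a tile into shared memory once, synchronize, then reuse it instead of re-reading slow global memory.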