
Video: Oclgrind – An Extensible OpenCL Device Simulator


“We describe Oclgrind, a platform designed to enable the creation of developer tools for analysis and debugging of OpenCL programs. Oclgrind simulates how OpenCL kernels execute with respect to the OpenCL standard, adhering to the execution and memory models that it defines. A simple plugin interface allows developer tools to observe the simulation and collect execution information to provide useful analysis, or catch bugs that would be otherwise difficult to spot when running the application on a real device. We give details about the implementation of the simulator, and describe how it can be extended with plugins that provide useful developer tools. We also present several example use-cases that have already been created using this platform, motivated by real-world problems that OpenCL developers face.”
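The plugin mechanism described in the abstract is essentially an observer pattern: the simulator steps through kernel execution and notifies registered plugins at each event, letting a tool such as a bounds checker catch bugs that a real device would silently tolerate. The sketch below illustrates that idea only; Oclgrind's actual plugin interface is C++, and every name here (`Simulator`, `Plugin`, `memory_access`) is a hypothetical stand-in, not the real API.

```python
# Illustrative sketch of the simulator/plugin observer pattern described
# above. All class and method names are hypothetical stand-ins; Oclgrind's
# real plugin interface is C++ and differs in detail.

class Plugin:
    """Base class: plugins override callbacks to observe the simulation."""
    def memory_access(self, address, size, is_write):
        pass

class BoundsChecker(Plugin):
    """Example tool: flags accesses outside a (hypothetical) buffer."""
    def __init__(self, buffer_size=1024):
        self.buffer_size = buffer_size
        self.errors = []

    def memory_access(self, address, size, is_write):
        if address + size > self.buffer_size:
            self.errors.append((address, size, is_write))

class Simulator:
    """Steps through (mock) kernel memory operations, notifying plugins."""
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def run(self, accesses):
        for address, size, is_write in accesses:
            for p in self.plugins:
                p.memory_access(address, size, is_write)

sim = Simulator()
checker = BoundsChecker(buffer_size=1024)
sim.register(checker)
# One in-bounds read and one out-of-bounds write:
sim.run([(0, 4, False), (1022, 4, True)])
print(len(checker.errors))  # 1: the out-of-bounds write was caught
```

The point of the design is that the simulator stays unchanged as new tools are added; each analysis lives entirely in a plugin that passively observes events.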

HPC Matters to Aerospace


In this video from the SC15 HPC Matters series, NASA Aerospace Engineer Dr. Shishir Pandya describes how high performance computing helps advance airplane and rocket technologies. “Why does high-performance computing matter? Because science matters! Discovery matters! Human beings are seekers, questers, questioners. And when we get answers, we ask bigger questions. HPC extends our reach, putting more knowledge, more discovery, and more innovation within our grasp. With HPC, the future is ours to create! HPC Matters!”

Video: Network Architecture Trends


Pavan Balaji from Argonne presented this talk at the Argonne Training Program on Extreme-Scale Computing. “More cores will drive the network, with more sharing of the network infrastructure. The aggregate amount of communication from each node will increase moderately, but will be divided into many smaller messages.”

Video: Beowulf Boot Camp Trains the Next Generation of HPC Experts


“This exciting course offers students and teachers a unique opportunity to work with advanced research technology not usually available in a typical classroom setting. Students will engage in the following activities:

- building a computer cluster from scratch;
- installing the Linux operating system on the computers they’ve built;
- connecting computers put together by their peers to make a mini-supercomputer;
- learning how to program the mini-supercomputer in parallel with Python;
- interactive activities that illustrate how parallel computing works in supercomputing;
- running performance benchmarks to see how their cluster compares with the fastest and largest supercomputers in the world.”
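A first parallel-Python exercise of the kind mentioned above can be as simple as splitting work across cores with the standard library. The sketch below is not from the course materials; it is a minimal example of the idea using `multiprocessing`.

```python
# Minimal parallel-Python sketch: a sum of squares split across worker
# processes with the standard library's multiprocessing module.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Distribute 0..999 across 4 workers, then combine the results.
        total = sum(pool.map(square, range(1000)))
    print(total)  # same answer as the serial sum, computed in parallel
```

On a cluster, the same divide-and-combine structure reappears at a larger scale, with MPI distributing work across nodes instead of processes on one machine.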

James Reinders Presents: Vectorization (SIMD) and Scaling (TBB and OpenMP)

James Reinders

James Reinders from Intel presented this talk at the Argonne Training Program on Extreme-Scale Computing. “We need to embrace explicit vectorization in our programming. But, generally use parallelism first (tasks, threads, MPI, etc.).”

Heterogeneous On-Demand Storage for HPC Workflows in the Cloud


Leo Reiter from Nimbix presented this talk at the HPC User Forum. “Unlike conventional commodity cloud platforms, JARVICE and the Nimbix Cloud are purpose-built to run any processing job at speed and scale. This means that as your problems get more complex, JARVICE simply expands to handle them.”

Asetek Continues Momentum with Largest Server Installation Order to Date


Today Asetek announced its largest purchase order to date for its RackCDU data center liquid cooling system. The order, placed by an undisclosed Original Equipment Manufacturer (OEM) partner, covers 21 RackCDU units with Direct-to-Chip cooling loops for an undisclosed OEM customer installation. Both the OEM and the end user will be named when the information becomes public.

Video: Prologue O/S – Improving the Odds of Job Success


“When looking to buy a used car, you kick the tires, make sure the radio works, check underneath for leaks, etc. You should be just as careful when deciding which nodes to use to run job scripts. At the NASA Advanced Supercomputing Facility (NAS), our prologue and epilogue have grown almost into an extension of the O/S to make sure resources that are nominally capable of running jobs are, in fact, able to run the jobs. This presentation describes the issues and solutions used by the NAS for this purpose.”
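A prologue of the kind described above is, at its core, a battery of node health checks that must all pass before the scheduler releases a job to the node. The sketch below illustrates the pattern only; the specific checks and thresholds (scratch space, load average) are assumptions for demonstration, not the actual NAS prologue logic.

```python
# Illustrative prologue-style node health check: verify a node is fit to
# run a job before work is released to it. The checks and thresholds here
# are examples, not the actual NAS prologue.
import os
import shutil

def node_is_healthy(scratch_path="/tmp",
                    min_free_bytes=1 << 30,   # require 1 GiB free scratch
                    max_load_per_cpu=2.0):
    """Return (ok, reasons): ok is False if any check fails."""
    reasons = []

    # Check free space on the scratch filesystem.
    free = shutil.disk_usage(scratch_path).free
    if free < min_free_bytes:
        reasons.append(f"low scratch space: {free} bytes free")

    # Check that the 1-minute load average is sane for the CPU count.
    load1, _, _ = os.getloadavg()
    cpus = os.cpu_count() or 1
    if load1 / cpus > max_load_per_cpu:
        reasons.append(f"load too high: {load1:.1f} on {cpus} CPUs")

    return (not reasons, reasons)

ok, reasons = node_is_healthy()
# A real prologue would exit nonzero here so the scheduler skips the node
# rather than letting the job fail on bad hardware.
print("node OK" if ok else "node rejected: " + "; ".join(reasons))
```

A production prologue accumulates many more such checks over time (mounts, GPUs, stray processes, leftover scratch files), which is how it "grows almost into an extension of the O/S" as the talk describes.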

Case Study: PBS Pro on a Large Scale Scientific GPU Cluster


Professor Taisuke Boku from the University of Tsukuba presented this talk at the PBS User Group. “We have been operating HA-PACS, a large-scale GPU cluster with 332 compute nodes and 1,328 GPUs, managed by the PBS Professional scheduler. Our users come from a wide variety of computational science fields, with resource requests ranging from a single node to full-scale parallel runs, and they fall into several categories of paid and free scientific projects. Operating such a large system while maintaining both high utilization and fairness across these user groups is challenging. We have successfully kept job utilization above 85%-90% under multiple constraints.”

Evolution of NASA Earth Science Data Systems in the Era of Big Data


Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from a variety of sources: satellites, aircraft, field measurements, and other programs.”