High Performance Computing GPU Accelerators in Deep Learning


Training the neural networks used in deep learning is an ideal task for GPUs because they can perform many calculations in parallel, dramatically reducing training time. More GPUs mean more computational power, so a system with multiple GPUs can process data far faster than a CPU-only system or one with a single GPU. One Stop Systems' High Density Compute Accelerator is the densest GPU expansion system to date.
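As a minimal CPU-only sketch of the idea (illustrative only, not specific to any vendor's hardware): the dominant operation in neural-network training is a batched matrix multiply, in which every output element is an independent dot product. That independence is exactly what a GPU exploits by evaluating thousands of these products concurrently.

```python
import numpy as np

# Training is dominated by matrix multiplies:
# activations (batch x n_in) times weights (n_in x n_out).
rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 256, 128
x = rng.standard_normal((batch, n_in))
w = rng.standard_normal((n_in, n_out))

# Serial view: compute each output element one at a time.
serial = np.empty((batch, n_out))
for i in range(batch):
    for j in range(n_out):
        serial[i, j] = np.dot(x[i], w[:, j])

# Parallel view: one batched multiply. A GPU evaluates the
# batch * n_out dot products above concurrently rather than in turn,
# which is why adding GPUs shortens training time.
parallel = x @ w

assert np.allclose(serial, parallel)
```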

Moab Powers Dynamic Resource Sharing at HPC4Health in Canada


Today Adaptive Computing announced that it has fully deployed Moab 8.1 at the HPC4Health consortium in Canada. “The folks at Adaptive Computing helped us create the technology to build a converged data center that dynamically shares resources securely and allows us to account for the workloads used by each organization involved in the HPC4Health venture.”

Cray Opens EMEA Research Lab in Bristol


Today Cray announced the creation of its Europe, Middle East and Africa (EMEA) Research Lab. The Cray EMEA Research Lab will foster deep technical collaborations with key customers and partners, and will serve as the focal point for the company's technical engagements with the European HPC ecosystem.

Call for Benchmark Proposals: SC16 Student Cluster Competition


SC16 has issued a Call for Proposals for a new initiative that aims to integrate aspects of past technical papers into the Student Cluster Competition.

Video: Prologue O/S – Improving the Odds of Job Success


“When looking to buy a used car, you kick the tires, make sure the radio works, check underneath for leaks, etc. You should be just as careful when deciding which nodes to use to run job scripts. At the NASA Advanced Supercomputing Facility (NAS), our prologue and epilogue have grown almost into an extension of the O/S to make sure resources that are nominally capable of running jobs are, in fact, able to run the jobs. This presentation describes the issues and solutions used by the NAS for this purpose.”
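The checks described above can be sketched in code. Production prologues are usually shell scripts, and the thresholds and paths below are hypothetical placeholders, but the pattern is the same: verify that a nominally available node can actually run the job, and return a nonzero status if it cannot.

```python
import os
import shutil

# Hypothetical thresholds; real sites tune these per machine.
MIN_FREE_SCRATCH = 10 * 1024**3   # bytes free in local scratch
MAX_LOAD_PER_CPU = 2.0            # tolerated leftover load per core

def node_is_healthy(scratch="/tmp"):
    """Run the kind of pre-job checks a prologue script performs."""
    # A nominally capable node with a full local disk will still
    # fail any job that stages data there.
    if shutil.disk_usage(scratch).free < MIN_FREE_SCRATCH:
        return False
    # High load on a node with no job assigned suggests a previous
    # job's processes never exited cleanly.
    load_1min, _, _ = os.getloadavg()
    if load_1min > MAX_LOAD_PER_CPU * os.cpu_count():
        return False
    return True

# Batch systems such as PBS treat a nonzero prologue exit status as
# a node problem, so the job can be requeued on healthy nodes.
status = 0 if node_is_healthy() else 1
```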

Case Study: PBS Pro on a Large Scale Scientific GPU Cluster


Professor Taisuke Boku from the University of Tsukuba presented this talk at the PBS User Group. “We have been operating HA-PACS, a large-scale GPU cluster with 332 compute nodes and 1,328 GPUs, managed by the PBS Professional scheduler. Our users span a wide variety of computational science fields, with resource requests ranging from a single node to full-scale parallel runs, and fall into several categories of paid and free scientific projects. Operating such a large system while maintaining a high utilization rate and fairness across these user groups is challenging. We have successfully sustained 85-90% job utilization under multiple constraints.”
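The utilization figure quoted above has a simple definition: node-hours consumed by jobs divided by node-hours available over the period. As a hedged sketch (the job mix below is invented, not HA-PACS accounting data):

```python
def job_utilization(jobs, total_nodes, period_hours):
    """Fraction of available node-hours consumed by completed jobs.

    jobs: iterable of (nodes_used, hours_run) per job.
    """
    used = sum(nodes * hours for nodes, hours in jobs)
    return used / (total_nodes * period_hours)

# Toy week on a hypothetical 332-node machine: one full-scale run
# plus a mix of single-node and mid-size jobs.
jobs = [(332, 24), (1, 100), (1, 80), (16, 120), (64, 48), (128, 30)]
print(f"{job_utilization(jobs, total_nodes=332, period_hours=168):.0%}")
```

Schedulers report this alongside per-group usage so that fairness across paid and free projects can be audited against each group's allotted share.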

10 Reasons HPC Marketing Differs from B2B


In this special guest feature, Kim McMahon from McMahon Consulting writes that, for High Performance Computing vendors, HPC Marketing is a completely different animal than B2B.

Evolution of NASA Earth Science Data Systems in the Era of Big Data


Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources—satellites, aircraft, field measurements, and various other programs.”

Video: DDN Infinite Memory Engine IME


Tommaso Cecchi from DDN presented this talk at the HPCAC Spain Conference. “IME unleashes a new I/O provisioning paradigm. This breakthrough software-defined storage application introduces a whole new tier of transparent, extendable, non-volatile memory (NVM) that provides game-changing latency reduction and greater bandwidth and IOPS performance for the next generation of performance-hungry scientific, analytic and big data applications – all while offering significantly greater economic and operational efficiency than the traditional disk-based and all-flash array storage approaches currently used to scale performance.”

ISC 2016 Issues Call for BoFs


ISC 2016 has issued its Call for BoFs. “Like-minded ISC High Performance conference attendees come together in our informal Birds-of-a-Feather (BoF) sessions to discuss current HPC topics, network and share their thoughts and ideas. Each 60-minute BoF session addresses a different topic and is led by one or more individuals with expertise in the area. The ISC 2016 BoF sessions will be held from Monday, June 20 through Wednesday, June 22.”