Programming Many Tasks for Many Cores

“Tasks keep the CPUs busy. When a core is working, rather than waiting for work to be sent to it, the application progresses towards its conclusion. A caveat to all of this is to remember that tasks and threads remain on the system where they were created. Tasks that use a shared memory space only work within the shared memory segment that the processing cores can reach. Shared memory on the CPU side of the system is separate from the shared memory on the coprocessor. The threads created will remain on the part of the system where they started.”
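The tasking model described above maps naturally onto OpenMP tasks. The following is a minimal sketch (illustrative only, not taken from the article): one thread creates tasks inside a shared-memory parallel region, idle threads pick them up, and everything stays within the shared-memory space it was created in.

/* Minimal OpenMP tasking sketch: one thread generates tasks, and idle
 * threads in the same shared-memory space execute them.
 * Build (typical): gcc -fopenmp tasks.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel            /* spawn a team of threads on the host */
    {
        #pragma omp single          /* one thread creates the tasks */
        {
            for (int i = 0; i < 8; i++) {
                #pragma omp task firstprivate(i)
                printf("task %d run by thread %d\n", i, omp_get_thread_num());
            }
        }   /* implicit barrier: all outstanding tasks complete here */
    }
    return 0;
}

Compiled for the host, the tasks run on host cores; built for the coprocessor, they run there. As the excerpt notes, they do not migrate between the two shared-memory spaces.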

Video: Rolling Out the New Intel Xeon Phi Processor at ISC 2016

In this video from ISC 2016, Barry Davis from Intel describes the company’s brand new Intel Xeon Phi Processor and how it fits into the Intel Scalable System Framework. “Eliminate node bottlenecks, simplify your code modernization and build on a power-efficient architecture with the Intel Xeon Phi™ processor, a foundational element of Intel Scalable System Framework. The bootable host processor offers an integrated architecture for powerful, highly parallel performance that will pave your path to deeper insight, innovation and impact for today’s most demanding High Performance Computing applications, including Machine Learning. Supported by a comprehensive technology roadmap and robust ecosystem, the Intel Xeon Phi processor is a future-ready solution that maximizes your return on investment by using open-standards code that is flexible, portable and reusable.”

Interview: Dr. Eng Lim Goh on the Latest Trends in High Performance Data Analytics

In this video from ISC 2016, Dr. Eng Lim Goh from SGI discusses the latest trends in high performance data analytics and machine learning. “Dr. Eng Lim Goh joined SGI in 1989, becoming a chief engineer in 1998 and then chief technology officer in 2000. He oversees technical computing programs with the goal to develop the next generation computer architecture for the new many-core era. His current research interest is in the progression from data intensive computing to analytics, machine learning, artificial specific to general intelligence and autonomous systems. Since joining SGI, he has continued his studies in human perception for user interfaces and virtual and augmented reality.”

Ingram Micro Artizen Solutions for HPC

In this video from the GPU Technology Conference, Rick Young from Ingram Micro describes the company’s Artizen HPC solutions. “Available now to channel partners in the U.S., the distributor’s new and exclusive line of Artizen High Performance Computing (HPC) offerings include turnkey high performance servers, ultimate workstations, and customizable supercomputing clusters, as well as computing integration and software installation services.”

Monitoring and Management Interfaces for GPU Devices in a Cluster Environment

“This presentation will provide an overview of the Nvidia Tesla Deployment Kit (TDK) from a user and a system administrator point of view. TDK contains the Nvidia Management Library (NVML) and nvidia-healthmon, a tool for detecting and troubleshooting known GPU issues in a cluster environment. Usage models within a cluster environment will be presented along with a discussion on how existing resource management tools can be extended to improve allocation and accounting of GPU resources.”
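For readers unfamiliar with NVML, the sketch below shows the style of query it exposes. This is an illustrative health probe written for this summary, not code from the presentation, and the metrics a given site polls will differ.

/* Minimal NVML sketch: enumerate GPUs and report temperature and
 * utilization -- the kind of data a cluster health check might poll.
 * Build (paths may vary): gcc nvml_probe.c -lnvidia-ml */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int count, i;

    if (nvmlInit() != NVML_SUCCESS)       /* load and initialize the library */
        return 1;

    nvmlDeviceGetCount(&count);           /* number of GPUs visible to NVML */
    for (i = 0; i < count; i++) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int temp;
        nvmlUtilization_t util;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        nvmlDeviceGetUtilizationRates(dev, &util);
        printf("GPU %u: %s  %u C  %u%% busy\n", i, name, temp, util.gpu);
    }

    nvmlShutdown();
    return 0;
}

A resource manager can run this kind of probe, or nvidia-healthmon itself, in job prolog/epilog scripts to support the GPU allocation and accounting the presentation discusses.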

Video: ClusterStor Update

“HPC storage solutions and futures continue to evolve as growth and performance requirements permeate every HPC market segment. Torben discusses these challenges and how the company’s storage solutions are addressing these shifting needs with new developments around disk drives, RAID, CIFS, security, small file handling, and other related technologies.”

Managing the GPUs of Your Cluster in a Flexible Way with rCUDA

“In this talk, we introduce the rCUDA remote GPU virtualization framework, which has been shown to be the only one that supports the most recent CUDA versions, in addition to leveraging the InfiniBand interconnect for the sake of performance. Furthermore, we also present the latest developments within this framework, related to the use of low-power processors, enhanced job schedulers, and virtual machine environments.”
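rCUDA virtualizes GPUs by substituting the CUDA runtime library, so applications keep issuing ordinary CUDA calls. The fragment below is a plain CUDA runtime sequence in C, written here for illustration rather than taken from the talk; under rCUDA the same calls would be forwarded over the network (for example InfiniBand) to GPUs in remote servers.

/* Ordinary CUDA runtime API usage; with rCUDA the runtime library is
 * replaced, so these same calls can be served by GPUs on remote nodes.
 * Build (paths may vary): gcc demo.c -I/usr/local/cuda/include -lcudart */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);    /* local GPUs -- or remote ones under rCUDA */
    printf("visible GPUs: %d\n", ndev);

    if (ndev > 0) {
        float host_buf[256] = {0};
        float *dev_buf;
        cudaSetDevice(0);
        cudaMalloc((void **)&dev_buf, sizeof(host_buf));
        cudaMemcpy(dev_buf, host_buf, sizeof(host_buf),
                   cudaMemcpyHostToDevice);   /* data travels to the (possibly remote) GPU */
        cudaFree(dev_buf);
    }
    return 0;
}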

Troy Baer from NICS Wins Lifetime Achievement Adaptive Award

“Congratulations go out to Troy Baer, HPC system administrator at the National Institute for Computational Sciences (NICS), University of Tennessee. Troy Baer’s contributions in scheduling and resource management using Moab have helped Kraken—NICS’ flagship computing resource and the first academic computer to break the petaflop barrier—achieve outstanding 90-95% utilization rates since 2010. Baer’s administrative capabilities enable researchers in numerous scientific arenas, from climate to materials science to astrophysics, to achieve breakthroughs not yet possible on other resources. In November 2012, Baer helped NICS’ Beacon system secure a No. 1 ranking on the Green500 list of energy-efficient supercomputers.”

DK Panda Presents: Programming Models for Exascale Systems

“This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. Current status and future trends of MPI and PGAS (UPC and OpenSHMEM) programming models will be presented. We will discuss challenges in designing runtime environments for these programming models, taking into account support for multi-core processors, high-performance networks, GPGPUs, Intel MIC, scalable collectives (multi-core-aware, topology-aware, and power-aware), non-blocking collectives using the Offload framework, one-sided RMA operations, and schemes and architectures for fault tolerance and fault resilience.”
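As one concrete instance of the non-blocking collectives mentioned above, here is a minimal MPI-3 sketch (values and structure are illustrative, not from the talk): the allreduce is started, independent work can overlap with it, and the result is consumed only after the wait.

/* Minimal MPI-3 non-blocking collective: start an allreduce, overlap it
 * with local work, then wait for the result.
 * Build/run (typical): mpicc iallreduce.c && mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double local, global;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = (double)rank;                            /* each rank contributes its rank id */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);   /* returns immediately */

    /* ... independent computation could overlap with the collective here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);               /* result is valid only after the wait */
    if (rank == 0)
        printf("sum of ranks = %g\n", global);

    MPI_Finalize();
    return 0;
}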

High Performance Computing at CSCS

“With around 3.2 billion computer operations (3.2 gigaflops) per watt, the combination of GPUs and CPUs makes “Piz Daint” one of the world’s most energy-efficient supercomputers in the petaflop performance class.”