Job of the Week: Senior Linux System Administrator at Yale

Yale University is seeking a Sr. Linux System Administrator in our Job of the Week. “In this role, you will work as a Linux senior administrator in ITS Systems Administration. Provide leadership in Linux server administration, for mission-critical services in a dynamic, 24/7 production data center environment.”

Resource Management Across the Private/Public Cloud Divide

This is the final entry in an insideHPC series of features exploring new resource management solutions for workload convergence, such as Bright Cluster Manager from Bright Computing. This article highlights how resource management systems that can manage clusters on-premises or in the cloud greatly simplify cluster administration: administrators no longer need to learn different tools depending on whether a cluster lives in the company data center or in the cloud.

Take the Exascale Resilience Survey from AllScale Europe

The European Horizon 2020 AllScale project has launched a survey on exascale resilience. “As we approach ExaScale, compute node failure will become commonplace. @AllScaleEurope wants to know how #HPC software developers view fault tolerance today, & how they plan to incorporate fault tolerance in their software in the ExaScale era.”
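For readers new to the topic, application-level checkpoint/restart is one of the fault-tolerance strategies such surveys typically ask about. The minimal Python sketch below (file name, interval, and workload are illustrative assumptions, not anything prescribed by AllScale) shows the basic idea of periodically saving state so a job can resume after a node failure:

    import os, pickle

    CHECKPOINT = "state.pkl"   # hypothetical checkpoint file name
    INTERVAL = 100             # checkpoint every 100 steps (arbitrary)

    def load_state():
        # Resume from the last checkpoint if one exists, else start fresh.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "value": 0.0}

    def save_state(state):
        # Write to a temporary file first so a crash mid-write
        # cannot corrupt the previous checkpoint.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, CHECKPOINT)

    state = load_state()
    while state["step"] < 1000:
        state["value"] += 1.0          # stand-in for the real computation
        state["step"] += 1
        if state["step"] % INTERVAL == 0:
            save_state(state)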

Supercomputing Graphene Applications in Nanoscale Electronics

Researchers at North Carolina State University are using the Blue Waters Supercomputer to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing. “We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials,” said Professor Jerry Bernholc of North Carolina State University. “We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged like a chicken wire pattern. We are looking at which structures will function well, at a few atoms of width.”

Agenda Posted: OpenPOWER 2018 Summit in Las Vegas

The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. “The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparata ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced.”

Radio Free HPC Looks at Diverging Chip Architectures in the Wake of Spectre and Meltdown

In this podcast, the Radio Free HPC team looks at the tradeoff between chip performance and security. In the aftermath of the recently disclosed Spectre and Meltdown exploits, cryptography guru Paul Kocher from Rambus is calling for a divergence in processor architectures:

HACC: Fitting the Universe inside a Supercomputer

Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. “In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one.”
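HACC’s production solvers are far more sophisticated than anything that fits in a snippet, but the underlying problem is gravitational N-body evolution. The toy Python sketch below (direct O(N^2) summation with a kick-drift-kick leapfrog step; particle count, softening, and time step are arbitrary assumptions, not HACC settings) illustrates the kind of calculation that must scale to trillions of particles:

    import numpy as np

    N, G, EPS, DT = 256, 1.0, 1e-2, 1e-3   # toy values, not HACC parameters

    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, (N, 3))
    vel = np.zeros((N, 3))
    mass = np.ones(N) / N

    def accel(pos):
        # Direct O(N^2) gravity with Plummer softening; real cosmology codes
        # use tree and particle-mesh methods to reach trillions of particles.
        d = pos[None, :, :] - pos[:, None, :]          # pairwise separations
        r2 = (d ** 2).sum(-1) + EPS ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                  # no self-interaction
        return G * (d * inv_r3[:, :, None] * mass[None, :, None]).sum(axis=1)

    # Kick-drift-kick leapfrog time stepping
    a = accel(pos)
    for _ in range(100):
        vel += 0.5 * DT * a
        pos += DT * vel
        a = accel(pos)
        vel += 0.5 * DT * a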

Job of the Week: HPC Systems Engineer at Washington State University

The Center for Institutional Research Computing at Washington State University is seeking a High-Performance Computing Systems Engineer in our Job of the Week. “This position will play a vital role in the engineering and administration of HPC clusters used by the research community at Washington State University. This position is an exciting opportunity to participate in the frontiers of research computing through the selection, configuration, and management of HPC infrastructure including all computing systems, networking, and storage. This position is key to ensuring the high quality of service and performance of WSU’s research computing resources.”

Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind

In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company’s machine learning technology and some of the challenges ahead. “DeepMind Inc. is well known for state of the art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware as opposed to simple supervised learning. Finally I’d like to discuss opportunities for DRL to help systems design and operation.”
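For context, the “Q” in DQN refers to a learned action-value function updated against a bootstrapped target. The tabular Python sketch below shows that core update on a toy problem (sizes and hyperparameters are arbitrary assumptions; DeepMind’s systems replace the table with a neural network, a replay buffer, and a separate target network):

    import numpy as np

    N_STATES, N_ACTIONS = 10, 4          # toy problem sizes (assumptions)
    GAMMA, ALPHA = 0.99, 0.1             # discount factor and learning rate

    Q = np.zeros((N_STATES, N_ACTIONS))

    def q_update(s, a, r, s_next, done):
        # Bootstrapped target: reward plus the discounted value of the best
        # next action (the same idea DQN implements with a neural network).
        target = r if done else r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

    # Example transition: state 0, action 1, reward 1.0, next state 2
    q_update(0, 1, 1.0, 2, done=False)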

Adaptive Computing rolls out Moab HPC Suite 9.1.2

Today Adaptive Computing announced the release of Moab 9.1.2, an update which has undergone thousands of quality tests and includes scores of customer-requested enhancements. “Moab is a world leader in dynamically optimizing large-scale computing environments. It intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab’s unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery.”