Bright Computing adds more than 100 new customers in 2019

Commercial enterprises, research universities, and government agencies are turning to Bright Cluster Manager to reduce the complexity and increase the flexibility of their high-performance clusters. Along these lines, the company just announced the addition of more than 100 organizations to its client list in 2019, including AMD, Caterpillar, GlaxoSmithKline, Saab, Northrop Grumman, Trek Bicycles, Samsung, General Dynamics, Lockheed Martin and BAE, as well as 19 government agencies and 28 leading universities.

LBNL Breaks New Ground in Data Center Optimization

Berkeley Lab has been at the forefront of efforts to design, build, and optimize energy-efficient hyperscale data centers. “In the march to exascale computing, there are real questions about the hard limits you run up against in terms of energy consumption and cooling loads,” Elliott said. “NERSC is very interested in optimizing its facilities to be leaders in energy-efficient HPC.”

Job of the Week: HPC Systems Administrator at Washington State University

CIRC at Washington State University is seeking an HPC Systems Administrator in our Job of the Week. “Ideal candidates should have in-depth experience with the provisioning and administration of HPC clusters. Applicants who have experience with CentOS or RHEL, high-speed networking using Mellanox InfiniBand, resource schedulers such as Slurm, automation tools such as SaltStack, and parallel file systems including BeeGFS and Spectrum Scale are highly encouraged to apply.”
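
To make the posting's toolchain concrete, here is a minimal Slurm batch script of the kind such an administrator would support; this is a hedged sketch only, since the partition, module, and executable names are placeholders rather than details from the posting:

    #!/bin/bash
    #SBATCH --job-name=mpi-test          # name shown in squeue output
    #SBATCH --nodes=2                    # number of nodes to allocate
    #SBATCH --ntasks-per-node=16         # MPI ranks per node
    #SBATCH --time=01:00:00              # wall-clock limit (HH:MM:SS)
    #SBATCH --partition=compute          # placeholder partition name

    module load openmpi                  # module names vary by site
    srun ./hello_mpi                     # launch the MPI executable across all ranks

Submitted with sbatch job.sh, the script is queued by Slurm and dispatched once the requested nodes are free.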

Stanford Student Program Gives Supercomputers a Second Life

A novel program at Stanford is finding a second life for used HPC clusters, providing much-needed computational resources for research while giving undergraduate students a chance to learn valuable career skills. To learn more, we caught up with Dellarontay Readus from the Stanford High Performance Computing Center (HPCC).

Sylabs releases SingularityPRO 3.5

Today Sylabs announced SingularityPRO 3.5, the latest release of its popular container platform for HPC, supercomputing, and AI. “SingularityPRO 3.5, released January 21st, 2020, brings exciting new features to the long-term professionally supported version of the container platform. Based on the open source 3.5.2 release, SingularityPRO will receive security and bug fixes for 3 years, making it an ideal solution for the business-driven needs of enterprise customers containerizing their compute workloads.”
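
As a rough sketch of the workflow the platform supports, generic Singularity 3.x commands look like the following; the image URI here is illustrative, not taken from the announcement:

    # Pull a container image into a local SIF file (URI is illustrative)
    singularity pull lolcow.sif library://sylabsed/examples/lolcow

    # Execute a command inside the container, e.g. on a compute node
    singularity exec lolcow.sif cowsay "hello from a container"

Because the container is a single SIF file, it can be copied to a cluster's shared file system and run unprivileged under the site's resource scheduler.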

Univa Releases Navops Launch 2.0 for Migrating HPC & AI Workloads to the Cloud

Today Univa announced the general availability of Navops Launch 2.0, its flagship cloud-automation platform, designed to help enterprises simplify the migration of HPC and AI workloads to their choice of cloud. “With 9 out of 10 enterprises transitioning HPC workloads to the cloud, customers need proven solutions that simplify the migration of on-premise workloads to their choice of cloud,” said Rob Lalonde, Vice President and General Manager, Cloud, Univa.

Simula Research Lab to Manage Heterogeneous HPC Platform with Bright Computing

Today, Bright Computing announced that Simula Research Laboratory has chosen Bright Cluster Manager to manage its multi-architecture, multi-OS HPC environment. “After a careful evaluation, Simula chose Bright Cluster Manager to provide comprehensive management of eX³, enabling the organization to administer its HPC platform as a single entity, provisioning the hardware, operating systems, and workload managers from a unified interface. Further, the intuitive Bright management console will allow Simula to see and respond to what’s happening in their cluster anywhere, at any time.”

Job of the Week: HPC User Support Technician at NREL

NREL is seeking an HPC User Support Technician in our Job of the Week. “Your job will be front-line support for the High Performance Computing user community, which includes scientists, researchers, and students at many levels of education and experience.”

Podcast: Software Deployment and Continuous Integration for Exascale

In this Let’s Talk Exascale podcast, Ryan Adamson from Oak Ridge National Laboratory describes how his role at the Exascale Computing Project revolves around software deployment and continuous integration at DOE facilities. “Each of the scientific applications that we have depends on libraries and underlying vendor software,” Adamson said. “So managing dependencies and versions of all of these different components can be a nightmare.”
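
The podcast excerpt does not name a specific tool, but Spack, the package manager central to ECP's software delivery effort, is the usual answer to this dependency problem at DOE facilities. A minimal sketch, with package, version, and compiler choices that are purely illustrative:

    # Preview the fully resolved dependency tree before building
    spack spec hdf5@1.10.6 %gcc@9.3.0 +mpi

    # Build and install the package plus all of its dependencies
    spack install hdf5@1.10.6 %gcc@9.3.0 +mpi

    # Bring the installed package into the current environment
    spack load hdf5@1.10.6

Because every combination of version, compiler, and variant gets its own installation prefix, multiple configurations can coexist side by side, which is what makes the “nightmare” of conflicting versions manageable.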

Altair PBS Works Steps Up to Exascale and the Cloud

In this video from SC19, Sam Mahalingam from Altair describes how the company is enhancing PBS Works software to ease the migration of HPC workloads to the Cloud. “Argonne National Laboratory has teamed with Altair to implement a new scheduling system that will be employed on the Aurora supercomputer, slated for delivery in 2021. PBS Works runs big — 50,000 nodes in one cluster, 10,000,000 jobs in a queue, and 1,000 concurrent active users.”
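
For context on what running under PBS Works looks like, here is a minimal PBS Professional batch script; the resource counts, queue name, and executable are illustrative rather than drawn from the Argonne deployment:

    #!/bin/bash
    #PBS -N mpi-test                          # job name
    #PBS -l select=2:ncpus=36:mpiprocs=36     # two chunks of 36 cores (sizes illustrative)
    #PBS -l walltime=01:00:00                 # wall-clock limit
    #PBS -q workq                             # queue name varies by site

    cd $PBS_O_WORKDIR                         # start in the directory the job was submitted from
    mpiexec -n 72 ./hello_mpi                 # launch 72 MPI ranks (2 nodes x 36)

Submitted with qsub job.pbs, the job waits in the queue until the scheduler can place both chunks; this queueing machinery is what PBS Works scales to the node, job, and user counts quoted above.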