
Job of the Week: Senior HPC Administrator at DownUnder GeoSolutions

DownUnder GeoSolutions (America) is seeking a Senior HPC Administrator in our Job of the Week. “If you are passionate about leading-edge technology, you will love this role. What you will receive, along with a fantastic group of colleagues who will support and encourage you, is the chance to work on one of the largest private clusters in the world!”

Video: Customers Leverage HPE & Intel Alliance for HPC

“System developers and users face obstacles deploying complex new HPC technologies, such as energy efficiency, reliability and resiliency requirements, or developing software to exploit HPC hardware. All can delay HPC adoption. But the HPC Alliance will help. HPE and Intel will collaborate with you on workstream integration, solution sizing, and software-to-hardware integration. We will help, whether you seek ease of everything, want to mix workloads, or have fast-growing, increasingly complex systems.”

NeSI in New Zealand Installs Pair of Cray Supercomputers

New Zealand eScience Infrastructure (NeSI) is commissioning a new HPC system that will be colocated at two facilities. “The new systems provide a step change in power over NeSI’s existing services, and include a Cray XC50 supercomputer and a Cray CS400 cluster, both sharing the same high-performance and offline storage systems.”

Researchers Use TACC, SDSC and NASA Supercomputers to Forecast the Sun’s Corona

Predictive Sciences ran a large-scale simulation of the Sun’s surface in preparation for a prediction of what the solar corona will look like during the eclipse. “The solar eclipse allows us to see levels of the solar corona not possible even with the most powerful telescopes and spacecraft,” said Niall Gaffney, a former Hubble scientist and director of Data Intensive Computing at the Texas Advanced Computing Center. “It also gives high performance computing researchers who model high-energy plasmas the unique ability to test our understanding of magnetohydrodynamics at a scale and in an environment not possible anywhere else.”

Video: Deep Learning on Azure with GPUs

In this video, you’ll learn how to start submitting deep neural network (DNN) training jobs in Azure by using Azure Batch to schedule the jobs to your GPU compute clusters. “Previously, few people had access to the computing power for these scenarios. With Azure Batch, that power is available to you when you need it.”

SC17 Panel Preview: How Serious Are We About the Convergence Between HPC and Big Data?

SC17 will feature a panel discussion entitled How Serious Are We About the Convergence Between HPC and Big Data? “The possible convergence between the third and fourth paradigms confronts the scientific community with both a daunting challenge and a unique opportunity. The challenge resides in the requirement to support both heterogeneous workloads with the same hardware architecture. The opportunity lies in creating a common software stack to accommodate the requirements of scientific simulations and big data applications productively while maximizing performance and throughput.”

ORNL Readies Facility for 200 Petaflop Summit Supercomputer

Oak Ridge National Laboratory is moving equipment this month into a new high-performance computing center that is anticipated to become one of the world’s premier resources for open science computing. “There were a lot of considerations to be had when designing the facilities for Summit,” explained George Wellborn, Heery Project Architect. “We are essentially harnessing a small city’s worth of power into one room. We had to ensure the confined space was adaptable for the power and cooling needed to run this next-generation supercomputer.”

Video: HPC Powers Bioinformatics Research at Rockefeller University

In this video, researchers describe how the new HPC facility at Rockefeller University will power bioinformatics research and more. This is the first time that Rockefeller University has purpose-built a datacenter for high performance computing.

xDCI Infrastructure Manages 3D Brain Microscopy Images at RENCI

Researchers at RENCI are using xDCI Data CyberInfrastructure to manage brain microscopy images that were overwhelming the storage capacity at individual workstations. “BRAIN-I is a computational infrastructure for handling these huge images combined with a discovery environment where scientists can run applications and do their analysis,” explained Mike Conway, a senior data science researcher at RENCI. “BRAIN-I deals with big data and computation in a user-friendly way so scientists can concentrate on their science.”

Kathy Yelick to Keynote ACM Europe Conference

Kathy Yelick from LBNL will give the HPC keynote on exascale computing at the upcoming ACM Europe Conference. With main themes centering on cybersecurity and high performance computing, the event takes place Sept. 7-8 in Barcelona.