Job of the Week: Senior Linux System Administrator at Yale

Yale University is seeking a Senior Linux System Administrator in our Job of the Week. “In this role, you will work as a senior Linux administrator in ITS Systems Administration, providing leadership in Linux server administration for mission-critical services in a dynamic, 24/7 production data center environment.”

Addressing Computing Challenges at CERN openlab

In this special guest feature from Scientific Computing World, Robert Roe speaks with Dr Maria Girone, Chief Technology Officer at CERN openlab, ahead of her keynote presentation at ISC High Performance. “The challenge of creating the largest particle accelerator is now complete, but there is another challenge – harnessing all of the data produced through experimentation. This challenge will become even greater when the ‘high-luminosity’ LHC experiments begin in 2026.”

Abstractions and Directives for Adapting Wavefront Algorithms to Future Architectures

Robert Searles from the University of Delaware gave this talk at PASC18. “Architectures are rapidly evolving, and exascale machines are expected to offer billion-way concurrency. We need to rethink algorithms, languages and programming models among other components in order to migrate large scale applications and explore parallelism on these machines. Although directive-based programming models allow programmers to worry less about programming and more about science, expressing complex parallel patterns in these models can be a daunting task, especially when the goal is to match the performance that the hardware platforms can offer.”
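For readers unfamiliar with the pattern, a wavefront computation updates each grid cell from already-computed neighbors, so the cells along each anti-diagonal are mutually independent. The minimal Python sketch below (our illustration, not code from the talk) shows the dependence structure; the inner loop over a diagonal is exactly the parallelism a directive-based model such as OpenACC would annotate.

    # Minimal sketch of a 2D wavefront sweep (illustrative only).
    # Cell (i, j) depends on (i-1, j) and (i, j-1), so all cells with
    # the same i + j are independent of one another.
    import numpy as np

    def wavefront_sweep(n):
        grid = np.zeros((n, n))
        grid[0, :] = 1.0  # boundary values seed the sweep
        grid[:, 0] = 1.0
        # Sweep anti-diagonals serially; cells within a diagonal
        # could run in parallel.
        for d in range(2, 2 * n - 1):
            for i in range(max(1, d - n + 1), min(d, n)):
                j = d - i
                grid[i, j] = 0.5 * (grid[i - 1, j] + grid[i, j - 1])
        return grid

    print(wavefront_sweep(5))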

Job of the Week: Research Scientist for HPC at Intel Labs

Intel in Silicon Valley is seeking a Research Scientist for HPC. “Intel Labs is seeking motivated researchers in the area of parallel and distributed computing research applied towards high performance computing and machine learning. This is a full-time position with the Parallel Computing Lab. The Parallel Computing Lab researches new algorithms, architectures, and approaches to address the most challenging compute- and data-intensive applications. We are focused on delivering new Intel software and hardware technologies that will transform the enterprise and technical computing experience. We work in close collaboration with leading academic and industry partners to accomplish our mission.”

IO500 List Showcases World’s Fastest Storage Systems for HPC

In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. “The IO500 benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. The list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data.”

From Weather Dwarfs to Kilometre-Scale Earth System Simulations

Nils P. Wedi from ECMWF gave this talk at PASC18. “The increasingly large amounts of data being produced by weather and climate simulations and earth system observations are sometimes characterised as a deluge. This deluge of data is both a challenge and an opportunity. The main opportunities are to use this wealth of data to 1) extract additional knowledge from the data and 2) improve the quality of the models themselves by analysing the accuracy, or lack thereof, of the resultant simulation data.”

NEC Accelerates Machine Learning with Vector Computing

In this video from ISC 2018, Takeo Hosomi from NEC describes how vector computing can accelerate Machine Learning workloads. “Machine learning is the key technology for data analytics and artificial intelligence. Recent progress in this field opens opportunities for a wide variety of new applications. Our department has been at the forefront of developments in such areas as deep learning, support vector machines and semantic analysis for over a decade. Many of our technologies have been integrated in innovative products and services of NEC.”
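The general principle is easy to demonstrate: the dense dot products at the heart of deep learning and support vector machines map naturally onto vector hardware. This toy Python/NumPy comparison (our illustration of vectorization in general, not NEC's software stack) contrasts a scalar loop with its vectorized equivalent.

    # Toy illustration of why vector processing helps ML kernels.
    # The vectorized dot product processes whole arrays with vector
    # instructions instead of one multiply-add at a time.
    import time
    import numpy as np

    x = np.random.rand(1_000_000).astype(np.float32)
    w = np.random.rand(1_000_000).astype(np.float32)

    # Scalar loop: one element per iteration
    t0 = time.perf_counter()
    acc = 0.0
    for i in range(len(x)):
        acc += x[i] * w[i]
    t1 = time.perf_counter()

    # Vectorized: the whole dot product in one call
    t2 = time.perf_counter()
    vec = np.dot(x, w)
    t3 = time.perf_counter()

    print(f"loop: {t1 - t0:.3f}s  vectorized: {t3 - t2:.5f}s")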

ECP Launches ExaLearn Co-Design Center

The DOE’s Exascale Computing Project has initiated a new Co-Design Center called ExaLearn. Led by Principal Investigator Francis J. Alexander from Brookhaven National Laboratory, ExaLearn is a co-design center for Exascale Machine Learning (ML) Technologies. “Our multi-laboratory team is very excited to have the opportunity to tackle some of the most important challenges in machine learning at the exascale,” Alexander said. “There is, of course, already a considerable investment by the private sector in machine learning. However, there is still much more to be done in order to enable advances in the very important scientific and national security work we do at the Department of Energy. I am very happy to lead this effort on behalf of our collaborative team.”

How DMTF and Redfish Ease System Administration

In this video from the Dell EMC HPC Community meeting, Alan Sill from Texas Tech University describes how DMTF and the Redfish project will ease system administration for HPC clusters. “DMTF’s Redfish is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). An open industry standard specification and schema, Redfish specifies a RESTful interface and utilizes defined JSON payloads – usable by existing client applications and browser-based GUI.”
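In practice, talking to a Redfish service is just HTTPS plus JSON. The Python sketch below (the endpoint address and credentials are placeholders, not details from the video) walks the service root and reports each system's power state.

    # Hedged sketch of a basic Redfish query. The BMC address and
    # credentials are hypothetical placeholders.
    import requests

    BMC = "https://bmc.example.com"       # hypothetical management controller
    session = requests.Session()
    session.auth = ("admin", "password")  # placeholder credentials
    session.verify = False                # lab BMCs often use self-signed certs

    # The service root at /redfish/v1/ lists the top-level collections
    root = session.get(f"{BMC}/redfish/v1/").json()

    # Walk the Systems collection and print each node's power state
    systems = session.get(f"{BMC}{root['Systems']['@odata.id']}").json()
    for member in systems["Members"]:
        node = session.get(f"{BMC}{member['@odata.id']}").json()
        print(node["Id"], node.get("PowerState"))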

Massive-Scale Analytics Applied to Real-World Problems

David Bader from Georgia Tech gave this talk at PASC18. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and the development of frameworks for solving these real-world problems on high performance computers, and the need for improved models that capture the noise and bias inherent in the torrential data streams.”
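The locality problem Bader describes is visible even at toy scale: a breadth-first traversal of a sparse adjacency list follows edges wherever the data leads, producing the scattered, data-dependent memory accesses that make these workloads hard to scale. A minimal Python sketch (our illustration, not code from the talk):

    # BFS over a sparse adjacency list. Neighbor lists are visited in an
    # order dictated by the graph, not by memory layout -- the source of
    # the poor locality described above.
    from collections import deque

    def bfs_component(adj, source):
        """Return the set of vertices reachable from source."""
        seen = {source}
        frontier = deque([source])
        while frontier:
            v = frontier.popleft()
            for u in adj[v]:  # scattered, data-dependent accesses
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
        return seen

    # Tiny social-network-style graph: two loosely connected communities
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
           3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(bfs_component(adj, 0))  # {0, 1, 2, 3, 4, 5}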

Intel and Micron to Disband 3D XPoint Memory Partnership

Micron and Intel have announced that their partnership to develop 3D XPoint memory will be disbanded over the next 12 months. “The partnership will be disbanded once the second generation of the technology has been completed next year. Technology development beyond the second generation of 3D XPoint technology will be pursued independently by the two companies.”