
Bright Computing adds more than 100 new customers in 2019

Commercial enterprises, research universities and government agencies are turning to Bright Cluster Manager to reduce the complexity and increase the flexibility of their high-performance clusters. Along these lines, the company just announced the addition of more than 100 organizations to its client list in 2019, including AMD, Caterpillar, GlaxoSmithKline, Saab, Northrop Grumman, Trek Bicycles, Samsung, General Dynamics, Lockheed Martin and BAE, as well as 19 government agencies and 28 leading universities.

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning

The Standard Performance Evaluation Corporation (SPEC) has finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies. “It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur has been approved by the members; it aims to provide users with a standard for evaluating machine learning computing performance.”

oneAPI: Single Programming Model to Deliver Cross-Architecture Performance

Bill Savage from Intel gave this talk at the Intel HPC Developer Conference. “Learn about oneAPI, the new Intel-led industry initiative to deliver a high-performance unified programming model specification spanning CPU, GPU, FPGA, and other specialized architectures. It includes the Data Parallel C++ cross-architecture language, a set of libraries, and a low-level hardware interface. Intel oneAPI Beta products are also available for developers who want to try out the programming model and influence its evolution.”

Swiss Conference & HPCXXL User Group Events Return to Lugano

The Swiss National Supercomputing Centre will host the 11th annual Swiss Conference and biannual HPCXXL Winter Meeting from April 6-9 in Lugano, Switzerland. “Explore the domains and disciplines driving change and progress at an unprecedented pace and join us at the Swiss HPC Conference. Gather with fellow colleagues, a recognizable lineup of industry giants, startups, technology innovators and renowned subject matter experts to share insights on the tools, techniques and technologies that are bringing private and public research communities and interests together and inspiring entirely new possibilities.”

CUDA-Python and RAPIDS for blazing fast scientific computing

Abe Stern from NVIDIA gave this talk at the ECSS Symposium. “We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started.”

UK to establish Northern Intensive Computing Environment (NICE)

The N8 Centre of Excellence in Computationally Intensive Research, N8 CIR, has been awarded £3.1m from the Engineering and Physical Sciences Research Council to establish a new Tier 2 computing facility in the north of England. This investment will be matched by £5.3m from the eight universities in the N8 Research Partnership, which will fund operational costs and dedicated research software engineering support. “The new facility, known as the Northern Intensive Computing Environment or NICE, will be housed at Durham University and co-located with the existing STFC DiRAC Memory Intensive National Supercomputing Facility. NICE will be based on the same technology that is used in current world-leading supercomputers and will extend the capability of accelerated computing. The technology has been chosen to combine experimental, modelling and machine learning approaches and to bring these specialist communities together to address new research challenges.”

Predictions for HPC in 2020

In this special guest feature from Scientific Computing World, Laurence Horrocks-Barlow from OCF predicts that containerization, cloud, and GPU-based workloads are all going to dominate the HPC environment in 2020. “Over the last year, we’ve seen a strong shift towards the use of cloud in HPC, particularly in the case of storage. Many research institutions are working towards a ‘cloud first’ policy, looking for cost savings in using the cloud rather than expanding their data centres with overheads such as cooling, data and cluster management, and certification requirements.”

Visualizing an Entire Brain at Nanoscale Resolution

In this video from SC19, Berkeley researchers visualize an entire brain at nanoscale resolution. The work was published in the journal Science. “At the core of the work is the combination of expansion microscopy and lattice light-sheet microscopy (ExLLSM) to capture large super-resolution image volumes of neural circuits using high-speed, nano-scale molecular microscopy.”

Second GPU Cloudburst Experiment Paves the Way for Large-scale Cloud Computing

Researchers at SDSC and the Wisconsin IceCube Particle Astrophysics Center have successfully completed a second computational experiment using thousands of GPUs across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. “We drew several key conclusions from this second demonstration,” said SDSC’s Sfiligoi. “We showed that the cloudburst run can actually be sustained during an entire workday instead of just one or two hours, and have moreover measured the cost of using only the two most cost-effective cloud instances for each cloud provider.”

Distributed HPC Applications with Unprivileged Containers

Felix Abecassis and Jonathan Calmels from NVIDIA gave this talk at FOSDEM 2020. “We will present the challenges in doing distributed deep learning training at scale on shared heterogeneous infrastructure. At NVIDIA, we use containers extensively in our GPU clusters for both HPC and deep learning applications. We love containers for how they simplify software packaging and enable reproducibility without sacrificing performance.”