GE Research Leverages World’s Top Supercomputer to Boost Jet Engine Efficiency

GE Research has been awarded access to the world’s #1-ranked supercomputer to discover new ways to optimize the efficiency of jet engines and power generation equipment. Michal Osusky, the project’s leader from GE Research’s Thermosciences group, says access to the machine and its support team at the Oak Ridge Leadership Computing Facility (OLCF) will greatly accelerate insights into turbomachinery design improvements that lead to more efficient jet engines and power generation assets: “We’re able to conduct experiments at unprecedented levels of speed, depth and specificity that allow us to perceive previously unobservable phenomena in how complex industrial systems operate. Through these studies, we hope to innovate new designs that enable us to propel the state of the art in turbomachinery efficiency and performance.”

Intel Horse Ridge Chip Addresses Key Barriers to Quantum Scalability

At the International Solid-State Circuits Conference this week, Intel presented a research paper detailing the design and experimental results of its new Horse Ridge cryogenic quantum computing control chip. According to the company’s announcement, “Building fault-tolerant, commercial-scale quantum computers requires a scalable architecture for both qubits and control electronics. Horse Ridge is a highly integrated System-on-a-Chip (SoC) that provides an elegant solution to enable control of multiple qubits with high fidelity—a major milestone on the path to quantum practicality.”

AMD EPYC Cloud Adoption Grows with Google Cloud

Today Google Cloud announced the beta availability of N2D VMs on Google Compute Engine powered by 2nd Gen AMD EPYC processors. The N2D family of VMs is a great option for customers running general-purpose and high-performance workloads that require a balance of compute and memory. “AMD and Google have worked together closely on these initial VMs to help ensure Google Cloud customers have a high-performance and cost-effective experience across a variety of workloads, and we will continue to work together to provide that experience this year and beyond.”
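
For readers who want to try the new instances, the short Python sketch below requests an N2D machine type through the Compute Engine v1 API. This is a minimal illustration, not part of the announcement: the project ID, zone, instance name, and boot image are placeholder assumptions, and the call assumes Application Default Credentials are already configured.

import googleapiclient.discovery

# Build a Compute Engine API client (uses Application Default Credentials).
compute = googleapiclient.discovery.build("compute", "v1")

# Minimal instance definition selecting the new AMD EPYC-based N2D family.
config = {
    "name": "n2d-demo",  # placeholder instance name
    "machineType": "zones/us-central1-a/machineTypes/n2d-standard-2",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-10"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

# insert() returns a zone operation that can be polled for completion.
operation = compute.instances().insert(
    project="my-project",  # placeholder project ID
    zone="us-central1-a",
    body=config,
).execute()
print(operation["name"])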

New Servers from Dell Technologies analyze data wherever it resides

Today Dell Technologies announced new solutions to help customers analyze data at the edge, outside of a traditional data center. With a host of new offerings—including new edge server designs, smaller modular data centers, enhanced telemetry management and a streaming analytics engine—customers are better positioned to realize the value of their data wherever it resides. “As we enter the next Data Decade, the challenge moves from keeping pace with volumes of data to gaining valuable insights from the many types of data and touchpoints across various edge locations to core data centers and public clouds,” said Jeff Boudreau, president, Infrastructure Solutions Group, Dell Technologies. “We offer a portfolio that’s engineered to help customers address the constraints of edge operations and deliver analytics for greater business insights wherever their edge may be.”

UK to establish Northern Intensive Computing Environment (NICE)

The N8 Centre of Excellence in Computationally Intensive Research, N8 CIR, has been awarded £3.1m from the Engineering and Physical Sciences Research Council (EPSRC) to establish a new Tier 2 computing facility in the north of England. This investment will be matched by £5.3m from the eight universities in the N8 Research Partnership, which will fund operational costs and dedicated research software engineering support. “The new facility, known as the Northern Intensive Computing Environment or NICE, will be housed at Durham University and co-located with the existing STFC DiRAC Memory Intensive National Supercomputing Facility. NICE will be based on the same technology that is used in current world-leading supercomputers and will extend the capability of accelerated computing. The technology has been chosen to combine experimental, modelling and machine learning approaches and to bring these specialist communities together to address new research challenges.”

AMD Powers CARA Supercomputer from NEC in Dresden

The DLR German Aerospace Center dedicated its new CARA supercomputer in Dresden on February 5, 2020. With 1.746 Petaflops of performance on the Linpack benchmark, the AMD-powered system from NEC is currently rated #221 on the TOP500. “With its almost 150,000 computing cores, CARA is one of the most powerful supercomputers available internationally for aerospace research,” said Prof. Markus Henke from TU Dresden.

UK to invest £1.2 billion for Supercomputing Weather and Climate Science

Today the UK announced plans to invest £1.2 billion in the world’s most powerful weather and climate supercomputer. The government investment will replace Met Office supercomputing capabilities over a 10-year period from 2022 to 2032. The current Met Office Cray supercomputers reach their end of life in late 2022. The first phase of the new supercomputer will alone increase the Met Office’s computing capacity six-fold.

Isambard 2 at UK Met Office to be largest Arm supercomputer in Europe

The UK Met Office has been awarded £4.1m by EPSRC to create Isambard 2, the largest Arm-based supercomputer in Europe. The powerful new £6.5m facility, to be hosted by the Met Office in Exeter and utilized by the universities of Bath, Bristol, Cardiff and Exeter, will double the size of GW4 Isambard to 21,504 high-performance cores and 336 nodes. “Isambard 2 will incorporate the latest novel technologies from HPE and new partner Fujitsu, including next-generation Arm CPUs in one of the world’s first A64fx machines from Cray.”

Predictions for HPC in 2020

In this special guest feature from Scientific Computing World, Laurence Horrocks-Barlow from OCF predicts that containerization, cloud, and GPU-based workloads are all going to dominate the HPC environment in 2020. “Over the last year, we’ve seen a strong shift towards the use of cloud in HPC, particularly in the case of storage. Many research institutions are working towards a ‘cloud first’ policy, looking for cost savings in using the cloud rather than expanding their data centres with overheads, such as cooling, data and cluster management and certification requirements.”

Video: Overview of HPC Interconnects

Ken Raffenetti from Argonne gave this talk at ATPESC 2019. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”
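
A classic way to get a hands-on feel for the interconnect behavior the talk surveys is a two-rank ping-pong latency probe. The sketch below is not from the talk itself; it is an illustrative assumption using mpi4py that times message round trips between two MPI ranks.

# Minimal MPI ping-pong between two ranks, a standard probe of interconnect
# latency. Illustrative sketch using mpi4py; run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

payload = bytearray(1024)  # 1 KiB message buffer
reps = 1000

comm.Barrier()             # synchronize both ranks before timing
start = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(payload, dest=1, tag=0)
        comm.Recv(payload, source=1, tag=0)
    elif rank == 1:
        comm.Recv(payload, source=0, tag=0)
        comm.Send(payload, dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    # Each repetition is one round trip; half of that approximates one-way latency.
    print(f"~{elapsed / reps / 2 * 1e6:.2f} microseconds one-way for 1 KiB")

Halving the measured round-trip time is the usual approximation of one-way latency, and sweeping the payload size upward maps out the interconnect’s bandwidth curve.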