
SKA and CERN Sign Big Data Agreement

“The signature of this collaboration agreement between two of the largest producers of science data on the planet shows that we are really entering a new era of science worldwide”, said Prof. Philip Diamond, SKA Director-General. “Both CERN and SKA are and will be pushing the limits of what is possible technologically, and by working together and with industry, we are ensuring that we are ready to make the most of this upcoming data and computing surge.”

Interview: Dr. Christoph Schär on Escaping the Data Avalanche for Climate Modeling

“There are large efforts towards refining the horizontal resolution of climate models to O(1 km) with the intent to represent convective clouds explicitly rather than using semi-empirical parameterizations. This refinement would move the governing equations closer to first principles and is expected to reduce the uncertainties of climate models. However, the output volume of climate simulations would dramatically grow, and storing it for later analysis would likely become impractical, due to limited I/O bandwidth and mass-storage capacity. In this presentation we discuss possible solutions to this challenge.”
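To put the storage concern in rough numbers, here is a back-of-envelope sketch in C; the grid dimensions, variable count, and hourly output frequency are all assumed for illustration and are not taken from the interview.

```c
#include <stdio.h>

// Back-of-envelope sketch with entirely assumed numbers (grid size, variable
// count, output frequency) to illustrate why storing O(1 km) global output
// becomes impractical; none of these figures come from the presentation.
int main(void) {
    double nx = 36000, ny = 18000, nz = 100;   // ~1 km global grid (assumed)
    double vars = 10;                          // output variables (assumed)
    double bytes = 4;                          // single precision
    double steps_per_day = 24;                 // hourly output (assumed)

    double per_day = nx * ny * nz * vars * bytes * steps_per_day;
    printf("Approx. raw output: %.1f TB per simulated day\n", per_day / 1e12);
    return 0;
}
```

Even with these modest assumptions, a single simulated day produces tens of terabytes of raw output, and a multi-decade simulation scales that by thousands.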

Alan Turing Institute to Acquire Cray Urika-GX Graph Supercomputer

Today Cray announced that the company will provide a Cray Urika-GX system to the Alan Turing Institute. “The rise of data-intensive computing – where big data analytics, artificial intelligence, and supercomputing converge – has opened up a new domain of real-world, complex analytics applications, and the Cray Urika-GX gives our customers a powerful platform for solving this new class of data-intensive problems.”

Cycles Per Instruction – Why it matters

Because CPI is a ratio of cycles to instructions, comparing one version of a code region against another only tells you something if one of the two values is held roughly constant; otherwise it is impossible to know whether an optimization is working. If more CPU cycles are consumed but more instructions are also executed, the ratio can remain the same and mask the change. The goal is to lower the CPI in specific parts of the code as well as in the overall application.
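As a minimal sketch of that blind spot, the C snippet below compares two versions of a code region using hypothetical cycle and instruction counts (the kind of numbers a tool such as perf stat -e cycles,instructions reports); the values are invented for illustration.

```c
#include <stdio.h>

// Hypothetical counter values for two versions of the same code region.
int main(void) {
    double cycles_v1 = 1.20e9, instructions_v1 = 0.80e9;
    double cycles_v2 = 1.50e9, instructions_v2 = 1.00e9;

    double cpi_v1 = cycles_v1 / instructions_v1;  // 1.50
    double cpi_v2 = cycles_v2 / instructions_v2;  // 1.50

    // Both versions report the same CPI, yet version 2 burns 25% more cycles:
    // the ratio alone hides the regression unless cycles or instructions are
    // held roughly constant between the versions being compared.
    printf("CPI v1 = %.2f (%.2e cycles)\n", cpi_v1, cycles_v1);
    printf("CPI v2 = %.2f (%.2e cycles)\n", cpi_v2, cycles_v2);
    return 0;
}
```

Tracking raw cycle and instruction counts alongside the ratio is what makes the CPI comparison meaningful.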

Silicon Mechanics steps up with Intel Xeon Scalable Processors

Today Silicon Mechanics announced immediate availability of Intel’s new family of processors, the Intel Xeon Scalable platform, formerly code-named Purley. Intel’s newest processing platform features a selection of Intel Xeon processors designed to scale with a business as it grows, from an entry-level Bronze processor to the Intel Xeon Platinum processor for maximum performance, hardware-enhanced security, and advanced RAS (reliability, availability, and serviceability). “As a long-term Strategic OEM partner with Intel, we are excited to bring the Intel Xeon Scalable platform to our customers on day one,” said Silicon Mechanics Chief Marketing Officer Sue Lewis. “Our customers have been excited about the expected improvements in memory bandwidth and performance, and through our close-working partnership with Intel, we are ready to help them deploy systems based on the new processors now.”

New Intel Xeon Scalable Processors Boost HPC Performance

The new Intel Xeon Scalable Processors provide up to a 2x FLOPs/clock improvement with Intel AVX-512, as well as integrated Intel Omni-Path Architecture ports, delivering improved compute capability, I/O flexibility, and memory bandwidth to accelerate discovery and innovation.
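As a rough sketch of where that per-clock gain comes from, the example below uses AVX-512F intrinsics to issue one fused multiply-add across 16 single-precision lanes, twice the width of an AVX2 register. It assumes an AVX-512 capable CPU and a compiler flag such as -mavx512f, and is an illustration rather than anything drawn from Intel's benchmark figures.

```c
#include <immintrin.h>
#include <stdio.h>

// Minimal AVX-512F sketch: one fused multiply-add operates on 512-bit
// registers, i.e. 16 single-precision elements per instruction, versus
// 8 elements with 256-bit AVX2.
int main(void) {
    float a[16], b[16], c[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }

    __m512 va = _mm512_loadu_ps(a);
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_loadu_ps(c);
    __m512 vr = _mm512_fmadd_ps(va, vb, vc);  // out = a*b + c, 16 lanes at once
    _mm512_storeu_ps(out, vr);

    printf("out[15] = %f\n", out[15]);  // expect 15*2 + 1 = 31
    return 0;
}
```

Compiled with, for example, gcc -O2 -mavx512f, the single _mm512_fmadd_ps call does the work of two 256-bit FMAs, which is the kind of per-clock doubling the headline figure refers to.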

Developing a Software Stack for Exascale

In this special guest feature, Rajeev Thakur from Argonne describes why exascale would be a daunting software challenge even if we had the hardware today. “The scale makes it complicated. And we don’t have a system that large to test things on right now.” Indeed, no such system exists yet, the hardware is still changing, and the vendor, or vendors, that will build the first exascale systems have not yet been selected.

Penguin Computing Announces Transition to Intel Xeon Scalable Processors

Today Penguin Computing announced completion of the company’s major technology transition to the Intel Xeon Scalable platform for all Penguin Computing product lines. “Penguin Computing’s server solution offers an unrivaled array of compute and storage form factors, in standard 19” EIA, Open Compute and Tundra Extreme Scale platform,” said William Wu, Director of Product Management, Penguin Computing. “We are excited to introduce Intel Xeon Scalable platform based solutions into our versatile Relion and Tundra product lines to tackle today’s computing challenges. Organizations looking to deploy across Data Centers, Cloud Computing, hyper-scale HPC and Deep Learning will find Penguin Computing’s unique and expanding solutions to meet their needs.”

Video: Computational Discovery in the 21st Century

Nicola Marzari from EPFL gave this public lecture at PASC17. “The talk offers a perspective on the current state-of-the-art in the field, its power and limitations, and on the role and opportunities for novel models of doing computational science – leveraging big data or artificial intelligence – to conclude with some examples on how quantum simulations are accelerating our quest for novel materials and functionalities.”

Video: Flash Poster Session at PASC17

In this video from PASC17, Maria Grazia Giuffreda (ETH Zurich / CSCS, Switzerland) moderates a Flash Poster Session. “The aim of this session is to allow poster presenters to introduce the topic of their poster and motivate the audience to visit them at the evening poster session. Authors will be strictly limited to 40 seconds each – after this time the presentation will be stopped automatically.”