
Video: The Legion Programming Model

“Developed by Stanford University, Legion is a data-centric programming model for writing high-performance applications for distributed heterogeneous architectures. Legion provides a common framework for implementing applications which can achieve portable performance across a range of architectures. The target class of users dictates that productivity in Legion will always be a second-class design constraint behind performance. Instead Legion is designed to be extensible and to support higher-level productivity languages and libraries.”
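For readers who have not seen Legion code, here is a minimal "hello world" sketch of a top-level task in Legion's C++ API, adapted from the project's public tutorials at legion.stanford.edu. The task name and registration details below are illustrative assumptions and may differ between Legion releases.

```cpp
// Minimal Legion sketch (untested): register a top-level task and start the runtime.
#include <cstdio>
#include "legion.h"

using namespace Legion;

enum TaskIDs { TOP_LEVEL_TASK_ID };

// Every Legion task receives its Task descriptor, the physical regions it was
// granted, a context, and a runtime handle.
void top_level_task(const Task *task,
                    const std::vector<PhysicalRegion> &regions,
                    Context ctx, Runtime *runtime)
{
  printf("Hello from a Legion task\n");
}

int main(int argc, char **argv)
{
  // Tell the runtime which task to launch first, register a CPU variant of it,
  // and hand control to Legion.
  Runtime::set_top_level_task_id(TOP_LEVEL_TASK_ID);
  {
    TaskVariantRegistrar registrar(TOP_LEVEL_TASK_ID, "top_level");
    registrar.add_constraint(ProcessorConstraint(Processor::LOC_PROC));
    Runtime::preregister_task_variant<top_level_task>(registrar, "top_level");
  }
  return Runtime::start(argc, argv);
}
```

Real applications build on this skeleton by declaring logical regions and launching sub-tasks with region requirements, which is how Legion's runtime extracts the data dependences it uses to schedule work across distributed, heterogeneous machines.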

All about Baselining: RedLine Explains HPC Performance Methodology

In HPC we talk a lot about performance, and vendors are constantly striving to increase the performance of their components, but who out there is making sure that customers get the performance that they’re paying for? Well, according to their recently published ebook, a company called RedLine Performance Solutions has adopted that role with gusto.

HPE Teams up with DDN for High Speed Storage

Today DDN announced it has entered into a partnership with global high-performance computing leader Hewlett Packard Enterprise. Under the agreement, HPE will integrate DDN’s parallel file system storage and flash storage cache technology with HPE’s HPC platforms. The focus of the partnership is to accelerate and simplify customers’ workflows in technical computing, artificial intelligence and machine learning environments. “With this partnership, two trusted leaders in the high-performance computing market have come together to deliver high value solutions as well as a wealth of technical field expertise for customers with data intensive needs,” said Paul Bloch, president, DDN.

Panasas Doubles Metadata Performance on ActiveStor Scaleout NAS

Today Panasas introduced the next generation of its ActiveStor scaleout NAS solution, capable of scaling capacity to 57PB and delivering 360GB/s of bandwidth. This flexible system doubles metadata performance to cut data access time in half, scales performance and capacity independently, and seamlessly adapts to new technology advancements.

Supermicro Powers Advanced Analytics at NASA NCCS

Today Supermicro announced that the company has partnered with the NASA Center for Climate Simulation (NCCS) to expand advanced computing and data analytics used to study the Earth, solar system and universe. Based on the combination of density, system performance and optimized cost, the Supermicro FatTwin-based solution brings an additional 1.56 PetaFlops to NASA researchers. The Rack Scale solution is factory integrated at Supermicro’s Silicon Valley headquarters to deliver optimal reliability and efficiency.

Video: 25 Years of Supercomputing at Oak Ridge

“Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.”

Cray Deploys Pair of Supercomputers in Canada for Weather Forecasting

Today Shared Services Canada (SSC) dedicated a pair of Cray supercomputers in Quebec. The new HPC systems will be used by Environment and Climate Change Canada (ECCC) to improve the accuracy and timeliness of weather warnings and forecasts. “Accurate and timely weather forecasting helps us protect our homes and businesses in the face of extreme storms and tornadoes, which are getting worse due to climate change. By supporting quality weather forecasts and warnings, the new High Performance Computers will help protect Canadians for years to come.”

Radio Free HPC Talks Optimization with RedLine Performance Solutions

In this podcast, the Radio Free HPC team discusses performance optimization with Carolyn Pasti and Don Avart from RedLine Performance Solutions. The company is partnering with Radio Free HPC on Project Cyclops, an effort to build the world’s fastest single node on the HPCG benchmark. Listen in as Don and Carolyn share their methodology for workload performance optimization and what it takes to make clusters really perform up to their potential in the real world.

Volunteers Ready High Speed SCinet for SC17

At SC17 in Denver, volunteers have already started the installation of SCinet, the high-capacity network that supports the revolutionary applications and experiments that are a hallmark of the SC conference. SCinet takes one year to plan, and those efforts culminate in a month-long period of staging, setup and operation of the network during the conference.

Agenda Posted for D-Wave Quantum Seminar & Livestream at SC17

D-Wave Systems will hold a Quantum Computing Seminar & Livestream from 2:00pm to 5:00pm on Monday, Nov. 13 in Denver. “We will discuss quantum computing, the D-Wave 2000Q system and software, the growing software ecosystem, an overview of some user projects, and how quantum computing can be applied to problems in optimization, machine learning, cyber security, and sampling.”