Video: Argonne Presents HPC Plans at ISC 2015

“In April 2015, the U.S. Department of Energy announced a $200 million supercomputing investment coming to Argonne National Laboratory. As the third of three CORAL supercomputer procurements, the deal will comprise an 8.5 Petaflop “Theta” system based on Knights Landing in 2016 and a much larger 180 Petaflop “Aurora” supercomputer in 2018. Intel will be the prime contractor on the deal, with subcontractor Cray building the actual supercomputers.”

Intel’s Raj Hazra on the Convergence of HPC & Big Data at ISC 2015

In this video from ISC 2015, Intel’s Raj Hazra explores how new innovations and Intel’s Scalable System Framework approach can maximize the potential in the new HPC era. Raj also shares details of upcoming Intel technologies, products and ecosystem collaborations that are powering these breakthroughs and ensuring technical computing continues to fulfill its potential as a scientific and industrial tool for discovery and innovation.

Video: CSCS Focuses on Sustainability at ISC 2015

In this video from ISC 2015, Michele De Lorenzi sits down with Rich Brueckner from insideHPC to discuss the latest updates from the conference and how the CSCS booth is constructed to reflect Switzerland’s focus on sustainability.

HPC Market Update from IDC

In this video from ISC 2015, Earl Joseph and Bob Sorensen from IDC provide an HPC market update. Additional topics include the HPC market in Europe, the upcoming HPC User Forum, and an HPDA update on where Big Data meets HPC.

Intel and Micron Announce 3D XPoint Non-Volatile Memory

3D XPoint technology is up to 1,000x faster than NAND, and an individual die can store 128 Gb of data.

Today Intel Corporation and Micron Technology unveiled 3D XPoint technology, a non-volatile memory that has the potential to revolutionize any device, application or service that benefits from fast access to large sets of data. Now in production, 3D XPoint technology is a major breakthrough in memory process technology and the first new memory category since the introduction of NAND flash in 1989.

Video: TotalView Parallelizes Code at ISC 2015

“TotalView breaks down barriers to understanding what’s going on with your HPC and supercomputing applications. Purpose-built for multicore and parallel computing, TotalView provides a set of tools offering unprecedented control over process and thread execution, along with deep visibility into program states and data.”

HP Doubles Down on HPC & Big Data at ISC 2015

Scott Misage, HP

In this video from ISC 2015, Scott Misage from HP describes the company’s new strategy for High Performance Computing. With a new business unit centered on HPC and Big Data, the company is poised to grow its market leadership with purpose-built systems like the liquid-cooled Apollo 8000.

Slidecast: IBM High Performance Services for Technical Computing in the Cloud

In this slidecast, Chris Porter and Jeff Kamiol from IBM describe how IBM High Performance Services deliver versatile, application-ready clusters in the cloud for organizations that need to quickly and economically add computing capacity for high performance application workloads.

Video: Data Storage Infrastructure at Cyfronet

“Cyfronet recently celebrated the launch of Poland’s fastest supercomputer. As the world’s largest deployment of the HP Apollo 8000 platform, the 1.68 Petaflop Prometheus system is powered by 41,472 Intel Haswell cores, 216 Terabytes of memory, and 10 Petabytes of storage.”
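
As a rough sanity check on those figures, the quoted 1.68 Petaflop peak is consistent with the core count if one assumes roughly 2.5 GHz Haswell cores executing 16 double-precision floating-point operations per cycle via AVX2 FMA. Both the clock rate and the per-cycle figure below are assumptions for illustration, not published Prometheus specifications:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed values for illustration only: ~2.5 GHz and 16 DP
           FLOPs/cycle are typical Haswell figures, not published
           Prometheus specifications. */
        const double cores           = 41472.0;
        const double clock_hz        = 2.5e9;
        const double flops_per_cycle = 16.0;

        double peak_pflops = cores * clock_hz * flops_per_cycle / 1e15;
        printf("Estimated peak: %.2f PFLOPS\n", peak_pflops); /* ~1.66 */
        return 0;
    }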

Numascale Powers Big Data Analytics with Transtec

“At ISC 2015, Numascale announced record-breaking results from a shared memory system running the McCalpin STREAM Benchmark, a synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels. Numascale’s cache-coherent shared memory system, which is targeted at big data analytics, reached 10.06 TBytes/second for the Scale function.”
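
For context, the Scale function in McCalpin’s STREAM benchmark is the simple vector kernel a[i] = scalar * b[i], and the reported bandwidth counts one read and one write per element. The following is a minimal single-threaded sketch of that measurement; the array length and scalar are arbitrary choices for illustration, and the official benchmark additionally parallelizes the loop (typically with OpenMP) and repeats it several times:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 20000000L  /* arbitrary length; should well exceed cache sizes */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        const double scalar = 3.0;
        if (!a || !b)
            return 1;

        for (long i = 0; i < N; i++)
            b[i] = 1.0;

        /* STREAM "Scale" kernel: a[i] = scalar * b[i] */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            a[i] = scalar * b[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        /* Scale moves 2 * N doubles: one read of b[i], one write of a[i] */
        double gbytes = 2.0 * N * sizeof(double) / 1e9;

        /* Checksum keeps the compiler from discarding the timed loop */
        double sum = 0.0;
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("Scale bandwidth: %.2f GB/s (checksum %.1f)\n",
               gbytes / seconds, sum);
        free(a);
        free(b);
        return 0;
    }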