IBM Unveils Project DataWorks for AI-Powered Decision-Making

“We are at an inflection point in the big data era,” said Bob Picciano, senior vice president, IBM Analytics. “We know that users spend up to 80 percent of their time on data preparation, no matter the task, even when they are applying the most sophisticated AI. Project DataWorks helps transform this challenge by bringing together all data sources on one common platform, enabling users to get the data ready for insight and action, faster than ever before.”

Rogue Wave Improves Support for Open Source Software with IBM

Today Rogue Wave Software announced it is working with IBM to make open source software (OSS) support more widely available, providing comprehensive, enterprise-grade technical support for OSS packages. “With our ten-year history in open source, organizations can feel confident in our ability to resolve issues,” said Richard Sherrard, director of product management at Rogue Wave Software. “We have tier-3 and 4 enterprise architects that offer round-the-clock support for entire ecosystems. We are long-standing experts when it comes to OSS and proud to be working with IBM.”

ARM Releases CoreLink Interconnect

“The demands of cloud-based business models require service providers to pack more efficient computational capability into their infrastructure,” said Monika Biddulph, general manager, systems and software group, ARM. “Our new CoreLink system IP for SoCs, based on the ARMv8-A architecture, delivers the flexibility to seamlessly integrate heterogeneous computing and acceleration to achieve the best balance of compute density and workload optimization within fixed power and space constraints.”

Baidu Research Announces DeepBench Benchmark for Deep Learning

“Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance,” said Dr. Greg Diamos of Baidu Research. “The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications.”
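
DeepBench itself benchmarks vendor libraries such as cuBLAS and cuDNN across hardware platforms. As a rough, hypothetical sketch of the kind of measurement involved (the matrix sizes below are illustrative placeholders, not DeepBench's published configurations), one can time a single-precision GEMM with cuBLAS and report achieved GFLOPS:

// Hypothetical sketch: time one SGEMM with cuBLAS, in the spirit of a
// DeepBench-style measurement. Sizes are illustrative, not DeepBench's.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int m = 4096, n = 4096, k = 4096;
    float *A, *B, *C;
    cudaMalloc(&A, sizeof(float) * m * k);
    cudaMalloc(&B, sizeof(float) * k * n);
    cudaMalloc(&C, sizeof(float) * m * n);
    cudaMemset(A, 0, sizeof(float) * m * k);  // contents don't affect timing
    cudaMemset(B, 0, sizeof(float) * k * n);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    // Warm-up call so the timed run excludes one-time setup costs.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, A, m, B, k, &beta, C, m);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, A, m, B, k, &beta, C, m);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * m * n * k / (ms * 1e6);  // a GEMM is ~2*m*n*k flops
    printf("SGEMM %dx%dx%d: %.2f ms, %.1f GFLOPS\n", m, n, k, ms, gflops);

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Repeating such a measurement across kernels, problem sizes, and hardware platforms is essentially what a benchmark suite like DeepBench systematizes.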

Register Now for GPU Mini-Hackathon at ORNL Nov. 1-3

Oak Ridge National Laboratory is hosting a three-day GPU mini-hackathon led by experts from the OLCF and Nvidia. The event takes place Nov. 1-3 in Knoxville, Tennessee. “General-purpose Graphics Processing Units (GPGPUs) potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. This event will introduce you to GPU programming techniques.”
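
By way of a generic illustration only (not material from the OLCF event itself), the basic GPU programming model introduced at such hackathons looks like this minimal CUDA kernel, which assigns one array element to each GPU thread:

// Generic illustrative CUDA example: offload a vector add to the GPU.
// This is a sketch, not curriculum from the OLCF mini-hackathon.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit host/device copies are also common.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // enough 256-thread blocks to cover n
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Directive-based approaches such as OpenACC express the same offload with compiler annotations rather than hand-written kernels.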

Earlham Institute Tests Green HPC from Verne Global in Iceland

“As more organizations turn to high performance computing to process large data sets, demand is growing for scalable and secure data centre solutions. The source, availability, and reliability of power grid infrastructure are becoming critical factors in data centre site selection,” said Jeff Monroe, CEO at Verne Global. “Verne Global is able to deliver EI a forward-thinking path for growth with a solution that combines unparalleled cost savings with operational efficiencies to support their data-intensive research.”

University of Tokyo to Deploy IME14K Burst Buffer on Reedbush Supercomputer

Today DDN Japan announced that the University of Tokyo and the Joint Center for Advanced High Performance Computing (JCAHPC) have selected DDN’s burst buffer solution, IME14K, for their new Reedbush supercomputer. “Many problems in science and research today are located at the intersections of HPC and Big Data, and storage and I/O are increasingly important components of any large compute infrastructure.”

Radio Free HPC Looks for the Forever Data Format

In this podcast, the Radio Free HPC team discusses Henry Newman’s recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we seem headed for massive data loss unless we act.

SGI and ANSYS Achieve New World Record in HPC

Over at the ANSYS Blog, Tony DeVarco writes that the company worked with SGI to break a world record for HPC scalability. “Breaking last year’s 129,024-core record by more than 16,000 cores, SGI was able to run the ANSYS-provided 830-million-cell gas combustor model from 1,296 to 145,152 CPU cores. This reduces the total solver wall clock time to run a single simulation from 20 minutes on 1,296 cores to a mere 13 seconds on 145,152 cores, achieving an overall scaling efficiency of 83%.”
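
As a sanity check on those figures, scaling efficiency is the measured speedup divided by the ideal speedup implied by the increase in core count:

\[
E \;=\; \frac{T_{1296}/T_{145152}}{145152/1296} \;\approx\; \frac{1200\,\mathrm{s}/13\,\mathrm{s}}{112} \;\approx\; \frac{92.3}{112} \;\approx\; 0.82
\]

which agrees with the quoted 83% once the rounding of the 13-second figure is taken into account.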

Video: Using HPC to Build Clean Energy Technologies

Maria Chan of Argonne’s Nanoscience and Technology (NST) Division presented this talk at Argonne Out Loud. “People eagerly anticipate environmental benefits from advances in clean energy technologies, such as advanced batteries for electric cars and thin-film solar cells. Optimizing these technologies for peak performance requires an atomic-level understanding of the designer materials used to make them. But how is that achieved? Maria Chan will explain how computer modeling is used to investigate and even predict how materials behave and change, and how researchers use this information to help improve the materials’ performance. She will also discuss the open questions, challenges, and future strategies for using computation to advance energy materials.”