Archives for September 2017

Common Myths Stalling Organizations From Cloud Adoption

Cloud adoption is accelerating rapidly, easing the burden of managing data-rich workloads for enterprises big and small. Yet common myths and misconceptions about the hybrid cloud are keeping enterprises from reaping the benefits. “In this article, we will debunk five of the most commonly believed myths that keep companies from strengthening their infrastructure with a hybrid approach.”

Nimbix Launches High Speed Cloud Storage for AI and Deep Learning

Today Nimbix announced the immediate availability of a new high-performance storage platform in the Nimbix Cloud specifically designed for the demands of artificial intelligence and deep learning applications and workflows. “As enterprises, researchers and startups begin to invest in GPU-accelerated artificial intelligence technologies and workflows, they are realizing that data is a big part of this challenge,” said Steve Hebert, CEO of Nimbix. “With the new storage platform, we are helping our customers achieve performance that breaks through the bottlenecks of commodity or traditional platforms and does so with a turnkey deep learning cloud offering.”

Cray Assimilates ClusterStor from Seagate

Today Cray announced it has completed the previously announced transaction and strategic partnership with Seagate centered around the addition of the ClusterStor high-performance storage business. “As a pioneer in providing large-scale storage systems for supercomputers, it’s fitting that Cray will take over the ClusterStor line.”

Kevin Barker to Lead CENATE Proving Ground for HPC Technologies

The CENATE Proving Ground for HPC Technologies at PNNL has named Kevin Barker as their new Director. “The goal of CENATE is to evaluate innovative and transformational technologies that will enable future DOE leadership class computing systems to accelerate scientific discovery,” said PNNL’s Laboratory Director Steven Ashby. “We will partner with major computing companies and leading researchers to co-design and test the leading-edge components and systems that will ultimately be used in future supercomputing platforms.”

IDEAS Program Fostering Better Software Development for Exascale

Scalability of scientific applications is a major focus of the Department of Energy’s Exascale Computing Project (ECP) and in that vein, a project known as IDEAS-ECP, or Interoperable Design of Extreme-scale Application Software, is also being scaled up to deliver insight on software development to the research community.

Computing Pioneer Gordon Bell to Present at SC17

Computing Pioneer Gordon Bell will share insights and inspiration at SC17 in Denver. “We are honored to have the legendary Gordon Bell speak at SC17,” said Conference Chair Bernd Mohr, from Germany’s Jülich Supercomputing Centre. “The prize he established has helped foster the rapid adoption of new paradigms, given recognition for specialized hardware, as well as rewarded the winners’ tremendous efforts and creativity – especially in maximizing the application of the ever-increasing capabilities of parallel computing systems. It has been a beacon for discovery and making the ‘might be possible’ an actual reality.”

Kathy Yelick Presents: Breakthrough Science at the Exascale

UC Berkeley professor Kathy Yelick presented this talk at the 2017 ACM Europe Conference. “Yelick’s keynote lecture focused on the exciting opportunities that High Performance Computing presents, the need for advances in algorithms and mathematics to keep pace with system performance, and how the variety of workloads will stress different aspects of exascale hardware and software systems.”

Radio Free HPC Looks at China’s 95 Petaflop Tianhe-2A Supercomputer

In this podcast, the Radio Free HPC team looks at China’s massive upgrade of the Tianhe-2A supercomputer to 95 Petaflops of peak performance. “As detailed in a new 21-page report by Jack Dongarra from the University of Tennessee, the upgrade should nearly double the performance of the system, which is currently ranked #2 on the TOP500 list.”

LANL Steps Up to HPC for Materials Program

“Understanding and predicting material performance under extreme environments is a foundational capability at Los Alamos,” said David Teter, Materials Science and Technology division leader at Los Alamos. “We are well suited to apply our extensive materials capabilities and our high-performance computing resources to industrial challenges in extreme environment materials, as this program will better help U.S. industry compete in a global market.”

Scaling Deep Learning Algorithms on Extreme Scale Architectures

Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. “Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputer sites such as Berkeley’s NERSC, Oak Ridge Leadership Class Facility, and PNNL Institutional Computing.”