Advancing the Financial Services Industry Through Machine Learning

As financial institutions look to be empowered through machine learning, they should first acknowledge the benefits, challenges, and considerations involved. Download the new insideHPC guide that is essential reading for anyone involved in the financial services industry, from those who are beginning to explore the potential of machine learning, to those looking to expand and maximize its use. 

An Overview of AI in the HPC Landscape

The demand for performant and scalable AI solutions has stimulated a convergence of science, algorithm development, and affordable technologies to create a software ecosystem designed to support the data scientist. “It is very important to understand that time-to-model and the accuracy of the resulting model are really the only performance metrics that matter when training because the goal is to quickly develop a model that represents the training data with high accuracy.”

Video: SC17 Plenary on Smart Cities

We are very pleased to bring you this livestream of the SC17 Plenary session on Smart Cities. It starts right here Nov 13 at 5:30pm Mountain Time. “The Smart Cities initiative looks to improve the quality of life for residents using urban informatics and other technologies to improve the efficiency of services.”

Intel and the Coming AI Revolution

In this video from the Intel HPC Developer Conference, Gadi Singer from Intel describes how the company is moving forward with Artificial Intelligence. “We are deeply committed to unlocking the promise of AI: conducting research on neuromorphic computing, exploring new architectures and learning paradigms.”

Video: Applying AI to Science

In this video from the Intel HPC Developer Conference, Prabhat from NERSC describes how AI applies to science. “Looking ahead, Prabhat sees broad applications for deep learning in scientific research beyond climate science—especially in astronomy, cosmology, neuroscience, material science, and physics.”

Cray Helps Propel Arm Processors into HPC

“With the integration of Arm processors into our flagship Cray XC50 systems, we will offer our customers the world’s most flexible supercomputers,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Adding Arm processors complements our system’s ability to support a variety of host processors, and gives customers a unique, leadership-class supercomputer for compute, simulation, big data analytics, and deep learning. Our software engineers built the industry’s best Arm toolset to maximize customer value from the system, which is representative of the R&D work we do every day to build on our leadership position in supercomputing.”

All about Baselining: RedLine Explains HPC Performance Methodology

In HPC we talk a lot about performance, and vendors are constantly striving to increase the performance of their components, but who out there is making sure that customers get the performance they're paying for? According to its recently published ebook, a company called RedLine Performance Solutions has adopted that role with gusto.

insideHPC Special Report: AI-HPC is Happening Now

HPC and the data driven AI communities are converging as they are arguably running the same types of data and compute intensive workloads on HPC hardware, be it on a leadership class supercomputer, small institutional cluster, or in the cloud. Download the insideHPC Special Report, brought to you by Intel, to learn more about AI-HPC and how today’s businesses are using this technology.

Podcast: Optimizing Cosmos Code on Intel Xeon Phi

In this TACC podcast, Cosmos code developer Chris Fragile joins host Jorge Salazar for a discussion on how researchers are using supercomputers to simulate the inner workings of black holes. “For this simulation, the manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance. This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”

Video: 25 Years of Supercomputing at Oak Ridge

Since its early days, the OLCF has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of DOE—contributing to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system for open science, the science community’s first petaflop system, and two top-ranked machines on the TOP500 list. The next chapter in the OLCF’s legacy is set to begin with the deployment of Summit, a pre-exascale system capable of more than five times the performance of Titan.