Podcast: Intel Invests Upstream to Accelerate AI Innovation

In this Intel Chip Chat, Doug Fisher from Intel describes the company’s efforts to accelerate innovation in artificial intelligence. “Fisher talks about Intel’s upstream investments in academia and open source communities. He also highlights efforts including the launch of the Intel Nervana AI Academy aimed at developers, data scientists, academia, and startups that will broaden participation in AI. Additionally, Fisher reports on Intel’s engagements with open source ecosystems to optimize the performance of the most-used AI frameworks on Intel architecture.”

Podcast: Deep Learning 101

In this AI Podcast, host Michael Copeland speaks with NVIDIA’s Will Ramey about the history behind today’s AI boom and the key concepts you need to know to get your head around a technology that’s reshaping the world. “AI has been described as ‘Thor’s Hammer’ and ‘the new electricity.’ But it’s also a bit of a mystery – even to those who know it best. We’ll connect with some of the world’s leading AI experts to explain how it works, how it’s evolving, and how it intersects with every facet of human endeavor.”

Podcast: Intel Doubles Down on Artificial Intelligence

In this Chip Chat podcast, Diane Bryant, EVP/GM for the Data Center Group at Intel, discusses how the company is driving the future of artificial intelligence by delivering breakthrough performance from best-in-class silicon, democratizing access to technology, and fostering beneficial uses of AI. Bryant also outlines her vision for AI’s ability to fundamentally transform the way businesses operate and people engage with the world. In a blog post, Intel CEO Brian Krzanich said: “Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society.”

Podcast: LLNL’s Lori Diachin Reviews the SC16 Technical Program

“I think the most important thing I’d like people to know about SC16 is that it is a great venue for bringing the entire community together, having these conversations about what we’re doing now, what the environment looks like now and what it’ll look like in five, ten, fifteen years. The fact that so many people come to this conference allows you to really see a lot of diversity in the technologies being pursued, in the kinds of applications that are being pursued – from both the U.S. environment and also the international environment. That’s the most exciting thing I think about when I think about supercomputing.”

Radio Free HPC Year End Review of 2016 Predictions

In this podcast, the Radio Free HPC team looks at how Shahin Khan fared with his OrionX 2016 Technology Issues and Predictions. “Here at OrionX.net, we are fortunate to work with tech leaders across several industries and geographies, serving markets in Mobile, Social, Cloud, and Big Data (including Analytics, Cognitive Computing, IoT, Machine Learning, Semantic Web, etc.), and focused on pretty much every part of the “stack”, from chips to apps and everything in between. Doing this for several years has given us a privileged perspective. We spent some time to discuss what we are seeing and to capture some of the trends in this blog.”

Podcast: Intel Facilitating New Workloads by Democratizing HPC

In this Intel Chip Chat, Dr. Figen Ulgen from Intel discusses artificial intelligence workloads that are emerging as a result of greater access to high performance computing. “Noting that ‘wherever there is computational complexity, HPC can help,’ Dr. Ulgen talks about the ways that technologies like voice recognition and natural language processing are growing more sophisticated as compute power increases. Dr. Ulgen additionally highlights Intel’s work with the OpenHPC-based Intel HPC Orchestrator, which promises to be an important step forward in making HPC more accessible to a broader array of customers.”

Radio Free HPC Looks at the Past and Future of the OS

In this podcast, the Radio Free HPC team looks at the future of Operating Systems in the new world of computing. In a world that seems to be moving to the cloud and microservices, what will happen to the monolithic OS we have come to know and love?

Podcast: Where Deep Learning Is Going Next

In this NVIDIA AI Podcast, Bryan Catanzaro, NVIDIA’s head of applied deep learning research, describes how machines with deep learning capabilities are now better at recognizing objects in images than humans. “AI gets better and better until it kind of disappears into the background,” says Catanzaro in conversation with host Michael Copeland on this week’s edition of the new AI Podcast. “Once you stop noticing that it’s there because it works so well — that’s when it’s really landed.”

Radio Free HPC Reviews the SC16 Student Cluster Competition Configurations & Results

In this podcast, the Radio Free HPC team reviews the results from the SC16 Student Cluster Competition. “This year, the advent of clusters with the new Nvidia Tesla P100 GPUs made a huge impact, nearly tripling the Linpack record for the competition. For the first time ever, the team that won top honors also won the award for achieving highest performance for the Linpack benchmark application. The team ‘SwanGeese’ is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery.”

Podcast: John McCalpin Surveys HPC System Memory Bandwidth

“In the long run, if you need orders of magnitude more bandwidth than is currently available, there’s a set of technologies that are sometimes referred to as processor in memory – I call it processor at memory – technologies that involve cheaper processors distributed out adjacent to the memory chips. The processors are cheaper, simpler, lower power. That could allow a significant reduction in the cost to build the systems, which allows you to build them a lot bigger and therefore deliver significantly higher memory bandwidth. That’s a very revolutionary change.”
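McCalpin is best known as the author of the STREAM benchmark, the de facto standard for measuring sustained memory bandwidth. As a rough illustration of the kind of measurement being discussed, here is a minimal sketch of a STREAM-style “triad” kernel in plain Python; the array size, scalar, and byte-counting convention are illustrative assumptions, not values from the podcast (and a pure-Python loop vastly understates what the C benchmark would report):

```python
import time

# Sketch of a STREAM-style "triad" kernel: a[i] = b[i] + s * c[i].
# Sizes and scalar below are illustrative assumptions only.
N = 1_000_000   # elements per array (hypothetical)
s = 3.0         # scalar multiplier

b = [1.0] * N
c = [2.0] * N

t0 = time.perf_counter()
a = [b[i] + s * c[i] for i in range(N)]
t1 = time.perf_counter()

# Each triad iteration touches 3 doubles (read b, read c, write a),
# i.e. 24 bytes, the convention STREAM uses to report bandwidth.
bytes_moved = 24 * N
bandwidth_gb_s = bytes_moved / (t1 - t0) / 1e9
print(f"approx. sustained bandwidth: {bandwidth_gb_s:.2f} GB/s")
```

The point of the kernel is that it is bandwidth-bound rather than compute-bound: each iteration does trivial arithmetic but streams three arrays through memory, which is exactly the traffic pattern processor-at-memory designs aim to serve more cheaply.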