Are supercomputers practical for Deep Learning applications? Over at the Allinea Blog, Mark O’Connor writes that a recent experiment with machine learning optimization on the Archer supercomputer shows that relatively simple models run at sufficiently large scale can readily outperform more complex but less scalable models. “In the open science world, anyone running a HPC cluster can expect to see a surge in the number of people wanting to run deep learning workloads over the coming months.”
“Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years.”
In this podcast, the Radio Free HPC team previews the SC16 Student Cluster Competition. To get us primed up, Dan gives us his impressions of the 14 teams competing this year. That’s a record number! “The Student Cluster Competition was developed in 2007 to immerse undergraduate and high school students in HPC. Student teams design and build small clusters, with hardware and software vendor partners, learn designated scientific applications, apply optimization techniques for their chosen architectures, and compete in a non-stop, 48-hour challenge.”
In this video from the HPC Advisory Council Spain Conference, Addison Snell from Intersect360 Research looks back over the past 10 years of HPC and provides predictions for the next 10 years. Intersect360 Research just released their Worldwide HPC 2015 Total Market Model and 2016–2020 Forecast.
In this podcast, the Radio Free HPC team looks at the new OpenCAPI interconnect standard. “Released this week by the newly formed OpenCAPI Consortium, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and can significantly improve server performance.”
Will this be the year of artificial intelligence, when the technology comes into its own for mainstream business? There are big pushes for AI in manufacturing, agriculture, healthcare and many other industry sectors. But why now? Please share your insights in our Reader Survey.
In this podcast, the Radio Free HPC team looks at the issue of security for Augmented Reality and IoT. Now that every device in our lives is getting connected to the Internet, how will we be protected from attackers? Henry points out that even our medical devices are no longer safe.
Over at Cluster Monkey, Douglas Eadline writes that the “free lunch” performance boost of Moore’s Law may indeed be back with the 1024-core Epiphany-V chip that will hit the market in the next few months.
This may indeed be the year of artificial intelligence, when the technology comes into its own for mainstream businesses. “But will other companies understand if AI has value for them? Perhaps a better question is ‘Why now?’ This question centers on both the opportunity and why many companies are scared about missing out.”
In this podcast, the Radio Free HPC team discusses Henry Newman’s recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we are headed for massive data loss unless we act.