By using multiple grids and separating the problem's modes onto the various grids most efficiently, the researchers can get through their long line of calculations more quickly and easily. “GPUs provide a lot of memory bandwidth,” Clark said. “Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker.” In other words, memory bandwidth is like a roadway: having more lanes helps keep vehicles moving and lessens the potential for traffic backups.
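The memory-bound behavior Clark describes can be illustrated with a STREAM-style triad: the arithmetic per element is trivial, so runtime is dominated by how fast bytes move through memory, on a GPU just as on a CPU. A minimal CPU sketch (NumPy; the array size and bandwidth formula are illustrative assumptions, not taken from the article):

```python
import time
import numpy as np

def triad_bandwidth(n: int = 10_000_000) -> float:
    """Estimate effective memory bandwidth (GB/s) of a STREAM-style triad.

    a[i] = b[i] + s * c[i] performs one multiply and one add per element,
    so performance is limited by memory traffic, not arithmetic.
    """
    b = np.random.rand(n)
    c = np.random.rand(n)
    s = 3.0
    start = time.perf_counter()
    a = b + s * c  # memory-bound: reads b and c, writes a
    elapsed = time.perf_counter() - start
    # Three float64 arrays (8 bytes each) are streamed per element.
    return (3 * 8 * n) / elapsed / 1e9

print(f"triad bandwidth: {triad_bandwidth():.1f} GB/s")
```

Because almost no time is spent on arithmetic, the measured figure approaches the machine's memory bandwidth, which is why hardware with more bandwidth, such as a GPU, speeds such code up roughly in proportion.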
In this AI Podcast, host Michael Copeland speaks with NVIDIA’s Will Ramey about the history behind today’s AI boom and the key concepts you need to know to get your head around a technology that’s reshaping the world. “AI has been described as ‘Thor’s Hammer’ and ‘the new electricity.’ But it’s also a bit of a mystery – even to those who know it best. We’ll connect with some of the world’s leading AI experts to explain how it works, how it’s evolving, and how it intersects with every facet of human endeavor.”
In this video from the NVIDIA booth at SC16, Jonathan Symonds from MapD presents: How GPUs are Remaking Cloud Computing. “This video discusses how the price/performance characteristics of GPUs are changing the nature of cloud computing. The talk includes performance benchmarks on Google Cloud, Amazon Web Services, and IBM Softlayer, as well as a live demonstration.”
“The multidisciplinary research team and computational facilities –including MareNostrum– make BSC an international centre of excellence in e-Science. Since its establishment in 2005, BSC has developed an active role in fostering HPC in Spain and Europe as an essential tool for international competitiveness in science and engineering. The center manages the Red Española de Supercomputación (RES), and is a hosting member of the Partnership for Advanced Computing in Europe (PRACE) initiative.”
Today ORNL announced the full schedule of 2017 GPU Hackathons at multiple locations around the world. “The goal of each hackathon is for current or prospective user groups of large hybrid CPU-GPU systems to send teams of at least 3 developers along with either (1) a (potentially) scalable application that could benefit from GPU accelerators, or (2) an application running on accelerators that needs optimization. There will be intensive mentoring during this 5-day hands-on workshop, with the goal that the teams leave with applications running on GPUs, or at least with a clear roadmap of how to get there.”
Applications such as machine learning and deep learning require incredible compute power and are becoming more crucial to daily life every day. These applications help provide artificial intelligence for self-driving cars, climate prediction, and drugs that treat today’s worst diseases, along with solutions to more of our world’s most important challenges. There is a multitude of ways to increase compute power, but one of the easiest is to use the most powerful GPUs.
The Seventh International Workshop on Accelerators and Hybrid Exascale Systems (AsHES) has issued its Call for Papers. The event takes place May 29 in Orlando, Florida, in conjunction with the IEEE International Parallel and Distributed Processing Symposium.
Congratulations go out to Sunita Chandrasekaran, assistant professor of computer science at the University of Delaware, who has won the 2016 IEEE-CS TCHPC Award for Excellence for Early Career Researchers in High Performance Computing. “Chandrasekaran’s research interests include programming accelerators (GPUs), exploring the suitability of high-level parallel programming models such as OpenMP and OpenACC for current and future platforms, and validating and verifying emerging directive-based parallel programming models.”
“New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training. Along with the new hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and new, optimized deep learning frameworks on AMD’s ROCm software to build the foundation of the next evolution of machine intelligence workloads.”
“The competition is an opportunity to showcase the world’s brightest computer science students’ expertise in a friendly, yet spirited competition,” said Martin Meuer, managing director of the ISC Group. “We are very pleased to host these 12 compelling university teams from around the world. We look forward to this very engaging competition and wish the teams good luck.”