HPC and Beer have always had a certain affinity ever since the days when Cray Research would include a case of Leinenkugel’s with every supercomputer. Now, Brian Caulfield from Nvidia writes that a Pennsylvania startup is using GPUs and Deep Learning technologies to enable brewers to make better beer.
“I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing and nanotechnology to build and demonstrate a brain-inspired computer and describe the architecture, programming model and applications. I also will describe future efforts in collaboration with DOE to build, literally, a ‘brain-in-a-box’. The work was built on simulations conducted on Lawrence Livermore National Laboratory’s Dawn and Sequoia HPC systems in collaboration with Lawrence Berkeley National Laboratory.”
The Distributed European Computing Initiative (DECI) has issued its 13th Call for Proposals for HPC Compute Resources. “Administered by PRACE, DECI enables European researchers to obtain access to the most powerful national (Tier-1) computing resources in Europe regardless of their country of origin or employment and to enhance the impact of European science and technology at the highest level.”
In this video from the SF Big Analytics Meetup, Bryan Catanzaro from Baidu presents: Why is HPC so important to AI? “We built Deep Speech because we saw the opportunity to re-conceive speech recognition in light of the new capabilities afforded by Deep Learning, to take advantage of even larger datasets to solve even harder problems.”
In this video from the 2015 OLCF User Meeting, Buddy Bland from Oak Ridge presents: Present and Future Leadership Computers at OLCF. “As the home of Titan, the fastest supercomputer in the USA, OLCF has an exciting future ahead with the 2017 deployment of the Summit supercomputer. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017.”
Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”
“Within the next 12 months, China expects to be operating not one but two 100 Petaflop computers, each containing (different) Chinese-made processors, and both coming on stream about a year before the United States’ 100 Petaflop machines being developed under the CORAL initiative. Ironically, the CPU for one machine appears very similar to a technology abandoned by the USA in 2007, and the US Government, through its export embargo, has encouraged China to develop its own accelerator for the other machine.”
“OpenCL is a fairly new programming model that is designed to help programmers get the most out of a variety of processing elements in heterogeneous environments. Many available benchmarks have demonstrated that excellent performance can be obtained across a wide variety of devices. Rather than locking an application into one specific accelerator, OpenCL allows applications to run on a number of different architectures, each showing excellent speedups over a native (host CPU) implementation.”