Today Allinea announced plans to showcase its software tools for developing and optimizing high performance code at the GPU Technology Conference April 4-7 in San Jose. The company will highlight the best practices required to unleash the potential performance within the latest generation of NVIDIA GPUs for a wide range of software applications.
In this podcast, the Radio Free HPC team previews the GPU Technology Conference coming up April 4-7 in Silicon Valley. “GTC is the largest and most important event of the year for GPU developers. Join us this year as we showcase the most vital work in the computing industry today – Artificial Intelligence and Deep Learning, Virtual Reality and Self Driving Cars. GTC attracts developers, researchers, and technologists from some of the top companies, universities, research firms and government agencies from around the world.”
Today the OpenACC standards group announced a set of additional hackathons and a broad range of learning opportunities taking place during the upcoming GPU Technology Conference being held in San Jose, CA April 4-7, 2016. OpenACC is a mature, performance-portable path for developing scalable parallel programs across multi-core CPUs, GPU accelerators, and many-core processors.
“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”
Intel is expected to release production versions of its 72-core Knights Landing (KNL) coprocessor later in 2016. These next-generation coprocessors are impacting the physical design of the supercomputers now coming down the pike in a number of ways. One of the most dramatic changes is the significant increase in cooling requirements: these are high-wattage chips that run very hot and present some interesting engineering challenges for systems designers.
Zaikun Xu from the Università della Svizzera Italiana presented this talk at the Switzerland HPC Conference. “In the past decade, deep learning, as a life-changing technology, has achieved huge success on various tasks, including image recognition, speech recognition, machine translation, etc. Pioneered by several research groups, deep learning is a renaissance of neural networks in the Big Data era.”
Today Nimbix and Bitfusion rolled out a new combined solution to offer more choices to application developers looking for high performance GPU accelerators on an on-demand basis. The Nimbix Cloud, powered by JARVICE, now integrates Bitfusion Boost to offer lower-cost accelerator resources for developing compute-hungry machine learning, analytics, and photorealistic rendering algorithms. “Nimbix has been about empowering developers to create accelerated applications in the cloud since day 1,” said Nimbix CTO Leo Reiter. “With this new combined solution, developers have more choices than ever before when it comes to performance and economics for the next generation of cloud computing workflows.”
Today Nvidia announced that Brookhaven National Laboratory has been named a 2016 GPU Research Center. “The center will enable Brookhaven Lab to collaborate with Nvidia on the development of widely deployed codes that will benefit from more effective GPU use, and in the delivery of on-site GPU training to increase staff and guest researchers’ proficiency,” said Kerstin Kleese van Dam, director of CSI and chair of the Lab’s Center for Data-Driven Discovery.
Today Nvidia announced that Rob High, IBM Fellow, VP and chief technology officer for Watson, will deliver a keynote at the GPU Technology Conference on April 6. High will describe the key role GPUs will play in creating systems that understand data in human-like ways. “Late last year, IBM announced that its Watson cognitive computing platform has added NVIDIA Tesla K80 GPU accelerators. As part of the platform, GPUs enhance Watson’s natural language processing capabilities and other key applications.”