Today General Atomics announced the next generation of Nirvana – a premier metadata, data placement and data management software solution for the most demanding workflows in Life Sciences, Scientific Research, Media & Entertainment and Energy Exploration. “Nirvana 5.0 reduces storage costs up to 75% by turning geographically dispersed, multiple vendor storage silos into a single global namespace that automatically moves infrequently-accessed data to lower-cost storage or to the cloud.”
“Science problems are becoming increasingly complex in all areas from physics and bioinformatics to engineering,” said Siegfried Hoefinger, High Performance Computing Specialist at VSC. “Bigger is better, but inefficiency will always limit what you can achieve. The Allinea tools will enable us to quickly establish the root cause of bottlenecks and understand the markers for inefficient code. By doing so we’re helping to prove the case for modernization, can start to eliminate inefficiencies and exploit latent capacity to its full effect.”
Today ACM and IEEE Computer Society named Bill Gropp from NCSA as the recipient of the 2016 ACM/IEEE Computer Society Ken Kennedy Award for highly influential contributions to the programmability of high performance parallel and distributed computers. The award will be presented at SC16 in Salt Lake City.
Researchers at the Future Technologies Group at Oak Ridge National Laboratory (ORNL) have developed a novel programming system that extends C with intuitive, language-level support for programming NVM as persistent, high-performance main memory; the prototype system is named NVL-C.
Two University of Wyoming graduate students earned a trip to the SC16 conference in November by winning the poster contest at the recent Rocky Mountain Advanced Computing Consortium (RMACC) High Performance Computing Symposium. “I hope to receive good exposure to the most recent advancements in the field of high-performance computing,” said Kommera, one of the winners.
In this video from the 2016 Argonne Training Program on Extreme-Scale Computing, Mark Miller from LLNL leads a panel discussion on Experiences in eXtreme Scale in HPC with FASTMATH team members. “The FASTMath SciDAC Institute is developing and deploying scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborating with U.S. Department of Energy (DOE) domain scientists to ensure the usefulness and applicability of our work. The focus of our work is strongly driven by the requirements of DOE application scientists who work extensively with mesh-based, continuum-level models or particle-based techniques.”
Today Nvidia announced the general availability of CUDA 8 toolkit for GPU developers. “A crucial goal for CUDA 8 is to provide support for the powerful new Pascal architecture, the first incarnation of which was launched at GTC 2016: Tesla P100,” said Nvidia’s Mark Harris in a blog post. “One of NVIDIA’s goals is to support CUDA across the entire NVIDIA platform, so CUDA 8 supports all new Pascal GPUs, including Tesla P100, P40, and P4, as well as NVIDIA Titan X, and Pascal-based GeForce, Quadro, and DrivePX GPUs.”
Today Allinea Software announced availability of its new software release, version 6.1, which offers full support for developing parallel code with CUDA 8 on Nvidia’s Pascal GPU architecture. “The addition of Allinea tools into the mix is an exciting one, enabling teams to accurately measure GPU utilization, employ smart optimization techniques and quickly develop new CUDA 8 code that is bug and bottleneck free,” said Mark O’Connor, VP of Product Management at Allinea.
Today Rogue Wave Software announced it is working with IBM to help make open source software (OSS) support more available. This will help provide comprehensive, enterprise-grade technical support for OSS packages. “With our ten-year history in open source, organizations can feel confident in our ability to resolve issues,” said Richard Sherrard, director of product management at Rogue Wave Software. “We have tier-3 and 4 enterprise architects that offer round-the-clock support for entire ecosystems. We are long-standing experts when it comes to OSS and proud to be working with IBM.”
“Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance,” said Dr. Diamos. “The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications.”
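The measure-first principle Diamos describes can be illustrated with a minimal timing sketch in Python. This is not DeepBench itself (which benchmarks low-level kernels such as GEMM and convolutions across hardware platforms); the `benchmark` helper and the toy pure-Python matrix multiply below are illustrative assumptions showing the common pattern of warming up, repeating runs, and reporting throughput:

```python
import time

def benchmark(fn, *args, warmup=2, repeats=5):
    """Time a callable: warm up first, then report the best of N runs."""
    for _ in range(warmup):
        fn(*args)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def matmul(a, b):
    """Toy GEMM-like workload: multiply two square matrices (plain lists)."""
    bt = list(zip(*b))  # transpose b for sequential column access
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

n = 64
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]
seconds = benchmark(matmul, a, b)
flops = 2 * n**3  # multiply-add count for an n x n x n GEMM
print(f"{n}x{n} GEMM: {seconds * 1e3:.2f} ms, {flops / seconds / 1e9:.3f} GFLOP/s")
```

Reporting the best of several timed runs, as above, reduces noise from caches and background load; real benchmark suites like DeepBench apply the same idea to vendor-optimized kernels so results are comparable across processors.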