Manuel Arenaz from Appentra presented this talk at the OpenMP booth at SC16. “Parallware is a new technology for static analysis of programs based on the production-grade LLVM compiler infrastructure. Using a fast, extensible hierarchical classification scheme to address dependence analysis, it discovers parallelism and annotates the source code with the most appropriate OpenMP & OpenACC directives.”
In this video, Stefanos Kaxiras from Uppsala University presents: Thread and Memory Scaling Beyond Multicores. “This talk will present the ArgoDSM, a modern, highly-scalable, user-level, distributed shared memory system for clusters. While DSMs have their roots in high-performance computing where the scaling of threads (computation) is the goal, we are witnessing a tremendous interest in Big Data workloads where memory scaling is the target. I will describe how ArgoDSM bridges these two worlds and forms a vehicle for research in Big Data and HPC alike.”
In this video, CSCS celebrates 25 years of high-performance computing in Switzerland.
“Over the past two years, InfiniCortex has clearly demonstrated that InfiniBand (IB) can perform over trans-continental distances, exploiting this technology to create a ‘Galaxy of Supercomputers,’ a worldwide IB network spanning sites across Asia, Europe and North America. Initiated and led by A*STAR CRC in Singapore, the project hit its first major breakthrough at SC14, showcasing a first-time-ever 100G IB transcontinental connection from Singapore to the SC14 venue in New Orleans.”
Sven Oehme, Chief Research Strategist at IBM, presented this talk at the DDN User Group. “Since 2007, DDN has sustained a highly strategic partnership with IBM to drive our mutual HPC technology vision to the next level. By leveraging a close working relationship with IBM, DDN provides the performance and capacity systems that help deliver IBM’s Spectrum Scale (formerly known as GPFS) into the most demanding environments.”
In this video, Bill Mannel (VP & GM, High-Performance Computing and Big Data, HPE) and Dr. Eng Lim Goh (SVP & CTO, SGI) join Dave Vellante and Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”
“New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training. Along with the new hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and new, optimized deep learning frameworks on AMD’s ROCm software to build the foundation of the next evolution of machine intelligence workloads.”
In this video from SC16, Dan Dowling from Penguin Computing describes the company’s momentum with nine CTS-1 supercomputers on the TOP500. The systems were procured under NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories. The resulting deployment of these supercomputing clusters is among the world’s largest Open Compute-based installations, a major validation of Penguin Computing’s leadership in Open Compute high-performance computing architecture.
“The pharmaceutical industry trend toward joint ventures and collaborations has created a need for new platforms in which to work together. We’ll dive into architectural decisions for building collaborative systems. Examples include how such a platform allowed Human Longevity, Inc. to accelerate software deployment to production in a fast-paced research environment, and how Celgene uses AWS for research collaboration with outside universities and foundations.”
Pamela Hill from NCAR/UCAR presented this talk at the DDN User Group at SC16. “With the game-changing SFA14K, NCAR now has the storage capacity and sustained compute performance to perform sophisticated modeling while substantially reducing workflow bottlenecks. As a result, the organization will be able to quickly process mixed I/O workloads while sharing up to 40 PB of vital research data with a growing scientific community around the world.”