The Penn State Institute for CyberScience (ICS) is hosting a series of free training workshops on high-performance computing techniques. These workshops are sponsored by the Extreme Science and Engineering Discovery Environment (XSEDE). The first workshop will be held from 11 a.m. to 5 p.m. on Jan. 17 in 118 Wagner Building, University Park.
Today SimScale launched the SimScale Academic Program, which brings cloud-based CAE software into universities, schools, and classrooms around the world. “SimScale is a new-generation CAE platform that supports Structural Mechanics, Fluid Dynamics, and Thermal Analysis. Students, Researchers, and Educators can now harness the power of the cloud to run engineering simulations on any laptop, anywhere.”
Today DDN announced that it has partnered with Synergy Solutions Management to offer organizations access to a first-of-its-kind facility in North America where users can plan, design and test video surveillance and high performance computing solutions and conduct training. The Synergy Innovations Lab, located near Vancouver, Canada, provides a fully-equipped testing lab that allows users to evaluate solutions within a mixed workload environment.
“The competition is an opportunity to showcase the world’s brightest computer science students’ expertise in a friendly, yet spirited competition,” said Martin Meuer, managing director of the ISC Group. “We are very pleased to host these 12 compelling university teams from around the world. We look forward to this very engaging competition and wish the teams good luck.”
“With demand for graduates with AI skills booming, we’ve released the NVIDIA Deep Learning Teaching Kit to help educators give their students hands-on experience with GPU-accelerated computing. The kit — co-developed with deep-learning pioneer Yann LeCun, and largely based on his deep learning course at New York University — was announced Monday at the NIPS machine learning conference in Barcelona. Thanks to the rapid development of NVIDIA GPUs, training deep neural networks is more efficient than ever in terms of both time and resource cost. The result is an AI boom that has given machines the ability to perceive — and understand — the world around us in ways that mimic, and even surpass, our own.”
The DOD High Performance Computing Program and the U.S. Army Research Laboratory (ARL) hosted the Supercomputing Summer Institute July 18-29, 2016, at Aberdeen Proving Ground, Maryland. The program introduced students to high-tech learning opportunities.
Today Allinea Software announced that the company will hold a series of New Software Performance Briefings at SC16. The briefings will be held at the Allinea booth #1508 in Salt Lake City. “This year at SC, we’re giving booth visitors the opportunity to find out more about what they don’t know about their software performance,” said […]
“This talk will provide empirical evidence from our Deep Speech work that application level performance (e.g. recognition accuracy) scales with data and compute, transforming some hard AI problems into problems of computational scale. It will describe the performance characteristics of Baidu’s deep learning workloads in detail, focusing on the recurrent neural networks used in Deep Speech as a case study. It will cover challenges to further improving performance, describe techniques that have allowed us to sustain 250 TFLOP/s when training a single model on a cluster of 128 GPUs, and discuss straightforward improvements that are likely to deliver even better performance.”
The third Workshop on Accelerator Programming Using Directives (WACCPD) has posted its meeting agenda. Held in conjunction with SC16, the WACCPD workshop takes place Nov. 14 in Salt Lake City. “To address the rapid pace of hardware evolution, developers continue to explore and add richer features to the various (parallel) programming standards. Domain scientists continue to explore the programming and tools space while preparing themselves for future Exascale systems. This workshop explores innovative language features – their implementations, compilation & runtime scheduling techniques, performance optimization strategies, autotuning tools exploring the optimization space, and so on. WACCPD has been one of the major forums for bringing together the users, developers, and tools community to share their knowledge and experiences of using directives and similar approaches to program emerging complex systems.”
Jack Dongarra presented this talk at the Argonne Training Program on Extreme-Scale Computing. “ATPESC provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”