Today Nvidia updated its GPU-accelerated deep learning software to speed up training. With new releases of DIGITS and cuDNN, the software delivers significant performance gains that help data scientists build more accurate neural networks through faster model training and more sophisticated model design.
In this video, Kyle Lamb from the Infrastructure Team at Los Alamos National Lab describes the unique challenges he faces at a facility known for being at the forefront of technology. Kyle addresses the future of storage for High Performance Computing and the ways LANL is partnering with Seagate to tackle the changes on the horizon.
Over at Science Advances, a newly published paper describes a high-efficiency computing paradigm called memcomputing. Modeled after the human brain, a memprocessor processes and stores information within the same units by means of their mutual interactions. Now, researchers have built a working prototype.
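The defining idea, that the same physical unit both computes and remembers, can be loosely illustrated in software. The sketch below is a toy cartoon under our own assumptions, not the authors' hardware design: each hypothetical memcell holds a state that is updated by every signal routed through it, so the result of a computation can be read back from the cells themselves rather than fetched from a separate memory.

#include <stdio.h>

/* Toy illustration of the memcomputing idea: a cell that both
 * stores state and transforms signals passing through it.
 * This is a software cartoon, not the memprocessor hardware
 * described in the paper; all names here are hypothetical. */
typedef struct {
    double state;   /* stored value, persists between operations */
} memcell;

/* "Processing" a signal also updates the cell's stored state,
 * so computation and memory live in the same unit. */
double memcell_process(memcell *c, double input)
{
    c->state += input;   /* the act of computing writes memory */
    return c->state;     /* the output depends on the cell's history */
}

int main(void)
{
    memcell cells[4] = {{0}, {0}, {0}, {0}};
    double signal = 1.0;

    /* Route a signal through the cells; each interaction both
     * computes an output and leaves a trace behind. */
    for (int i = 0; i < 4; i++)
        signal = memcell_process(&cells[i], signal);

    /* The result is read back from the cells' states, with no
     * separate transfer from an external memory. */
    for (int i = 0; i < 4; i++)
        printf("cell %d state: %g\n", i, cells[i].state);
    return 0;
}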
Bill Gropp from the University of Illinois at Urbana-Champaign presented this talk at the Blue Waters Symposium. “The large number of nodes and cores in extreme scale systems requires rethinking all aspects of algorithms, especially for load balancing and for latency hiding. In this project, I am looking at the use of nonblocking collective routines in Krylov methods, the use of speculation and large memory in graph algorithms, the use of locality-sensitive thread scheduling for better load balancing, and model-guided communication aggregation to reduce overall communication costs. This talk will discuss some current results and future plans, and possibilities for collaboration in evaluating some of these approaches.”
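To make the latency-hiding idea concrete: in a Krylov solver such as conjugate gradients, every iteration requires global dot products, and a nonblocking collective lets that communication overlap with local computation. The sketch below shows the bare pattern using MPI_Iallreduce; it is a minimal illustration under our own assumptions, not code from Gropp's project.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch of latency hiding with a nonblocking collective,
 * as used in pipelined Krylov methods: start the global reduction
 * for a dot product, perform independent local work, then complete
 * the reduction. Illustrative only. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank's local contribution to a dot product. */
    double local_dot = (double)(rank + 1);
    double global_dot = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking. */
    MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    /* Overlap: do local work that does not depend on global_dot,
     * e.g., the local part of a sparse matrix-vector product in
     * a real solver. Here it is just a stand-in loop. */
    double local_work = 0.0;
    for (int i = 0; i < 1000000; i++)
        local_work += 1e-6;

    /* Complete the reduction before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("global dot = %g (local work = %g)\n",
               global_dot, local_work);

    MPI_Finalize();
    return 0;
}

In a pipelined conjugate gradient, the work overlapped with the reduction would be the next matrix-vector product, which is what hides the collective's latency behind useful computation at extreme scale.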