In this podcast, the Radio Free HPC team reviews the recent 2016 Intel Developer Forum. “How will Intel return to growth in the face of a declining PC market? At IDF, they put the spotlight on IoT and Machine Learning. With new threats rising from the likes of AMD and Nvidia, will Chipzilla make the right moves? Tune in to find out.”
Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on the application developers. To this end DEEP/-ER provides a well-accustomed programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code-annotation as proposed by DEEP/-ER is a major advancement.”
There is still time to register for the 2016 Hot Interconnects Conference, which takes place August 24-26 at Huawei in Santa Clara, California. The keynote speaker this year is Kiran Makhijani, Senior Research Scientist, Network Technology Labs at the Huawei America Research Center. Her talk is entitled "Cloudcasting – Perspectives on Virtual Routing for Cloud Centric Network Architectures."
In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.
LANL reports that a moment of inspiration during a wiring diagram review has saved more than $2 million in material and labor costs for the Trinity supercomputer at Los Alamos National Laboratory.
Today One Stop Systems (OSS) introduced a pair of high-speed networked storage appliances that support high-performance, shared storage services. “The OSS approach optimizes the hardware for the environment and optimizes the software for the application in the Flash Storage Array for Networks product line (FSAn). This hardware and software optimization in the FSAn product line provides the best ROI in any environment by minimizing hardware and license costs through advanced array-level optimizations while maximizing the utilization of the flash array through VSI and VDI application support.”
“We’ve seen the rapid evolution of SSDs and have been contributing to the NVMe over Fabrics standard and community drivers,” said Michael Kagan, CTO at Mellanox Technologies. “Because faster storage requires faster networks, we designed the highest-speeds and most intelligent offloads into both our ConnectX-5 and BlueField families. This lets us connect many SSDs directly to the network at full speed, without the need to dedicate many CPU cores to managing data movement, and we provide a complete end-to-end networking solution with the highest-performing 25, 50, and 100GbE switches and cables as well.”
Today the Ethernet Alliance unveiled the agenda for its 2016 Technology Exploration Forum (TEF 2016). At the center of the day’s agenda is Ethernet’s quickening journey through its next decade of continuous technology evolution and growth as the marketplace continues to change. TEF 2016: The Road to Ethernet 2026 is scheduled for September 29, 2016, at the Santa Clara County Convention Center, Santa Clara, Calif.
“Fujitsu Laboratories has newly developed parallelization technology to efficiently share data between machines, and applied it to Caffe, an open source deep learning framework widely used around the world. Fujitsu Laboratories evaluated the technology on AlexNet, where it was confirmed to have achieved learning speeds with 16 and 64 GPUs that are 14.7 and 27 times faster, respectively, than a single GPU. These are the world’s fastest processing speeds, representing an improvement in learning speeds of 46% for 16 GPUs and 71% for 64 GPUs.”
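To put the reported speedups in perspective, one can work out the parallel scaling efficiency they imply, i.e. the fraction of ideal linear speedup achieved. This is a minimal sketch using only the figures quoted above; the `scaling_efficiency` helper is hypothetical, not part of Fujitsu's framework.

```python
# Scaling efficiency implied by the reported Caffe/AlexNet speedups.
# Speedup numbers (14.7x on 16 GPUs, 27x on 64 GPUs) are from the
# press release; ideal linear scaling would equal the GPU count.

def scaling_efficiency(speedup: float, n_gpus: int) -> float:
    """Fraction of ideal linear speedup achieved on n_gpus."""
    return speedup / n_gpus

print(f"16 GPUs: {scaling_efficiency(14.7, 16):.0%} of linear")  # ~92%
print(f"64 GPUs: {scaling_efficiency(27.0, 64):.0%} of linear")  # ~42%
```

The drop from roughly 92% to 42% efficiency as the GPU count quadruples illustrates why inter-machine data sharing, the focus of Fujitsu's parallelization work, becomes the bottleneck at larger scales.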
Is Machine Learning more of a Data Movement problem than a Processing problem? In this podcast, the Radio Free HPC team looks at use cases for Machine Learning where data locality is critical for performance. “Most of the Machine Learning stories we hear involve a central data repository. Henry says he is not hearing enough about how Machine Learning is going to deal with the problem of massive data streams from things like sensors. Such data, he contends, will have to be processed at the source.”