In this video from LUG 2016 in Australia, Chakravarthy Nagarajan from Intel presents: An Optimized Entry Level Lustre Solution in a Small Form Factor. “Our goal was to provide an entry-level Lustre storage solution in a high-density form factor, at low cost and with a small footprint, all integrated with Intel Enterprise Edition for Lustre* software.”
“Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance,” said Dr. Diamos. “The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications.”
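DeepBench measures the performance of core deep-learning primitives such as dense matrix multiply (GEMM). As a rough illustration of the idea, not DeepBench's actual harness or problem sizes, a GEMM microbenchmark can be sketched in Python with NumPy:

```python
import time
import numpy as np

def time_gemm(m, n, k, repeats=5):
    """Time a single-precision dense matrix multiply of shape (m,k) x (k,n).

    Returns achieved GFLOP/s; a GEMM performs roughly 2*m*n*k floating-point
    operations. Sizes and repeat count here are illustrative placeholders.
    """
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup costs are excluded
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    return 2.0 * m * n * k / elapsed / 1e9

# Real benchmark suites draw shapes from actual networks; this size is arbitrary.
gflops = time_gemm(256, 256, 256)
print(f"GEMM 256x256x256: {gflops:.1f} GFLOP/s")
```

Running the same kernel at the same sizes on different hardware is what makes cross-platform comparisons of the kind DeepBench publishes possible.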
The National Computational Infrastructure in Canberra, Australia’s national advanced computing facility, is the first Australian institution to deploy the latest generation of Intel Xeon Phi processors, formerly code named Knights Landing. “NCI is leading efforts in the scientific community to tune applications for Intel Xeon Phi processors,” explains Dr Muhammad Atif, NCI’s HPC Systems and Cloud Services Manager. “We have identified a large number of applications that will benefit from this hardware and software paradigm, including those applications in the domains of computational physics, computational chemistry and climate research.”
“Fortran has been proven to be extremely resilient to new developments that have appeared in other programming languages over the years. New versions continue to be available and associated with ANSI standards, so that an application written for one operating system should be able to be compiled and run with different compilers on different operating systems. The latest version is Fortran 2008, with the next version reportedly to be available as Fortran 2015, in 2018.”
Today TYAN announced support and availability of the NVIDIA Tesla P100, P40 and P4 GPU accelerators with the new NVIDIA Pascal architecture. Incorporating NVIDIA’s state-of-the-art technologies allows TYAN to offer HPC users exceptional performance and features for data-intensive applications.
“EXAScaler 3.0 raises the bar for Lustre performance and management,” said Laura Shepard, senior director of products and vertical markets, DDN. “As the world’s most experienced Lustre provider, DDN leverages input from a broad installed base and the Lustre community to deliver the most advanced Lustre solutions to our customers around the globe.”
Vectorization and threading are critical to exploiting innovative hardware products such as the Intel Xeon Phi processor. Using tools early in the design and development process to identify where vectorization can be applied or improved will increase the performance of the overall application. Modern tools can determine what might be blocking compiler vectorization and estimate the potential gain from the work involved.
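The payoff that compiler vectorization delivers can be illustrated, by analogy, outside of C or Fortran: an element-by-element loop does one operation per iteration, while a whole-array expression dispatches to SIMD-capable kernels. A minimal sketch (this is an analogy for the concept, not an Intel tool workflow):

```python
import numpy as np

def saxpy_scalar(a, x, y):
    """Element-by-element loop: one multiply-add per iteration,
    analogous to unvectorized scalar code."""
    out = np.empty_like(y)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    """Whole-array form: NumPy hands the work to compiled, vectorized
    kernels, the same win a compiler seeks when it vectorizes a loop."""
    return a * x + y

x = np.arange(1024, dtype=np.float32)
y = np.ones(1024, dtype=np.float32)
# Both forms compute the same result; only the execution strategy differs.
assert np.allclose(saxpy_scalar(2.0, x, y), saxpy_vectorized(2.0, x, y))
```

In compiled languages the analogous step is checking the compiler's vectorization report for loops it could not vectorize and why.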
Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work for everything from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency and allow you to spend more time processing and less time waiting.”
In this Intel Chip Chat podcast, Dr. Julie Krugler Hollek, co-organizer of PyLadies San Francisco and Data Scientist at Twitter, joins Allyson Klein to discuss efforts to democratize participation in open source communities and the future of data science. “PyLadies helps people who identify as women become participants in open source Python projects like The SciPy Stack, a specification that provides access to machine learning and data visualization tools.”
Humans are very good at visual pattern recognition, especially with facial features and graphic symbols: identifying a specific person, or associating a symbol with its meaning. These are the kinds of scenarios in which deep learning systems excel. Recognizing each new person or symbol is achieved more efficiently by training than by reprogramming a conventional computer or explicitly updating database entries.