In this video from KAUST, Steve Scott of Cray explains where supercomputing is going and why there is a never-ending demand for faster and faster computers. Responsible for guiding Cray’s long-term product roadmap in high-performance computing, storage, and data analytics, Mr. Scott has served as chief architect of several generations of Cray systems and interconnects.
In this fascinating talk, Cockcroft describes how advances in hardware and networking have reshaped how services like machine learning are being developed rapidly in the cloud with AWS Lambda. “We’ve seen the same service-oriented architecture principles track advancements in technology: from the coarse-grained services of SOA a decade ago, through microservices that are usually scoped to a more fine-grained single area of responsibility, and now functions as a service, serverless architectures where each function is a separately deployed and invoked unit.”
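The “separately deployed and invoked unit” Cockcroft describes can be illustrated with a minimal Lambda-style handler. This is a sketch, not code from the talk; the function and field names are illustrative, though the `handler(event, context)` signature matches the convention AWS Lambda uses for Python functions:

```python
import json

def handler(event, context):
    """A single function as the unit of deployment: it receives an event,
    does one narrowly scoped job, and returns a response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }

# Invoked locally here for illustration; in a real serverless platform,
# the runtime supplies the event and context and calls the handler for you.
if __name__ == "__main__":
    print(handler({"name": "HPC"}, None))
```

The design point is that scaling, routing, and lifecycle concerns move into the platform, leaving each function with a single area of responsibility.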
“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double-precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will also excel in AI computation, and is expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer.”
“As with all new technology, developers will have to create processes in order to modernize applications to take advantage of any new feature. Rather than randomly trying to improve the performance of an application, it is wise to be very familiar with the application and use available tools to understand bottlenecks and look for areas of improvement.”
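The advice above, measure before you tune, can be sketched with Python’s standard-library profiler. This is a minimal illustration of using available tools to find bottlenecks; the function names are hypothetical, not from the article:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately naive hot spot for the profiler to surface.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_app():
    # Stand-in for the application whose bottlenecks we want to find.
    return slow_sum(200_000)

# Profile a representative run, then rank functions by cumulative time,
# so tuning effort targets measured bottlenecks rather than guesses.
profiler = cProfile.Profile()
profiler.enable()
result = run_app()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The same workflow applies with vendor profilers on new processor architectures: establish a baseline, locate the hot spots, and only then modernize the code paths that matter.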
Jeffrey Welser from IBM Research Almaden presented this talk at the Stanford HPC Conference. “Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”
High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and greater amounts of data must be processed and analyzed daily, firms are increasingly turning to HPC solutions to exploit the latest technology performance improvements. Suresh Aswani, Senior Manager of Solutions Marketing at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.
DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
“Linux containers are gaining more and more momentum in all IT ecosystems. This talk provides an overview of what happened in the container landscape (in particular, Docker) during the course of the last year and how it impacts datacenter operations, HPC, and high-performance Big Data. Furthermore, Christian will give an update on, and extend, the ‘things to explore’ list he presented at the last Lugano workshop, applying what he learned and came across during 2016.”
Today Intel announced BigDL, an open-source distributed deep learning library for the Apache Spark* open-source cluster-computing framework. “BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project,” said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel.
Frank Ham from Cascade Technologies presented this talk at the Stanford HPC Conference. “A spin-off of the Center for Turbulence Research at Stanford University, Cascade Technologies grew out of a need to bridge the gap between fundamental research at institutions like Stanford University and its application in industry. In a continual push to improve the operability and performance of combustion devices, high-fidelity simulation methods for turbulent combustion are emerging as critical elements in the design process. Multiphysics-based methodologies can accurately predict mixing, study flame structure and stability, and even predict product and pollutant concentrations at design and off-design conditions.”