Agenda Posted for ExaComm 2018 Workshop in Frankfurt

The ExaComm 2018 workshop has posted its Speaker Agenda. Held in conjunction with ISC 2018, the Fourth International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale takes place June 28 in Frankfurt. “The goal of this workshop is to bring together researchers and software/hardware designers from academia, industry and national laboratories who are involved in creating network-based computing solutions for extreme scale architectures. The objectives of this workshop will be to share the experiences of the members of this community and to learn the opportunities and challenges in the design trends for exascale communication architectures.”

Brain Research: A Pathfinder for Future HPC

Dirk Pleiter from the Jülich Supercomputing Centre gave this talk at the NVIDIA GPU Technology Conference. “One of the biggest and most exciting scientific challenges requiring HPC is to decode the human brain. Many of the research topics in this field require scalable compute resources or the use of advanced data analytics methods (including deep learning) for processing extreme scale data volumes. GPUs are a key enabling technology, and we will thus focus on the opportunities for using them for computing, data analytics and visualization. GPU-accelerated servers based on POWER processors are of particular interest here due to the tight integration of CPU and GPU using NVLink and the enhanced data transport capabilities.”

Video: Deep Learning for the Enterprise with POWER9

Sumit Gupta from IBM gave this talk at H2O World. “From chat bots, to recommendation engines, to Google Voice and Apple Siri, AI has begun to permeate our lives. We will demystify what AI is, present the difference between machine learning and deep learning, explain why there is such huge interest now, show some fun use cases and demos, and then discuss how deep learning-based AI methods can be used to garner insights from data for enterprises. We will also talk about what IBM is doing to make deep learning and machine learning more accessible and useful to a broader set of data scientists, and how to build out the right hardware infrastructure.”

Video: IBM Launches POWER9 Nodes for the World’s Fastest Supercomputers

In this video from SC17, Adel El Hallak from IBM unveils the POWER9 servers that will form the basis of the world’s fastest “CORAL” supercomputers coming to ORNL and LLNL. “In addition to arming the world’s most powerful supercomputers, IBM POWER9 systems are designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery and enabling transformational business outcomes across every industry.”

Video: Introducing the 125 Petaflop Sierra Supercomputer

In this video, researchers from Lawrence Livermore National Laboratory describe Sierra, LLNL’s next-generation supercomputer. “The IBM-built advanced technology high-performance system is projected to provide four to six times the sustained performance of LLNL’s current most advanced system, Sequoia, and to be at least seven times more powerful, with a 125 petaFLOP/s peak. At approximately 11 megawatts, Sierra will also be about five times more power efficient than Sequoia.”

For HPC and Deep Learning, GPUs are here to stay

In this special guest feature from Scientific Computing World, David Yip, HPC and Storage Business Development at OCF, provides his take on the place of GPU technology in HPC. “Using GPUs in the HPC datacenter in place of CPUs can dramatically increase the power requirements needed, but if your computational performance goes through the roof, then I’d argue it’s a trade-off worth making.”

No speed limit on NVIDIA Volta with rise of AI

In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. “We’re excited about the launch of NVIDIA’s Volta GPU accelerators. Together with the NVIDIA NVLink “information superhighway” at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets.”

Benefits of Multi-rail Cluster Architectures for GPU-based Nodes

Craig Tierney from NVIDIA gave this talk at the MVAPICH User Group meeting. “As high performance computing moves toward GPU-accelerated architectures, single node application performance can be between 3x and 75x faster than the CPUs alone. Performance increases of this size will require increases in network bandwidth and message rate to prevent the network from becoming the bottleneck in scalability. In this talk, we will present results from NVLink-enabled systems connected via quad-rail EDR InfiniBand.”
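The network pressure described in the talk is straightforward to observe with a simple point-to-point test. The sketch below is a minimal MPI ping-pong bandwidth microbenchmark in C; it is not from the talk, and the rail configuration itself is handled by the MPI runtime rather than by application code (MVAPICH2, for example, exposes runtime parameters for multi-rail runs, which is an assumption about the reader's setup).

```c
/* Minimal MPI point-to-point bandwidth sketch (illustrative only).
 * Run with two ranks, e.g.: mpirun -np 2 ./bw
 * Multi-rail behavior is configured by the MPI runtime, not by this code. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (8 * 1024 * 1024)  /* 8 MiB message */
#define ITERS     100

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {
        if (rank == 0) fprintf(stderr, "Run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(MSG_BYTES);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0) {
        /* Two transfers per ping-pong iteration; report GB/s */
        double gbps = (2.0 * ITERS * (double)MSG_BYTES) / (t1 - t0) / 1e9;
        printf("Approx. bandwidth: %.2f GB/s\n", gbps);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```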

IBM’s New PowerAI Software Speeds Deep Learning

“IBM PowerAI on Power servers with GPU accelerators provides at least twice the performance of our x86 platform; everything is faster and easier: adding memory, setting up new servers and so on,” said current PowerAI customer Ari Juntunen, CTO at Elinar Oy Ltd. “As a result, we can get new solutions to market very quickly, protecting our edge over the competition. We think that the combination of IBM Power and PowerAI is the best platform for AI developers in the market today. For AI, speed is everything; nothing else comes close in our opinion.”

Anaconda Open Data Science Platform comes to IBM Cognitive Systems

Today IBM announced that it will offer the Anaconda Open Data Science Platform on IBM Cognitive Systems. Anaconda will also integrate with the PowerAI software distribution for machine learning and deep learning, which makes it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads. “Anaconda is an important capability for developers building cognitive solutions, and now it’s available on IBM’s high performance deep learning platform,” said Bob Picciano, senior vice president of Cognitive Systems. “Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale.”