“When it comes to commercialization of promising IP, HPC punches below its weight. That, we can and should change. Where does the HPC community get training on entrepreneurship? How do you become an entrepreneur? Does it have to be in your blood, or can you actually learn how to do it? It turns out you can learn most of it, and in the process (since nobody is excellent at everything), you also learn how to surround yourself with others who are good at other necessary things.”
“From image recognition in social media to self-driving cars and medical image processing, deep learning is everywhere in our daily lives. Learn about recent advancements in deep learning that have been made possible by improvements in algorithms, numerical methods, and the availability of large amounts of data for training, as well as accelerated computing solutions based on GPUs. With GPUs, great performance can be reached across a wide range of platforms, from model development on a workstation to training on HPC and data-center systems to embedded platforms, enabling new horizons for computing and AI applications.”
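The workflow the session description centers on — fitting a model to training data by iteratively reducing a loss — can be sketched in miniature. This is a pure-Python toy (a one-variable linear model trained by gradient descent), not material from the session, and a stand-in for what GPU-accelerated frameworks do at vastly larger scale:

```python
# Toy gradient-descent training loop: the core computation that
# GPU-accelerated deep learning frameworks parallelize at scale.
# (Illustrative sketch only.)

def train_linear(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the MSE loss with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 3x + 1; training recovers slope and intercept.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]
w, b = train_linear(xs, ys)
```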
“Unified Communication X (UCX) is a set of network APIs and their implementations for high performance computing. UCX comes from the combined efforts of national laboratories, industry, and academia to co-design and implement high-performing and highly scalable communication APIs for next-generation applications and systems. UCX solves the problem of moving data from memory location “A” to memory location “B” across multiple types of memory (DRAM, accelerator memories, etc.) and multiple transports (e.g., InfiniBand, uGNI, shared memory, CUDA), while minimizing latency and maximizing bandwidth and message rate.”
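The selection idea described above — pick the data path that minimizes transfer time for a given message — can be illustrated with a toy cost model. This is a hypothetical sketch with made-up latency/bandwidth numbers, not the UCX implementation or its C API:

```python
# Toy model of the transport-selection idea behind UCX-style libraries:
# given several transports with different latency/bandwidth profiles,
# pick the one with the lowest estimated transfer time for a message.
# (Hypothetical numbers and names; not actual UCX code.)

TRANSPORTS = {
    # name: (startup latency in microseconds, bandwidth in bytes/us)
    "shared_memory": (0.3, 10000.0),
    "infiniband":    (1.5, 12000.0),
    "tcp":           (20.0, 1200.0),
}

def estimate_time_us(size_bytes, latency_us, bw_bytes_per_us):
    """Simple linear cost model: startup latency plus size/bandwidth."""
    return latency_us + size_bytes / bw_bytes_per_us

def pick_transport(size_bytes):
    """Choose the transport minimizing estimated transfer time."""
    return min(
        TRANSPORTS,
        key=lambda name: estimate_time_us(size_bytes, *TRANSPORTS[name]),
    )
```

With these illustrative numbers, small messages favor the low-latency shared-memory path, while very large transfers favor the higher-bandwidth InfiniBand path — the same latency-versus-bandwidth trade-off the UCX description emphasizes.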
“Many organizations are gaining a competitive advantage by implementing a Dynamic Data Center strategy. In a dynamic data center, compute resources may be dynamically created and/or provisioned based on workload demand, in accordance with configured policies. Compute resources may be physical, on-premises nodes, virtual nodes in a public or private cloud, or a combination of the two. In all cases, the resources are created and/or powered on and provisioned on the fly for a specific workload. The result is an agile data center that responds quickly and automatically to changes in workload demand while reducing power costs.”
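The policy-driven provisioning decision described above can be sketched as a small function: given current demand and configured bounds, decide how many nodes to have powered on. This is a hypothetical toy policy for illustration, not any vendor's actual scheduler logic:

```python
# Toy sketch of a dynamic-data-center provisioning policy: size the
# node pool to the queued workload, within configured min/max limits.
# (Hypothetical policy and parameter names; illustration only.)

def nodes_needed(queued_jobs, jobs_per_node, min_nodes, max_nodes):
    """Return the node count to provision for the current demand."""
    # Enough nodes to cover the queue, rounded up (ceiling division)...
    demand = -(-queued_jobs // jobs_per_node)
    # ...clamped to the policy's configured floor and ceiling.
    return max(min_nodes, min(demand, max_nodes))
```

For example, with 4 jobs per node, a floor of 2, and a ceiling of 100: an empty queue keeps the 2-node floor powered on, 17 queued jobs provision 5 nodes, and a 1000-job burst is capped at the 100-node ceiling.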
“This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. The current status and future trends of the MPI and PGAS (UPC and OpenSHMEM) programming models will be presented. We will discuss challenges in designing runtime environments for these programming models, taking into account support for multi-core processors, high-performance networks, GPGPUs, Intel MIC, scalable collectives (multi-core-aware, topology-aware, and power-aware), non-blocking collectives using an offload framework, one-sided RMA operations, and schemes and architectures for fault tolerance and fault resilience.”
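One algorithmic idea behind the scalable collectives mentioned above is recursive doubling: an allreduce over P ranks finishes in log2(P) exchange rounds instead of P-1 sequential steps. The sketch below simulates that exchange pattern on a list of per-rank values (a single-process illustration, not MPI code or anything from the talk):

```python
# Simulation of a recursive-doubling allreduce (sum) over P ranks,
# one building block behind scalable MPI collective algorithms.
# In round k, rank r exchanges partial sums with rank r XOR 2**k;
# after log2(P) rounds every rank holds the global sum.
# (Single-process illustration only; not actual MPI runtime code.)

def allreduce_sum(values):
    """Simulate recursive doubling; len(values) must be a power of two."""
    p = len(values)
    assert p > 0 and p & (p - 1) == 0, "rank count must be a power of two"
    vals = list(values)
    k = 1
    while k < p:
        # Every rank adds its partner's current partial sum.
        vals = [vals[r] + vals[r ^ k] for r in range(p)]
        k *= 2
    return vals  # every rank now holds the same global sum
```

With 4 ranks holding [1, 2, 3, 4], two rounds suffice: after round one the pairs hold [3, 3, 7, 7], and after round two every rank holds 10.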
There is still significant influence between HPC and hyperscale, in both directions, most notably in the areas of cognitive computing and artificial intelligence, where research at some of the top hyperscale companies leads the field. Standards like the Open Compute Project, OpenStack, and Beiji/Scorpio can also drive acquisition decisions at traditional HPC-using organizations. Big data and analytics likewise transcend both HPC and hyperscale, driving I/O scalability in both markets. These trends are all covered in the new hyperscale advisory service from Intersect360 Research.
The HPC Advisory Council Stanford Conference 2016 has posted its speaker agenda. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.”
In this video from SC15, Brian Sparks from Mellanox presents an overview of the HPC Advisory Council. “The HPC Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) use and its potential, bring the beneficial capabilities of HPC to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC system products.”
The HPC Advisory Council Stanford Conference 2016 has issued its Call for Participation for the Feb 24-25, 2016 event at Stanford University.
CORAL (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Labs) is a project launched in 2013 to develop the technology needed to meet the Department of Energy’s 2017-2018 leadership computing requirements. The collaboration between Mellanox, IBM and NVIDIA was selected by the CORAL project team after a comprehensive evaluation of future technologies from a variety of vendors. Development of these supercomputers is well underway, with installation expected in 2017.