Maria Chan from NST presented this talk at Argonne Out Loud. “People eagerly anticipate environmental benefits from advances in clean energy technologies, such as advanced batteries for electric cars and thin-film solar cells. Optimizing these technologies for peak performance requires an atomic-level understanding of the designer materials used to make them. But how is that achieved? Maria Chan will explain how computer modeling is used to investigate and even predict how materials behave and change, and how researchers use this information to help improve the materials’ performance. She will also discuss the open questions, challenges, and future strategies for using computation to advance energy materials.”
Larry Smarr presented this talk as part of NCSA’s 30th Anniversary Celebration. “For the last thirty years, NCSA has played a critical role in bringing computational science and scientific visualization to the national user community. I will embed those three decades in the 50-year period 1975 to 2025, beginning with my solving Einstein’s equations for colliding black holes on the megaFLOPS CDC 6600 and ending with the exascale supercomputer. These 50 years span a period in which we will have seen a one trillion-fold increase in supercomputer speed.”
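The trillion-fold figure follows directly from the units: megaFLOPS is on the order of 10^6 floating-point operations per second, while exascale means 10^18. A quick sanity check of that arithmetic (the specific machine ratings here are order-of-magnitude illustrations, not benchmark numbers):

```python
# Order-of-magnitude arithmetic behind the "one trillion-fold" claim.
megaflops = 10**6   # ~1975: megaFLOPS-class machine (e.g. CDC 6600 era)
exaflops = 10**18   # ~2025: exascale supercomputer

speedup = exaflops // megaflops
print(speedup)  # 1000000000000 -> 10^12, i.e. one trillion-fold
```

Fifty years for a 10^12 increase works out to roughly a thousandfold gain per 25-year half of that span.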
In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment.
In this video from GTC 2016 in Taiwan, Nvidia CEO Jen-Hsun Huang unveils technology that will accelerate the deep learning revolution that is sweeping across industries. “AI computing will let us create machines that can learn and behave as humans do. It’s the reason why we believe this is the beginning of the age of AI.”
In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC’s new Exascale Tracking Study, a project that will monitor the many exascale initiatives under way around the world.
In this video, Better Markets CEO Dennis Kelleher discusses the progress of the Consolidated Audit Trail (CAT), a proposed SEC supercomputer that will be used to track orders and peer into dark pools. While this sounds like a good idea, Kelleher describes the conflicts of interest inherent in the proposal process the SEC is using for CAT. Kelleher is the CEO of Better Markets, a non-profit, non-partisan, and independent organization founded in the wake of the 2008 financial crisis to promote the public interest in the financial markets.
“Engineers at Cray noted that the HPC community was hungry for alternative parallel programming languages and developed Chapel as part of our response. The reaction from HPC users so far has been very encouraging—most would be excited to have the opportunity to use Chapel once it becomes production-grade.”
Today the University of Alabama at Birmingham unveiled a new supercomputer powered by Dell. With a peak performance of 110 Teraflops, the system is 10 times faster than its predecessor. “With their new Dell EMC HPC cluster, UAB researchers will have the compute and storage they need to aggressively research, uncover and apply knowledge that changes the lives of individuals and communities in many areas, including genomics and personalized medicine.”
In this video from the 2016 HPC User Forum in Austin, a select panel of HPC vendors describe their disruptive technologies for high performance computing. Vendors include: Altair, SUSE, ARM, AMD, Ryft, Red Hat, Cray, and Hewlett Packard Enterprise. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances.”
Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work for everything from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency, and allow you to spend more time processing and less time waiting.”