In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment.
In this video from GTC 2016 in Taiwan, Nvidia CEO Jen-Hsun Huang unveils technology that will accelerate the deep learning revolution that is sweeping across industries. “AI computing will let us create machines that can learn and behave as humans do. It’s the reason why we believe this is the beginning of the age of AI.”
In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC’s new Exascale Tracking Study, which will monitor the many Exascale projects around the world.
In this video, Better Markets CEO Dennis Kelleher discusses the progress of the Consolidated Audit Trail (CAT), a proposed SEC supercomputer that will be used to track orders and peer into dark pools. While this sounds like a good idea, Kelleher describes the conflicts of interest inherent in the proposal process the SEC is using for CAT. Kelleher is the CEO of Better Markets, a non-profit, non-partisan, and independent organization founded in the wake of the 2008 financial crisis to promote the public interest in the financial markets.
“Engineers at Cray noted that the HPC community was hungry for alternative parallel programming languages and developed Chapel as part of our response. The reaction from HPC users so far has been very encouraging—most would be excited to have the opportunity to use Chapel once it becomes production-grade.”
Today the University of Alabama at Birmingham unveiled a new supercomputer powered by Dell. With a peak performance of 110 Teraflops, the system is 10 times faster than its predecessor. “With their new Dell EMC HPC cluster, UAB researchers will have the compute and storage they need to aggressively research, uncover and apply knowledge that changes the lives of individuals and communities in many areas, including genomics and personalized medicine.”
In this video from the 2016 HPC User Forum in Austin, a select panel of HPC vendors describe their disruptive technologies for high performance computing. Vendors include: Altair, SUSE, ARM, AMD, Ryft, Red Hat, Cray, and Hewlett Packard Enterprise. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances.”
Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work for everything from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency and allow you to spend more time processing and less time waiting.”
Today the Energy Department’s Advanced Manufacturing Office announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department’s national laboratories to tackle major manufacturing challenges. The High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology in manufacturing, with an aim of increasing energy efficiency, advancing clean energy technology, and reducing energy’s impact on the environment.
Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”
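The metrics Jones names lend themselves to simple formulas. As a minimal sketch (not taken from the talk itself, and assuming the value of the science delivered can be quantified as a single figure V), TCO and ROI over a system lifetime of T years might be written as:

\[
\mathrm{TCO} = C_{\text{capital}} + \sum_{t=1}^{T} C_{\text{ops}}(t),
\qquad
\mathrm{ROI} = \frac{V - \mathrm{TCO}}{\mathrm{TCO}}
\]

where C_capital covers acquisition and C_ops(t) covers year-t power, staffing, facilities, and support. As the talk abstract hints, the daunting part is not the arithmetic but quantifying V and capturing every cost term in TCO.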