Nikos Trikoupis from the City University of New York gave this talk at the HPC User Forum in Austin. “We focus on measuring the aggregate throughput delivered by 12 Intel SSD DC P3700 NVMe cards installed on the SGI UV 300 scale-up system in the CUNY High Performance Computing Center. We establish a performance baseline for a single SSD. The 12 SSDs are then assembled into a single RAID-0 volume using Linux Software RAID and the XVM Volume Manager. The aggregate read and write throughput is measured across several configurations, including the XFS and GPFS file systems.”
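For readers who want to reproduce a rough version of this kind of baseline, the sketch below (Python, not the presenters' harness; the mount point and transfer sizes are hypothetical) times a large sequential write and read against a file on the mounted volume and reports MiB/s. Production measurements would normally use a dedicated tool such as fio with direct I/O, since the page cache inflates buffered read figures.

```python
# Minimal throughput sketch against a mounted RAID-0 volume.
# PATH, BLOCK, and COUNT are hypothetical placeholders, not the
# configuration used in the talk.
import os
import time

PATH = "/mnt/raid0/throughput.tmp"   # hypothetical mount point
BLOCK = 4 * 1024 * 1024              # 4 MiB per I/O
COUNT = 2048                         # ~8 GiB total

def write_test():
    """Sequentially write COUNT blocks, fsync, and return MiB/s."""
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return (BLOCK * COUNT / (1024 ** 2)) / (time.time() - start)

def read_test():
    """Sequentially read the file back and return MiB/s.
    Note: the page cache will inflate this figure on a warm cache."""
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    return (BLOCK * COUNT / (1024 ** 2)) / (time.time() - start)

if __name__ == "__main__":
    print(f"write: {write_test():.0f} MiB/s")
    print(f"read:  {read_test():.0f} MiB/s")
    os.remove(PATH)
```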
“Starting in 2015, Oak Ridge National Laboratory partnered with the University of Tennessee to offer a minor-degree program in data center technology and management, one of the first of its kind in the country. ORNL staff members developed the senior-level course in collaboration with UT College of Engineering professor Mark Dean after an ORNL strategic partner identified a need for employees who could bridge both the facilities and operational aspects of running a data center. In addition to developing the course curriculum, ORNL staff members are also serving as guest lecturers.”
In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC’s new Exascale Tracking Study. The project will monitor the many Exascale projects around the world.
In this video from the 2016 HPC User Forum in Austin, a select panel of HPC vendors describe their disruptive technologies for high performance computing. Vendors include: Altair, SUSE, ARM, AMD, Ryft, Red Hat, Cray, and Hewlett Packard Enterprise. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances.”
Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work for everything from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency, and allow you to spend more time processing and less time waiting.”
Today the Energy Department’s Advanced Manufacturing Office announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department’s national laboratories to tackle major manufacturing challenges. The High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high-performance computing (HPC) to advance applied science and technology in manufacturing, with the aim of increasing energy efficiency, advancing clean energy technology, and reducing energy’s impact on the environment.
Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”
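As a rough illustration of the TCO-style accounting the talk alludes to, the sketch below (all figures invented for the example, not NAG's methodology) amortizes capital cost over a system lifetime, adds operating cost, and derives a cost per used core-hour.

```python
# Hypothetical illustration of a simple TCO / cost-per-core-hour calculation.
# All figures are invented for the example; real TCO models also include
# staff, facilities, software licensing, data center build-out, and more.
capex = 2_000_000          # purchase price in dollars
opex_per_year = 300_000    # power, cooling, and support per year
lifetime_years = 4
cores = 10_000
utilization = 0.80         # fraction of available core-hours actually used

tco = capex + opex_per_year * lifetime_years
available_core_hours = cores * 24 * 365 * lifetime_years
used_core_hours = available_core_hours * utilization

print(f"TCO over {lifetime_years} years: ${tco:,.0f}")
print(f"Cost per used core-hour: ${tco / used_core_hours:.4f}")
```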
Leonardo Flores from the European Commission presented this talk at the HPC User Forum. “The Cloud Initiative will make it easier for researchers, businesses and public services to fully exploit the benefits of Big Data by making it possible to move, share and re-use data seamlessly across global markets and borders, and among institutions and research disciplines. Making research data openly available can help boost Europe’s competitiveness, especially for start-ups, SMEs and companies that can use data as a basis for R&D and innovation, and it can even spur new industries.”
Yutaka Ishikawa from RIKEN AICS presented this talk at the HPC User Forum. “Slated for delivery sometime around 2022, the ARM-based Post-K Computer has a performance target of 100 times the speed of the original K computer within a power envelope only 3-4 times that of its predecessor. RIKEN AICS has been appointed as the main organization leading the development of the Post-K.”
“Nimbis was founded in 2008 by HPC industry veterans Robert Graybill and Brian Schott to act as the first nationwide brokerage clearinghouse for a broad spectrum of integrated cloud-based HPC platforms and applications. Our fully integrated online Technical Computing Marketplace comprises several stores hosting modeling and simulation applications on HPC platforms in the cloud.”