Stanford HPC Conference Posts Preliminary Agenda

The Stanford HPC Conference has posted its Preliminary Agenda. The two-day event takes place Feb. 20-21 at Stanford University in California. “Join the Stanford High Performance Computing Center, HPC Advisory Council, its members and experts from all over the world for two days of invited and contributed talks and immersive tutorials on topics of great societal impact and responsibility! February’s open forum brings industry luminaries and leading subject matter experts together to examine emerging and major domains and share in-depth insights on AI, Data Sciences, HPC, Machine Learning and more.”

Swiss HPC Conference Returns to Lugano in April with Winter HPCXXL User Group

Today the HPC Advisory Council announced that registration is now open for the Swiss HPC Conference. The event takes place April 9-12 in Lugano, Switzerland. For the first time, the conference will be held in concert with the Winter HPCXXL User Group meeting. “We are very excited to organize a joint conference here in Lugano, bringing together the communities of HPCAC and HPCXXL,” said Hussein Harake, HPC system manager, CSCS. “We believe that such a collaboration will offer a unique opportunity for HPC professionals to discuss and share their knowledge and experiences.”

Industry Analysis: AI and Deep Learning – the Voice of the Market

In this video, Dan Olds from OrionX presents insights from their Q2-Q3 2017 Survey on Artificial Intelligence/Machine Learning/Deep Learning – one of the industry’s most comprehensive AI/ML/DL surveys to date with more than 144 data points. “Dan Olds talks the audience through the demographics and questions, respondents’ understanding of AI/ML/DL, current projects, who is driving AI in organizations, project attributes and more.”

Video: The Era of Data-Centric Data Centers

Gilad Shainer gave this talk at the HPC Advisory Council Spain Conference. “The latest revolution in HPC is the move to a co-design architecture, a collaborative effort among industry, academia, and manufacturers to reach Exascale performance. By taking a holistic system-level approach to fundamental performance improvements, co-design architectures exploit system efficiency and optimize performance by creating synergies between the hardware and the software.”

Video: HPC Meets Machine Learning

Andres Gómez Tato from CESGA gave this talk at the HPC Advisory Council Spain Conference. “With the explosion of Deep Learning thanks to the availability of large volumes of data, computational resources are needed to train large models, using GPUs and distributed computing. When working on large models, HPC infrastructures can help to speed up some tasks during model design and training. Based on the experience at CESGA and FORTISSIMO, this talk reviews the computational needs of Deep Learning, the use cases where HPC can help Machine Learning, the performance of available Machine Learning APIs and the parallel methods commonly used during ML training.”

Video: Why your school should enter the ISC Student Cluster Competition

In this video, future HPC professionals discuss their participation in the ISC Student Cluster Competition. “Now in its seventh year, the Student Cluster Competition enables international teams to take part in a real-time contest focused on advancing STEM disciplines and HPC skills development. To take home top honors, the teams will have to showcase systems of their own design, adhere to strict power constraints, and achieve the highest performance across a series of standard HPC benchmarks and applications.”

Seeking Teams for the 2018 Student Cluster Competition at ISC in Frankfurt

The HPC Advisory Council has officially kicked off the ISC-HPCAC Student Cluster Competition (SCC) with an open call for team entries in the 2018 competition. “The Student Cluster Competition provides a real-world hands-on education that directly benefits students and their individual studies,” noted Gilad Shainer, chairman of the HPC Advisory Council. “Team members gain access to a wealth of industry expertise, training and tools and hands-on exposure to a range of technologies and techniques they’ll use for competition and throughout their careers. By helping advance their knowledge and capabilities, the entire HPC community benefits.”

Video: The State of Bioinformatics in HPC

“In the last few years, DNA sequencing technologies have become extremely cheap, enabling us to quickly generate terabytes of data for a few thousand dollars. Analysis of this data has become the new bottleneck. This talk presents novel compute-intensive streaming approaches that exploit this data without the time-costly step of genome assembly, and shows how UWA’s Edwards group leveraged these approaches to find new breeding targets in crop species.”

Video: DDN Burst Buffer

Justin Glen and Daniel Richards from DDN presented this talk at the HPC Advisory Council Australia Conference. “Burst Buffer was originally created for checkpoint-restart of applications and has evolved to help accelerate applications and file systems and make HPC clusters more predictable. This presentation explores regional use cases, recommendations on burst buffer sizing and investment, and where it is best positioned in an HPC workflow.”

Beating Floating Point at its own game: Posit Arithmetic

“Dr. Gustafson has recently finished writing a book, The End of Error: Unum Computing, that presents a new approach to computer arithmetic: the unum. The universal number, or unum format, encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This approach obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power.”
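The accuracy problem that unums and posits target can be seen even without a posit library: IEEE binary floating point cannot represent simple decimal fractions exactly, so errors accumulate silently. The Python sketch below illustrates the kind of rounding error Gustafson's formats are designed to expose or avoid; it uses standard doubles and exact rational arithmetic for comparison, not posits themselves (a posit implementation would require a third-party library).

```python
from fractions import Fraction

# IEEE double precision: 0.1 has no exact binary representation,
# so repeated addition quietly accumulates rounding error.
total = sum([0.1] * 10)
print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# Exact rational arithmetic gives the mathematically intended answer,
# making the hidden floating-point error visible.
exact = sum([Fraction(1, 10)] * 10)
print(exact == 1)     # True
```

Unum arithmetic addresses this by tracking whether a result is exact or lies in an open interval between representable values, rather than silently rounding as IEEE floats do.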