Flow Science Partners with FRIENDSHIP SYSTEMS for Simulation Optimization

“The integration of FLOW-3D with CAESES creates a powerful design environment for our users. FLOW-3D’s inherent ease of modifying geometry is even more potent when combined with an optimization tool like CAESES, which specializes in optimizing for geometry as well as other parametric studies,” said Flow Science Vice President of Sales and Business Development, Amir Isfahani.

DDN Names Robert Triendl SVP Global Sales, Marketing and Field Services

“DDN has a long history of technological innovation, a great team and a phenomenal market opportunity,” said Triendl. “I am excited to help realize what I believe is tremendous potential to extend our unmatched delivery of performance and capacity at scale – far beyond what most can even imagine.”

Slidecast: For AMD, It’s Time to ROCm!

“AMD has been away from the HPC space for a while, but now they are coming back in a big way with an open software approach to GPU computing. The Radeon Open Compute Platform (ROCm) was born from the Boltzmann Initiative announced last year at SC15. Now available on GitHub, the ROCm Platform brings a rich foundation to advanced computing by better integrating the CPU and GPU to solve real-world problems.”

Nvidia Expands Deep Learning Institute

Over at the Nvidia Blog, Jamie Beckett writes that the company is expanding its Deep Learning Institute in partnership with Microsoft and Coursera. The institute provides training to help people apply deep learning to solve challenging problems.

Video: Intel Scalable System Framework

Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work for small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency and allow you to spend more time processing and less time waiting.”

OnDemand 3.0 Portal to Power Owens Supercomputer at OSC

“We’re currently installing the most powerful supercomputer in the history of the center, but it’s just a roomful of hardware without fast and easy access for our clients,” said David Hudak, interim executive director of OSC. “OnDemand 3.0 provides them with seamless, flexible access to all our computer and storage services.”

Podcast: How PyLadies are Increasing Diversity in Coding and Data Science

In this Intel Chip Chat podcast, Dr. Julie Krugler Hollek, co-organizer of PyLadies San Francisco and Data Scientist at Twitter, joins Allyson Klein to discuss efforts to democratize participation in open source communities and the future of data science. “PyLadies helps people who identify as women become participants in open source Python projects like The SciPy Stack, a specification that provides access to machine learning and data visualization tools.”
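The SciPy Stack mentioned above bundles packages such as NumPy, SciPy, and scikit-learn. As a rough illustration (not drawn from the podcast), a minimal workflow combining those pieces might look like the sketch below; the synthetic data and the choice of model are assumptions made purely for the example.

```python
# A minimal sketch of a SciPy Stack workflow: NumPy arrays, a SciPy
# statistical test, and a scikit-learn classifier. The package names are
# real; the data here is synthetic and exists only for illustration.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic two-class data: 200 samples, 2 features.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# SciPy: quick statistical check that the first feature differs between classes.
t_stat, p_value = stats.ttest_ind(X[y == 0, 0], X[y == 1, 0])
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")

# scikit-learn: fit and score a simple classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```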

Nvidia Unveils World’s First GPU Design for Inferencing

Nvidia’s GPU platforms have been widely used on the training side of the Deep Learning equation for some time now. Today the company announced a new Pascal-based GPU tailor-made for the inferencing side of Deep Learning workloads. “With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries,” said Ian Buck, general manager of accelerated computing at NVIDIA.

Measuring HPC: Performance, Cost, & Value

Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”
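For readers new to the metrics the abstract names, a toy calculation can make TCO and ROI concrete. The sketch below is only an illustration; the figures are invented placeholders rather than numbers from NAG or the talk, and a real TCO analysis would include many more cost categories.

```python
# Toy example of the cost and value metrics named in the abstract.
# All figures are invented placeholders, not NAG's numbers.
hardware_cost = 2_000_000        # capital spend on the system, in dollars
annual_power_cooling = 250_000   # yearly operating cost
annual_staff = 300_000           # yearly admin/support cost
lifetime_years = 4

# Simplified Total Cost of Ownership: capital plus operating costs over the lifetime.
tco = hardware_cost + lifetime_years * (annual_power_cooling + annual_staff)

# Value side: estimated benefit attributed to HPC over the same lifetime.
estimated_benefit = 15_000_000

# Classic Return on Investment: net gain divided by cost.
roi = (estimated_benefit - tco) / tco

print(f"TCO over {lifetime_years} years: ${tco:,.0f}")
print(f"ROI: {roi:.1f}x (i.e., {roi * 100:.0f}%)")
```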

EU Funds BSC Researcher to Develop New Aircraft Simulation Tools

BSC researcher Xevi Roca will receive one of the European Union’s most prestigious research grants for a project to create new simulation methods to respond to the aviation industry’s most pressing challenges. Roca, who has been working on geometry for aeronautical simulation since 2004, is proposing to integrate time as a dimension into the geometries of simulations. The aim is to improve the efficiency, accuracy and robustness of the aerodynamic performance simulations carried out on supercomputers such as BSC’s MareNostrum.