Call for Exhibitors: PASC17 in Lugano

Industry and academic institutions are invited to showcase their R&D at PASC17, an interdisciplinary event in high performance computing that brings together domain science, applied mathematics and computer science. The event takes place June 26-28 in Lugano, Switzerland. “The PASC17 Conference offers a unique opportunity for your organization to gain visibility at a national and international level, to showcase your R&D and to network with leaders in the fields of HPC simulation and data science. PASC17 builds on a successful history – with 350 attendees in 2016 – and continues to expand its program and international profile year on year.”

Addison Snell Presents: HPC Computing Trends

Addison Snell presented this deck at the Stanford HPC Conference. “Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations.”

Earlham Institute Moves HPC Workloads to Iceland

In this video, Dr Tim Stitt from the Earlham Institute describes why moving their HPC workload to Iceland made economic sense. Through the Verne Global datacenter, the Earlham Institute will have access to one of the world’s most reliable power grids, producing 100% renewable energy from geothermal and hydro-electric sources. As EI’s HPC analysis requirements continue to grow, Verne Global will enable the institute to save up to 70% in energy costs (based on a drop from 14p to 4p per kWh, with no additional power needed for cooling), significantly benefiting the organization in its advanced genomics and bioinformatics research of living systems.
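The headline figure follows directly from the two tariffs quoted above; a quick back-of-the-envelope check (illustrative only, using the article's numbers):

```python
# Sanity check of the quoted savings: moving from a 14p/kWh tariff to
# 4p/kWh cuts the energy bill by (14 - 4) / 14, roughly 71 percent,
# in line with the article's "up to 70%" (before any cooling savings).
old_rate = 14.0  # pence per kWh (figure quoted in the article)
new_rate = 4.0   # pence per kWh (figure quoted in the article)
print(f"Energy cost reduction: {1 - new_rate / old_rate:.0%}")
```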

Asperitas Startup Brings Immersive Cooling to Datacenters

Today Dutch startup Asperitas rolled out its Immersed Computing cooling technology for datacenters. The company’s first market-ready solution, the AIC24, is described as “the first water-cooled oil-immersion system which relies on natural convection for circulation of the dielectric liquid,” resulting in a fully self-contained, plug-and-play modular system. According to Asperitas, the AIC24 needs far less infrastructure than any other liquid installation, saving energy and costs at every level of datacentre operations, and it increases capacity and density while reducing energy use; the company bills it as the most sustainable solution available for IT environments today.

Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech

“TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5,” writes Marc Hamilton from Nvidia. “It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer.”

Video: The Era of Self-Tuning Servers

“Servers today have hundreds of knobs that can be tuned for performance and energy efficiency. While some of these knobs can have a dramatic effect on these metrics, manually tuning them is a tedious task. It is very labor intensive, it requires a lot of expertise, and the tuned settings are only relevant for the hardware and software that were used in the tuning process. In addition to that, manual tuning can’t take advantage of application phases that may each require different settings. In this presentation, we will talk about the concept of dynamic tuning and its advantages. We will also demo how to improve performance using manual tuning as well as dynamic tuning using DatArcs Optimizer.”
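To make the idea concrete, a dynamic tuner can be sketched as a measure-perturb-keep feedback loop. The sketch below is a hypothetical illustration in Python, not the DatArcs Optimizer: run_workload_and_measure() is a stand-in for applying a real knob and reading a performance counter.

```python
import random

def run_workload_and_measure(knob: int) -> float:
    """Hypothetical stand-in: a real tuner would apply the knob (e.g., a
    prefetcher or frequency setting) and measure application throughput.
    Here we fake a response curve that peaks at knob == 5."""
    return 100.0 - (knob - 5) ** 2 + random.uniform(-0.5, 0.5)

def dynamic_tune(steps: int = 30) -> int:
    """Simple hill climber: perturb the knob, keep changes that help."""
    knob = 0
    best = run_workload_and_measure(knob)
    for _ in range(steps):
        candidate = min(7, max(0, knob + random.choice((-1, 1))))
        score = run_workload_and_measure(candidate)
        if score > best:
            knob, best = candidate, score
    return knob

print("Tuned knob setting:", dynamic_tune())  # typically converges near 5
```

Because the loop re-measures continuously, the same mechanism can follow application phases, re-tuning when the workload's behavior shifts, which is exactly what manual tuning cannot do.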

Exxon Mobil and NCSA Achieve New Levels of Scalability on Complex Oil & Gas Reservoir Simulation Models

“This breakthrough has unlocked new potential for ExxonMobil’s geoscientists and engineers to make more informed and timely decisions on the development and management of oil and gas reservoirs,” said Tom Schuessler, president of ExxonMobil Upstream Research Company. “As our industry looks for cost-effective and environmentally responsible ways to find and develop oil and gas fields, we rely on this type of technology to model the complex processes that govern the flow of oil, water and gas in various reservoirs.”

Deep Learning & HPC: New Challenges for Large Scale Computing

“In recent years, major breakthroughs have been achieved in many fields using deep learning. From image segmentation and speech recognition to self-driving cars, deep learning is everywhere. The performance of image classification, segmentation, and localization has reached levels not seen before, thanks to GPUs and large-scale GPU-based deployments, making deep learning a first-class HPC workload.”

ISC 2017 Distinguished Talks to Focus on Data Analytics in Manufacturing & Science

Today ISC 2017 announced that its Distinguished Talk series will focus on data analytics in manufacturing and scientific applications. One of the Distinguished Talks will be given by Dr. Sabine Jeschke of the Cybernetics Lab at RWTH Aachen University on the topic “Robots in Crowds – Robots and Clouds.” Jeschke’s presentation will be followed by one from physicist Kerstin Tackmann of the German Electron Synchrotron (DESY) research center, who will discuss big data and machine learning techniques used for the ATLAS experiment at the Large Hadron Collider.

Six Steps Towards Better Performance on Intel Xeon Phi

“As with all new technology, developers will have to create processes in order to modernize applications to take advantage of any new feature. Rather than randomly trying to improve the performance of an application, it is wise to be very familiar with the application and use available tools to understand bottlenecks and look for areas of improvement.”
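In that spirit, profiling comes before any optimization. As a minimal, hedged illustration of the pattern (in Python rather than a Xeon Phi-specific toolchain, with a hypothetical hotspot() standing in for an application kernel):

```python
import cProfile
import pstats

def hotspot(n: int) -> float:
    """Hypothetical compute kernel standing in for an application hotspot."""
    total = 0.0
    for i in range(n):
        total += (i % 7) * 0.5
    return total

def main():
    hotspot(2_000_000)

# Profile first, optimize second: find where the time actually goes
# before touching any code.
cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```

On a real Xeon Phi port, the same discipline applies with tools such as Intel VTune or vectorization reports: identify the dominant loops first, then target them for vectorization and threading.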