
Radio Free HPC Previews Tutorials at SC15


In this podcast, the Radio Free HPC team previews three of the excellent Tutorial sessions coming up at SC15. “The SC tutorials program is one of the highlights of the SC Conference series, and it is one of the largest tutorial programs at any computing-related conference in the world. It offers attendees the chance to learn from and to interact with leading experts in the most popular areas of high performance computing (HPC), networking, and storage.”

New Intel® Omni-Path White Paper Details Technology Improvements

Rob Farber

The Intel® Omni-Path Architecture (Intel OPA) white paper details the many improvements that Intel OPA technology brings to the HPC community. In particular, HPC readers will appreciate how collective operations can be optimized for message size, communicator size, and topology using point-to-point send and receive primitives.
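One common way a collective is built from point-to-point primitives is a binomial tree, which completes a broadcast in ceil(log2(P)) rounds rather than P-1 sequential sends from the root. The sketch below is purely illustrative (it is not Intel's implementation, and the function name is hypothetical): it computes the (round, sender, receiver) schedule such a tree would generate, which is the kind of structure a library tunes against message size and topology.

```python
def binomial_bcast_schedule(num_ranks, root=0):
    """Return a list of (round, sender, receiver) point-to-point steps
    that broadcast from `root` to all ranks via a binomial tree.

    Each round, every rank that already holds the data forwards it to
    one rank that does not, so the set of holders doubles per round.
    Illustrative sketch only, not an actual Intel OPA code path.
    """
    steps = []
    have = [root]                     # ranks that hold the data so far
    rnd = 0
    while len(have) < num_ranks:
        pending = [r for r in range(num_ranks) if r not in have]
        newly_reached = []
        # pair each current holder with at most one pending rank
        for src, dst in zip(have, pending):
            steps.append((rnd, src, dst))
            newly_reached.append(dst)
        have.extend(newly_reached)
        rnd += 1
    return steps

# For 8 ranks: 7 point-to-point sends spread over 3 rounds,
# instead of 7 sequential sends from rank 0.
schedule = binomial_bcast_schedule(8)
```

For large messages a different decomposition (e.g. scatter followed by allgather) typically wins, which is why, as the white paper notes, the optimal strategy depends on message size, communicator size, and topology.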

The Price of Open-source Software – a Joint Response


Should all academic software be released as open source by default? “Ultimately, we must accept that research is best served through using a combination of open-source and proprietary software, through developing new software and through the use of existing software. This approach allows the research community to focus on what is optimal for scientific discovery: the one point on which everyone in this debate agrees.”

From Grand Challenges to Critical Workflows


Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. “For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the “main” application.”

Experts Focus on Code Efficiency at ISC 2015

DK Panda from Ohio State University conducts a tutorial at ISC 2015.

In this special guest feature, Robert Roe from Scientific Computing World explores the efforts made by top HPC centers to scale software codes to the extreme levels necessary for exascale computing. “The speed with which supercomputers process useful applications is more important than rankings on the TOP500, experts told the ISC High Performance Conference in Frankfurt last month.”

Interview: An Exhibits Preview of SC15


With summer winding down, SC15 is just around the corner. A smaller exhibit space than in previous years presented SC15 Exhibits Chair Trey Breckenridge with a number of challenges going into this year's Supercomputing conference. In this interview from the SC15 Blog, Breckenridge gives us a preview of what looks to be another great exhibition.

Research Demands More Compute Power and Faster Storage for Complex Computational Applications


Many universities, private research labs, and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators, and flash storage arrays to accelerate a wide array of research across disciplines in math, science, and engineering. These labs utilize GPUs for parallel processing and flash memory for storing large datasets. Many universities have HPC labs where students and researchers share resources in order to analyze and store vast amounts of data more quickly.

Innovation Keeps Supercomputers Cool


“The range of cooling options now available is testimony to engineering ingenuity. HPC centers can choose between air, oil, dielectric fluid, or water as the heat-transfer medium. Opting for something other than air means that single or two-phase flow could be available, opening up the possibilities of convective or evaporative cooling and thus saving the cost of pumping the fluid round the system.”

Training the Next Generation of Code Developers for HPC – Part 2

Rob Farber gives a tutorial at SC14

This is the second article in a two-part series about the challenges facing the HPC community in training people to write code and develop algorithms for current and future massively parallel, massive-scale HPC systems.

Maximizing Benefits of HPC with the National Strategic Computing Initiative

Jorge Titinger, SGI

“What we’re seeing in President Obama’s Executive Order is a major proof point of the importance of high-end computer technology in bolstering and redefining national competitiveness. In the past, a country’s competitiveness and global power was defined by economic growth and defense capabilities. But now we’re seeing the advent of actionable technological insight—especially derived from the power of big data—becoming a factor of a country’s power.”