An Open Letter to the HPC Community

Altair is making a major investment in uniting the whole HPC community to accelerate the state of the art (and the state of actual production operations) in HPC scheduling. The company is joining the OpenHPC project with PBS Pro, with a focus on longevity: creating a viable, sustainable community around job scheduling software that can truly bridge the gap in the HPC world.

Spectra Logic Rolls Out World’s Largest Capacity Tape Library

Today Spectra Logic announced the Spectra TFinity ExaScale Edition, the world’s largest and most richly featured tape storage system. “Since 2008, Spectra Logic has worked with engineers in the NASA Advanced Supercomputing (NAS) Division at NASA’s Ames Research Center, in California’s Silicon Valley, first deploying a Spectra tape library with 22 petabytes of capacity. According to NASA, the Spectra tape library’s capacity has grown to approximately half an exabyte of archival storage today. After extensive testing over the past year, NASA recently deployed a Spectra TFinity ExaScale Edition in its 24×7 production HPC environment.”

Podcast: Through the Looking Glass at Quantum Computing

“As a research area, quantum computing is highly competitive, but if you want to buy a quantum computer then D-Wave Systems, founded in 1999, is the only game in town. Quantum computing is as promising as it is unproven. It goes beyond Moore’s law, since every additional quantum bit (qubit) doubles the computational power, much like the famous wheat and chessboard problem. So the payoff is huge, even though the technology is expensive, unproven, and difficult to program.”
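
The doubling claim is easy to make concrete: an n-qubit register is described by 2^n complex amplitudes, so the cost of simulating it classically doubles with every added qubit, just as the grains of wheat double on each square of the chessboard. A back-of-the-envelope sketch in Python (the 16-bytes-per-amplitude figure assumes a naive complex128 state-vector simulation; it is our illustration, not a number from the podcast):

```python
# Illustrative only: show how the classical cost of representing an
# n-qubit state grows, doubling with each added qubit.
for n in (1, 2, 10, 20, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30   # 16 bytes per complex128 amplitude
    print(f"{n:>2} qubits -> 2**{n} = {amplitudes:,} amplitudes (~{gib:.2e} GiB)")
```

At 50 qubits the naive state vector already needs roughly 16 PiB of memory, which is why even modest quantum devices are hard to simulate classically.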

Long Live the King – The Complicated Business of Upgrading Legacy HPC Systems

“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technologies and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking – all of which must fit into the available space. In most cases, however, it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single platform, or can limit the adoption of new architectures because of the added complexity of code modernization and of porting existing codes to new technology platforms.”

In Search Of: A Quantum Leap in Processors

The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of this year came news of several developments that could break through silicon’s performance barrier and herald an age of smaller, faster, lower-power chips. Some of these could be commercially viable within the next few years.

HPC Matters to Aerospace

In this video from the SC15 HPC Matters series, NASA Aerospace Engineer Dr. Shishir Pandya describes how high-performance computing helps advance airplane and rocket technologies. “Why does high-performance computing matter? Because science matters! Discovery matters! Human beings are seekers, questers, questioners. And when we get answers, we ask bigger questions. HPC extends our reach, putting more knowledge, more discovery, and more innovation within our grasp. With HPC, the future is ours to create! HPC Matters!”

Video: Prologue O/S – Improving the Odds of Job Success

“When looking to buy a used car, you kick the tires, make sure the radio works, check underneath for leaks, etc. You should be just as careful when deciding which nodes to use to run job scripts. At the NASA Advanced Supercomputing Facility (NAS), our prologue and epilogue have grown almost into an extension of the O/S to make sure resources that are nominally capable of running jobs are, in fact, able to run the jobs. This presentation describes the issues and solutions used by the NAS for this purpose.”
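
To make the mechanism concrete, here is a hypothetical sketch of what a prologue-style health check might look like. The check names, thresholds, and use of Python are illustrative assumptions, not the actual NAS prologue; the key idea is the exit code, since a nonzero exit from a prologue typically tells the scheduler not to start the job on that node (exact requeue-versus-abort semantics depend on the scheduler’s prologue conventions).

```python
#!/usr/bin/env python3
# Hypothetical prologue-style node health check (illustrative, not NAS code).
import os
import shutil
import sys

MIN_FREE_TMP_GIB = 5      # assumed threshold: scratch space a job may need
MAX_LOAD_PER_CPU = 0.5    # assumed threshold: leftover load from prior jobs

def tmp_free_gib(path="/tmp"):
    """Free space on the node's scratch filesystem, in GiB."""
    return shutil.disk_usage(path).free / 2 ** 30

def load_per_cpu():
    """1-minute load average normalized by CPU count."""
    return os.getloadavg()[0] / os.cpu_count()

checks = {
    "scratch space": tmp_free_gib() >= MIN_FREE_TMP_GIB,
    "idle load": load_per_cpu() <= MAX_LOAD_PER_CPU,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    print(f"prologue: node unhealthy ({', '.join(failed)})", file=sys.stderr)
    sys.exit(1)  # nonzero exit: do not start the job on this node
sys.exit(0)      # node looks healthy; let the job run
```

A production prologue would check far more (filesystem mounts, GPU state, stray processes from earlier jobs), but the pattern is the same: cheap checks up front so jobs land only on nodes that can actually run them.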

Evolution of NASA Earth Science Data Systems in the Era of Big Data

Christopher Lynnes from NASA presented this talk at the HPC User Forum. “The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from many sources: satellites, aircraft, field measurements, and various other programs.”

NASA Charts Sea Level Rise

“Sea level rise is one of the most visible signatures of our changing climate, and rising seas have profound impacts on our nation, our economy and all of humanity,” said Michael Freilich, director of NASA’s Earth Science Division. “By combining space-borne direct measurements of sea level with a host of other measurements from satellites and sensors in the oceans themselves, NASA scientists are not only tracking changes in ocean heights but are also determining the reasons for those changes.”

Video: High-Throughput Processing of Space Debris Data

“Space debris consists of defunct objects in space, including old space vehicles (such as satellites or spent rocket stages) and fragments from collisions. Space debris can cause great damage to functioning spacecraft and satellites, so detecting debris and predicting its orbital path are essential to the operation of today’s space missions. The talk presents the Python-based infrastructures BACARDI, for gathering and storing space debris data from sensors, and Skynet, for high-throughput data processing and orbital collision detection.”
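
As a toy illustration of the collision-detection step (this is not BACARDI or Skynet code; the object names, positions, and 10 km screening threshold are invented), a naive conjunction screen simply flags pairs of objects whose positions at a given epoch fall within a screening distance. Real pipelines would first propagate full orbits from tracking data rather than take positions as given:

```python
# Toy conjunction screen: flag object pairs closer than a threshold
# at one epoch. Positions are assumed Earth-centered coordinates in km.
from itertools import combinations
from math import dist

def close_approaches(positions, threshold_km=10.0):
    """Return object-ID pairs closer than threshold_km at this epoch."""
    return [
        (a, b)
        for (a, pa), (b, pb) in combinations(positions.items(), 2)
        if dist(pa, pb) < threshold_km
    ]

# Hypothetical snapshot of positions (km) for four tracked objects.
snapshot = {
    "SAT-A":    (7000.0,  100.0,   0.0),
    "DEBRIS-1": (7004.0,   98.0,   2.0),
    "SAT-B":    (-4200.0, 5300.0, 900.0),
    "DEBRIS-2": (7100.0,  250.0, -40.0),
}

print(close_approaches(snapshot))  # [('SAT-A', 'DEBRIS-1')]
```

A high-throughput system would replace the O(n²) pairwise loop with spatial indexing and screen positions propagated by an orbit model over many time steps, but the core question per pair is the same distance test.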