Video: Present and Future Leadership Computers at OLCF


In this video from the 2015 OLCF User Meeting, Buddy Bland from Oak Ridge presents: Present and Future Leadership Computers at OLCF. “As the home of Titan, the fastest supercomputer in the USA, OLCF has an exciting future ahead with the 2017 deployment of the Summit supercomputer. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes.”

GENCI to Collaborate with IBM in Race to Exascale


Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”

China May Develop Two 100 Petaflop Machines Within a Year


“Within the next 12 months, China expects to be operating not one but two 100 Petaflop computers, each containing (different) Chinese-made processors, and both coming on stream about a year before the United States’ 100 Petaflop machines being developed under the CORAL initiative. Ironically, the CPU for one machine appears very similar to a technology abandoned by the USA in 2007, and the US Government, through its export embargo, has encouraged China to develop its own accelerator for the other machine.”

Titan Supercomputer Powers the Future of Forecasting


Knowing how the weather will behave in the near future is indispensable for countless human endeavors. Now, researchers at ECMWF are leveraging the computational power of the Titan supercomputer at Oak Ridge to improve weather forecasting.

OpenCL for Performance


“OpenCL is a fairly new programming model designed to help programmers get the most out of a variety of processing elements in heterogeneous environments. Available benchmarks have demonstrated that excellent performance can be obtained on a wide variety of devices. Rather than locking an application into one specific accelerator, OpenCL lets applications run on a number of different architectures, each showing excellent speedups over a native (host CPU) implementation.”
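
To illustrate the portability described above, here is a minimal host-side sketch in C: a vector-add kernel is compiled from its source string at run time, so the same program can target whatever CPU, GPU, or other accelerator the platform exposes. This is a simplified example, not code from the article; error checking is omitted for brevity, and the kernel name (vadd) and problem size are illustrative.

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* Kernel source, compiled at run time for whichever device is selected. */
static const char *kernel_src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    const size_t n = 1024;
    float a[1024], b[1024], c[1024];
    for (size_t i = 0; i < n; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first available platform and device; production code would
       enumerate CPUs, GPUs, and other accelerators and choose among them. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source for this specific device -- the step
       that makes the same host program portable across architectures. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Device buffers, the input pair initialized from host memory. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                               n * sizeof(float), NULL, NULL);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    /* Launch one work-item per element, then read the result back. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c, 0, NULL, NULL);

    printf("c[10] = %f (expected 30.0)\n", c[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}

Because the kernel is compiled for whatever device clGetDeviceIDs returns, swapping CL_DEVICE_TYPE_DEFAULT for CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_CPU retargets the same host program without changes, which is the portability argument the article makes.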

PSC Retires Blacklight Supercomputer to Make Way for Bridges


The big memory “Blacklight” system at the Pittsburgh Supercomputing Center will be retired on August 15 to make way for the new “Bridges” supercomputer. “Built by HP, Bridges will feature multiple nodes with as much as 12 terabytes each of shared memory, equivalent to unifying the RAM in 1,536 high-end notebook computers. This will enable it to handle the largest memory-intensive problems in important research areas such as genome sequence assembly, machine learning and cybersecurity.”

Chips Evolve for Data Intensive Niches


“Data-centric workloads are growing in importance in high-performance computing and, in an industry that has been dominated by a handful of technologies for several years, this has led users to look for new technologies that better suit such jobs. The demand now is for low-power, high-memory, and I/O-intensive solutions, so there is a growing niche that can be addressed by solutions less focused on flops performance.”

Users Accelerate their Own Code at EuroHack


“Despite what the name “EuroHack” may lead people to believe, no external systems were hacked during the EuroHack workshop in Lugano. In actual fact, the aim of the event was for experts to design computer codes that would exploit computer architectures more efficiently.”

Video: AMD Firepro S9170 GPU Speeds HPC Applications


“The AMD FirePro S9170 server GPU can accelerate complex workloads in scientific computing, data analytics, and seismic processing, wielding an industry-leading 32 GB of memory. We designed the new offering for supercomputers to achieve massive compute performance while maximizing available power budgets.”

High Performance Computing in Defense Intelligence


Technological advancements in hardware and software products allow analysts to process larger amounts of data rapidly, freeing time to apply human judgment and experience to intelligence problems. This article examines a couple of the hardware advancements in HPC.