Scaling Hardware for In-Memory Computing

The two methods of scaling processors are distinguished by how the memory architecture is scaled and are known as scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
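As a rough illustration of the scale-up model described above (a minimal sketch, not taken from the article), the C/OpenMP code below sums an array that lives in a single shared address space, so every thread reads the same buffer directly. On a scale-out (distributed-memory) cluster, the same reduction would instead distribute slices of the array across nodes and combine partial sums explicitly, for example with MPI_Allreduce.

/* Minimal scale-up sketch: all threads share one globally visible buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 1 << 24;
    double *data = malloc(n * sizeof *data);   /* one buffer, visible to all threads */
    for (long i = 0; i < n; i++)
        data[i] = 1.0;

    double sum = 0.0;
    /* Every thread works on the same memory; no data is copied between systems.
     * On a scale-out cluster, each rank would hold its own slice of the array
     * and partial sums would be combined with an explicit MPI reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += data[i];

    printf("sum = %.0f (max threads: %d)\n", sum, omp_get_max_threads());
    free(data);
    return 0;
}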

Barry Bolding from Cray Shares Four Predictions for HPC in 2017

In this special guest feature from Scientific Computing World, Cray’s Barry Bolding gives some predictions for the supercomputing industry in 2017. “2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to drive performance improvements we’ll see processors becoming more and more power hungry.”

Supercomputing Sheds Light on Leaf Study

A new study led by a research scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants’ rate of growth and photosynthesis, among other traits. “More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models,” Keenan said.

Video: How an Exoplanet Makes Waves

In this video, a new NASA supercomputer simulation of the planet and debris disk around the nearby star Beta Pictoris reveals that the planet’s motion drives spiral waves throughout the disk, a phenomenon that greatly increases collisions among the orbiting debris. Patterns in the collisions and the resulting dust appear to account for many observed features that previous research has been unable to fully explain.

New Site Compares Docker, Singularity, Shifter, and Univa Grid Engine Container Edition

A new site developed by Tin H compares the HPC virtualization capabilities of Docker, Singularity, Shifter, and Univa Grid Engine Container Edition. “They bring the benefits of containers to the HPC world and some provide very similar features. The subtleties are in their implementation approach. MPI may be the place with the biggest difference.”

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on the choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”

Experts Weigh in on 2017 Artificial Intelligence Predictions

In this presentation from Nvidia, top AI experts from the world’s most influential companies weigh in on predicted advances for AI in 2017. “In 2017, intelligence will trump speed. Over the last several decades, nations have competed on speed, intent on building the world’s fastest supercomputer,” said Ian Buck, VP of Accelerated Computing at Nvidia. “In 2017, the race will shift. Nations of the world will compete on who has the smartest supercomputer, not solely the fastest.”

Submission Deadlines for ISC 2017 are Fast Approaching

Are you planning for ISC 2017? The deadlines for submissions are fast approaching. The conference takes place June 18 – 22, 2017 in Frankfurt, Germany. “Participation in these sessions and programs will help enrich your experience at the conference, not to mention provide exposure for your work to some of the most discerning HPC practitioners and business people in the industry. We also want to remind you that it’s the active participation of the community that helps make ISC High Performance such a worthwhile event for all involved.”

AMD Unveils Vega GPU Architecture with HBM Memory

Today AMD unveiled preliminary details of its forthcoming GPU architecture, Vega. Conceived and executed over five years, the Vega architecture enables new possibilities in PC gaming, professional design, and machine intelligence that traditional GPU architectures have not been able to address effectively. “It is incredible to see GPUs being used to solve gigabyte-scale data problems in gaming to exabyte-scale data problems in machine intelligence. We designed the Vega architecture to build on this ability, with the flexibility to address the extraordinary breadth of problems GPUs will be solving not only today but also five years from now. Our high-bandwidth cache is a pivotal disruption that has the potential to impact the whole GPU market,” said Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD.

Dell EMC Powers HPC at University of Connecticut

The University of Connecticut has partnered with Dell EMC and Intel to create a high performance computing cluster that students and faculty can use in their research. With this HPC cluster, UConn researchers can solve problems that are computationally intensive or involve massive amounts of data in a matter of days or hours, instead of weeks. The cluster, operated on the Storrs campus, features 6,000 CPU cores, a high-speed fabric interconnect, and a parallel file system. Since 2011, it has been used by over 500 researchers from each of the university’s schools and colleges for over 40 million hours of scientific computation.