Scaling Software for In-Memory Computing

“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”

Video: Tracing Ocean Salinity for Global Climate Models

In this visualization, ocean temperatures and salinity are tracked over the course of a year. Based on data from global climate models, these visualizations aid our understanding of the physical processes that create the Earth’s climate, and inform predictions about future changes in climate. “The water’s saltiness, or salinity, plays a significant role in this ocean heat engine, Harrison said. Salt makes the water denser, helping it to sink. As the atmosphere warms due to global climate change, melting ice sheets have the potential to release tremendous amounts of fresh water into the oceans.”

Dell EMC Powers Summit Supercomputer at CU Boulder

“The University of Colorado, Boulder supports researchers’ large-scale computational needs with their newly optimized high performance computing system, Summit. Summit is designed with advanced computation, network, and storage architectures to deliver accelerated results for a large range of HPC and big data applications. Summit is built on Dell EMC PowerEdge Servers, Intel Omni-Path Architecture Fabric and Intel Xeon Phi Knights Landing processors.”

A Decade of Multicore Parallelism with Intel TBB

While HPC developers worry about squeezing out the ultimate performance while running an application on dedicated cores, Intel TBB tackles a problem that HPC users never worry about: how can you make parallelism work well when you share the cores that you run upon? This is more of a concern when you run an application on a many-core laptop or workstation than on a dedicated supercomputer, because who knows what else will be running on those shared cores. Intel Threading Building Blocks reduces the delays caused by other applications by utilizing a task-stealing scheduler. This is the real magic of TBB.
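TBB itself is a C++ library, but the task-stealing idea is easy to sketch in a few lines of Python: each worker owns a double-ended queue of tasks, owners take fresh work from one end, and an idle worker steals old work from the other end of a busy worker's queue. This is a conceptual illustration of the scheduling policy only, not TBB's API or implementation.

```python
from collections import deque
import random

def run_with_stealing(task_lists):
    """Simulate work stealing: each worker owns a deque of tasks.
    Owners pop their newest task (LIFO, back of the deque), while
    an idle worker steals the oldest task (FIFO, front) from a victim."""
    queues = [deque(tasks) for tasks in task_lists]
    completed = [0] * len(queues)
    while any(queues):
        for worker, q in enumerate(queues):
            if q:
                q.pop()  # owner runs its newest task
                completed[worker] += 1
            else:
                # Idle worker: steal the oldest task from a non-empty victim.
                victims = [v for v in queues if v]
                if victims:
                    random.choice(victims).popleft()
                    completed[worker] += 1
    return completed

# Four workers with unbalanced initial loads; stealing evens out the work.
counts = run_with_stealing([list(range(20)), [], [], list(range(4))])
print(counts, sum(counts))  # all 24 tasks finish; the idle workers did some
```

Stealing from the opposite end of the deque is the key design choice: the owner keeps the freshest, cache-warm tasks for itself, while thieves take the oldest work.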

Exascale Computing: A Race to the Future of HPC

In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.

Understanding Cities through Computation, Data Analytics, and Measurement

“For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling.”

Scaling Hardware for In-Memory Computing

The two methods of scaling processors are distinguished by how the memory architecture is scaled, and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
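The quoted point about globally usable memory can be illustrated with a small sketch: in a scale-up (shared-memory) design, every worker reads the same in-memory array directly, so nothing is copied between nodes. A minimal Python version, using threads as a stand-in for processors sharing one address space:

```python
import threading

def scale_up_sum(data, n_workers=4):
    """Scale-up sketch: all workers operate on the *same* in-memory list;
    no data is copied between 'nodes' because memory is globally shared."""
    partial = [0] * n_workers
    chunk = (len(data) + n_workers - 1) // n_workers

    def work(i):
        # Each worker sums its own slice of the shared array in place.
        partial[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=work, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)

print(scale_up_sum(list(range(1000))))  # 499500
```

In a scale-out (distributed-memory) version of the same computation, each slice would first have to be copied to the node that processes it, which is exactly the overhead the quote highlights.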

Podcast: Engineering Practical Machine Learning Systems

In this episode of the This Week in Machine Learning podcast, Xavier Amatriain of Quora discusses the process of engineering practical machine learning systems. Amatriain is a former machine learning researcher who went on to lead the recommender systems team at Netflix, and is now the vice president of engineering at Quora, the Q&A site. “What the heck is a multi-arm bandit and how can it help us?”
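For readers wondering along with the hosts: a multi-arm bandit is an algorithm that balances exploring options with exploiting the best option seen so far. The epsilon-greedy strategy below is the simplest textbook version; the payout probabilities are made-up numbers standing in for, say, recommendation click-through rates.

```python
import random

def epsilon_greedy_bandit(payout_probs, pulls=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy multi-armed bandit: with probability epsilon pull a
    random arm (explore), otherwise pull the arm with the best observed
    mean reward (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(payout_probs)    # pulls per arm
    values = [0.0] * len(payout_probs)  # running mean reward per arm
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payout_probs))                         # explore
        else:
            arm = max(range(len(payout_probs)), key=values.__getitem__)    # exploit
        reward = 1.0 if rng.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return counts, total

# Three hypothetical options with different (unknown to the algorithm) payouts.
counts, total = epsilon_greedy_bandit([0.05, 0.10, 0.30])
print(counts)  # the 0.30 arm should end up with by far the most pulls
```

The appeal in a production setting is that the system learns which option is best while serving live traffic, rather than requiring a fixed A/B test before deployment.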

Video: Diversity and Inclusion in Supercomputing

Dr. Maria Klawe gave this Invited Talk at SC16. “Like many other computing research areas, women and other minority groups are significantly under-represented in supercomputing. This talk discusses successful strategies for significantly increasing the number of women and students of color majoring in computer science and explores how these strategies might be applied to supercomputing.”

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on the choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”