Peter Thompson from Rogue Wave Software presented this talk at the Argonne Training Program on Extreme-Scale Computing. “Purpose-built for applications using hundreds or thousands of cores, TotalView for HPC provides a set of tools that give scientific and academic developers unprecedented control over processes and thread execution, along with deep visibility into program states and data. By allowing the simultaneous debugging of many processes and threads in a single window, you get complete control over program execution: running, stepping, and halting line-by-line through code within a single thread or within arbitrary groups of processes or threads.”
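To make the per-rank debugging scenario concrete, here is a minimal MPI program in C (our own sketch, not code from the talk): each rank computes a partial sum over a disjoint slice of the data, which is exactly the kind of per-process state a parallel debugger like TotalView lets you halt on and compare across ranks.

```c
/* Minimal MPI example of the kind one might step through per-rank in a
 * parallel debugger (illustrative sketch, not code from the talk). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sums a disjoint slice of 1..1000 -- a natural spot to
     * set a breakpoint and inspect each rank's partial result. */
    long local = 0;
    for (int i = rank + 1; i <= 1000; i += size)
        local += i;

    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld (expected %d)\n", total, 1000 * 1001 / 2);

    MPI_Finalize();
    return 0;
}
```

On many systems TotalView is attached directly to the MPI launch, for example totalview mpirun -a -np 4 ./partial_sum, though the exact invocation depends on the MPI stack and site configuration.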
In a paper published today in Nature Geoscience, scientists at the Met Office have demonstrated significant advances in predicting the phases of the North Atlantic Oscillation (NAO) up to one year ahead. The NAO – a large-scale gradient in air pressure measured between low pressure around Iceland and high pressure around the Azores – is the primary driver of winter climate variability for Europe and North America.
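For context on what the "phases" of the NAO mean quantitatively: one widely used station-based formulation (the Hurrell index; the paper may define its NAO metric differently) takes the difference of normalized sea-level pressure anomalies between the southern and northern nodes:

```latex
% Station-based NAO index (Hurrell-style), shown for orientation only;
% p is seasonal-mean sea-level pressure, bars and sigmas are the
% climatological mean and standard deviation at each station.
\mathrm{NAO} = \frac{p_{\mathrm{Azores}} - \overline{p}_{\mathrm{Azores}}}{\sigma_{\mathrm{Azores}}}
             - \frac{p_{\mathrm{Iceland}} - \overline{p}_{\mathrm{Iceland}}}{\sigma_{\mathrm{Iceland}}}
```

A positive index (a stronger-than-usual pressure gradient) typically brings mild, wet, stormy winters to northern Europe; a negative index favors colder, calmer conditions.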
Over at CSCS, Simone Ulmer writes that the Swiss National Supercomputing Centre is turning twenty-five. First opened in 1991, CSCS supports users from Swiss and international institutions in their top-flight research and operates its computers as a service facility for research associations and for MeteoSwiss.
In this video from the Microsoft Ignite Conference, Tejas Karmarkar describes how to run your HPC Simulations on Microsoft Azure – with UberCloud container technology. “High performance computing applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, low-latency networking, parallel file systems, GPUs, and Linux. We show you how to run these engineering, research and scientific workloads in Microsoft Azure with performance equivalent to on-premises. We use customer case studies to illustrate the basic architecture and alternatives to help you get started with HPC in Azure.”
“While we often talk about the density advantages of containers, it’s the opposite approach that we use in the High Performance Computing world! Here, we use exactly 1 system container per node, giving it unlimited access to all of the host’s CPU, Memory, Disk, IO, and Network. And yet we can still leverage the management characteristics of containers — security, snapshots, live migration, and instant deployment to recycle each node in between jobs. In this talk, we’ll examine a reference architecture and some best practices around containers in HPC environments.”
In this video from the HPC Advisory Council Spain Conference, Addison Snell from Intersect360 Research looks back over the past 10 years of HPC and provides predictions for the next 10 years. Intersect360 Research just released their Worldwide HPC 2015 Total Market Model and 2016–2020 Forecast.
The HPC Advisory Council has posted their agenda for their upcoming China Conference. The event takes place Oct. 26 in Xi’an, China. “We invite you to join us on Wednesday, October 26th, in Xi’an for our annual China Conference. This year’s agenda will focus on Deep learning, Artificial Intelligence, HPC productivity, advanced topics and futures. Join fellow technologists, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High Performance Computing.”
This year at SC16 in Salt Lake City, Dr. Thomas Sterling from Indiana University will present “Runtime Systems Software for Future HPC: Opportunity or Distraction?” “As one of the SC16 Invited Talks, this presentation will provide a comprehensive review of driving challenges, strategies, examples of existing runtime systems, and experiences. One important consideration is the possible future role of advances in computer architecture to accelerate the likely mechanisms embodied within typical runtimes. The talk will conclude with suggestions of future paths and work to advance this possible strategy.”
“Today’s most advanced seismic survey datasets encompass many hundreds of terabytes, and gaining insight from this data lies squarely at the convergence of supercomputing and big data,” said Barry Bolding, chief strategy officer at Cray. “The Cray supercomputers allow PGS to quickly process this data into an accurate, clear image of what’s lying underneath the sea floor, through kilometers of varied geology. This is an extraordinarily complex computational challenge, and is where PGS excels. We’re thrilled PGS continues to rely on Cray supercomputers to power the next generation of seismic processing and imaging.”
In this podcast, the Radio Free HPC team looks at the new OpenCAPI interconnect standard. “Released this week by the newly formed OpenCAPI Consortium, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and can significantly improve server performance.”