“With the new SPICE system from SGI, we have seen a step-change in performance for our researchers and scientists doing post-processing of weather and climate data,” said Richard Bevan, Head of Operational Technology at the Met Office. “Tasks that used to take 1-2 days to complete are now done in a fraction of that time, allowing scientists to perform multiple runs in one day.”
In a paper published today in Nature Geoscience, scientists at the Met Office have demonstrated significant advances in predicting the phases of the North Atlantic Oscillation (NAO) up to one year ahead. The NAO – a large-scale gradient in air pressure measured between low pressure around Iceland and high pressure around the Azores – is the primary driver of winter climate variability for Europe and also influences North American winters.
Over at CSCS, Simone Ulmer writes that the Swiss National Supercomputing Centre is turning twenty-five. First opened in 1991, CSCS supports users from Swiss and international institutions in their top-flight research and operates computers as a service facility for research associations and MeteoSwiss.
Georgia Tech is taking on the challenge of moving computing past the end of Moore’s Law by standing up a new interdisciplinary research center known as CRNCH. “We knew that at some point physics would come into play. We hit that wall around 2005,” said Tom Conte, inaugural director of CRNCH and professor in Georgia Tech’s schools of Computer Science and Electrical and Computer Engineering.
Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully-selected partners to provide the underlying technology.”
Today the PASC17 Conference announced a track focused on Precision Medicine as a Special Topic for Emerging Domains. “Precision medicine, also referred to as personalized medicine, is an emerging domain that is adding tremendous value to the study of life sciences and medical treatment. Its requirements for rapid – and secure – processing, analysis and management of vast quantities of data in a wide range of different medical environments make precision medicine ideally suited to high performance computing.”
In this video from the HPC Advisory Council Spain Conference, Addison Snell from Intersect360 Research looks back over the past 10 years of HPC and provides predictions for the next 10 years. Intersect360 Research just released their Worldwide HPC 2015 Total Market Model and 2016–2020 Forecast.
The HPC Advisory Council has posted the agenda for its upcoming China Conference. The event takes place Oct. 26 in Xi’an, China. “We invite you to join us on Wednesday, October 26th, in Xi’an for our annual China Conference. This year’s agenda will focus on deep learning, artificial intelligence, HPC productivity, advanced topics and futures. Join fellow technologists, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High Performance Computing.”
“Today’s most advanced seismic survey datasets encompass many hundreds of terabytes, and gaining insight from this data lies squarely at the convergence of supercomputing and big data,” said Barry Bolding, chief strategy officer at Cray. “The Cray supercomputers allow PGS to quickly process this data into an accurate, clear image of what’s lying underneath the sea floor, through kilometers of varied geology. This is an extraordinarily complex computational challenge, and is where PGS excels. We’re thrilled PGS continues to rely on Cray supercomputers to power the next generation of seismic processing and imaging.”
In this podcast, the Radio Free HPC team looks at the new OpenCAPI interconnect standard. “Released this week by the newly formed OpenCAPI Consortium, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design, which puts the compute power closer to the data, removes inefficiencies in traditional system architectures to help eliminate system bottlenecks and can significantly improve server performance.”