Scott Callaghan from the Southern California Earthquake Center presented this talk as part of the Blue Waters Webinar Series. “I will present an overview of scientific workflows. I’ll discuss what the community means by “workflows” and what elements make up a workflow. We’ll talk about common problems that users might be facing, such as automation, job management, data staging, resource provisioning, and provenance tracking, and explain how workflow tools can help address these challenges. I’ll present a brief example from my own work with a series of seismic codes showing how using workflow tools can improve scientific applications.”
The Data Science with Spark Workshop addresses high-level parallelization for data analytics workloads using the Apache Spark framework. Participants will learn how to prototype with Spark and how to exploit large HPC machines like Piz Daint, the CSCS flagship system.
A new paper from IIT Hyderabad in India surveys cache partitioning techniques for multicore processors. Now accepted in ACM Computing Surveys 2017, the survey by Sparsh Mittal reviews 90 papers. “As the number of on-chip cores and memory demands of applications increase, judicious management of cache resources has become imperative. Cache partitioning is a promising approach to provide capacity benefits of shared cache with performance isolation of private caches. This paper reviews various cache partitioning techniques, e.g., strict/pseudo, static/dynamic, hardware/software-based, block/set/way-based, for improving performance/fairness/load-balancing/QoS, etc.”
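To make the idea concrete, here is a toy illustration (not taken from the survey) of way-based partitioning, one of the technique families mentioned above: each core is restricted to a fixed subset of the ways in a set-associative cache, so one core's misses cannot evict another core's blocks. The class and allocation numbers below are hypothetical, for illustration only.

```python
from collections import OrderedDict

class WayPartitionedCache:
    """Toy set-associative cache with way-based partitioning:
    each core owns a fixed number of ways in every set."""

    def __init__(self, num_sets, ways_per_core):
        # ways_per_core: {core_id: allocated_ways} -- the partition policy
        self.num_sets = num_sets
        self.ways_per_core = ways_per_core
        # One LRU-ordered dict per (core, set); partitioning isolates cores.
        self.sets = {core: [OrderedDict() for _ in range(num_sets)]
                     for core in ways_per_core}

    def access(self, core, addr):
        """Return True on hit, False on miss. Evictions happen only
        within the accessing core's own ways (performance isolation)."""
        idx = addr % self.num_sets
        tag = addr // self.num_sets
        lines = self.sets[core][idx]
        if tag in lines:
            lines.move_to_end(tag)       # refresh LRU position
            return True
        if len(lines) >= self.ways_per_core[core]:
            lines.popitem(last=False)    # evict this core's LRU block only
        lines[tag] = True
        return False

# Example partition of an 8-way cache: core 0 gets 2 ways, core 1 gets 6.
cache = WayPartitionedCache(num_sets=64, ways_per_core={0: 2, 1: 6})
cache.access(0, 100)         # cold miss
hit = cache.access(0, 100)   # hit: block now resident in core 0's ways
```

A real hardware scheme would enforce the same policy in the replacement logic of a shared cache; the strict/pseudo and static/dynamic distinctions in the survey concern how rigidly and how often such allocations are enforced and adjusted.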
Tokyo-based startup XTREME DESIGN recently announced it has raised $700K of funding in its pre-Series A round. Launched in early 2015, the startup’s XTREME DNA software automates the process of configuring, deploying, and monitoring virtual supercomputers on public clouds. To learn more, we caught up with the company’s founder, Naoki Shibata.
“Back in 2013 I wrote the following blog expressing my opinion that I doubted we would reach Exascale before 2020. However, recently it was announced that the world’s first Exascale supercomputer prototype will be ready by the end of 2017 (recently pushed back to early 2018), created by the Chinese. I did some digging and wanted to share my thoughts on the news.”
Today Atos announced record SPEC benchmark performance on its bullion x86 servers. Performed with a 16-socket configuration, this benchmark demonstrates that the high-end enterprise bullion x86 servers perform at exceptional levels and are thus the most powerful in the world in terms of speed and memory.
“Do you need to compress your software development cycles for services deployed at scale and accelerate your data-driven insights? Are you delivering solutions that automate decision making and model complexity using analytics and machine learning on Spark? Find out how a pre-integrated analytics platform that’s tuned for memory-intensive workloads and powered by the industry-leading interconnect will empower your data science and software development teams to deliver amazing results for your business. Learn how Cray’s supercomputing approach in an enterprise package can help you excel at scale.”
Researchers at SDSC have developed a new seismic software package with Intel Corporation that has enabled the fastest seismic simulation to date. SDSC’s ground-breaking performance of 10.4 Petaflops on earthquake simulations used 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at NERSC.
“IBM has invested over decades in growing the field of quantum computing, and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”