Video: Overview of Scientific Workflows

Scott Callaghan from the Southern California Earthquake Center presented this talk as part of the Blue Waters Webinar Series. “I will present an overview of scientific workflows. I’ll discuss what the community means by “workflows” and what elements make up a workflow. We’ll talk about common problems that users might be facing, such as automation, job management, data staging, resource provisioning, and provenance tracking, and explain how workflow tools can help address these challenges. I’ll present a brief example from my own work with a series of seismic codes showing how using workflow tools can improve scientific applications.”

Introduction to Data Science with Spark

The Data Science with Spark Workshop addresses high-level parallelization for data analytics workloads using the Apache Spark framework. Participants will learn how to prototype with Spark and how to exploit large HPC machines such as Piz Daint, the CSCS flagship system.

New Paper Surveys Cache Partitioning Techniques

A new paper from IIT Hyderabad in India surveys cache partitioning techniques for multicore processors. Accepted in ACM Computing Surveys 2017, the survey by Sparsh Mittal reviews 90 papers. “As the number of on-chip cores and memory demands of applications increase, judicious management of cache resources has become imperative. Cache partitioning is a promising approach to provide capacity benefits of shared cache with performance isolation of private caches. This paper reviews various cache partitioning techniques, e.g., strict/pseudo, static/dynamic, hardware/software-based, block/set/way-based, for improving performance/fairness/load-balancing/QoS, etc.”

Interview: XTREME DESIGN Automates HPC Cloud Configurations

Tokyo-based startup XTREME DESIGN recently announced it has raised $700K of funding in its pre-series A round. Launched in early 2015, the startup’s XTREME DNA software automates the process of configuring, deploying, and monitoring virtual supercomputers on public clouds. To learn more, we caught up with the company’s founder, Naoki Shibata.

Moving to Exascale – Closer Than We Think?

“Back in 2013 I wrote the following blog expressing my opinion that I doubted we would reach Exascale before 2020. However, recently it was announced that the world’s first Exascale supercomputer prototype will be ready by the end of 2017 (recently pushed back to early 2018), created by the Chinese. I did some digging and wanted to share my thoughts on the news.”

E4 Computer Engineering Showcases New Petascale OCP Platform

Today E4 Computer Engineering from Italy showcased a new PetaFlops-class Open Compute server with “remarkable energy efficiency” based on the IBM POWER architecture. “Finding new ways of making easily deployable and energy efficient HPC solutions is often a complex task, which requires a lot of planning, testing and benchmarking,” said Cosimo Gianfreda, CTO and Co-Founder of E4 Computer Engineering. “We are very lucky to work with great partners like Wistron, as their timing and accuracy means we have all the right conditions to have effective time-to-market. I strongly believe that the performance on the node, coupled with the power monitoring technology, will receive a wide acceptance from the HPC and Enterprise community.”

Atos Bullion Breaks Record for SPEC Performance

Today Atos announced record SPEC benchmark performance on its bullion x86 servers. Performed with a 16-socket configuration, this benchmark demonstrates that the high-end enterprise bullion x86 servers perform at exceptional levels, making them among the most powerful in the world in terms of speed and memory.

Compressing Software Development Cycles with Supercomputer-based Spark

“Do you need to compress your software development cycles for services deployed at scale and accelerate your data-driven insights? Are you delivering solutions that automate decision making & model complexity using analytics and machine learning on Spark? Find out how a pre-integrated analytics platform that’s tuned for memory-intensive workloads and powered by the industry leading interconnect will empower your data science and software development teams to deliver amazing results for your business. Learn how Cray’s supercomputing approach in an enterprise package can help you excel at scale.”

SDSC Seismic Simulation Software Exceeds 10 Petaflops on Cori Supercomputer

Researchers at SDSC have developed a new seismic software package with Intel Corporation that has enabled the fastest seismic simulation to date. SDSC’s ground-breaking performance of 10.4 Petaflops on earthquake simulations used 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at NERSC.

IBM to Build Commercially Available Quantum Computing Systems

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”