Call for Submissions: Arm Research Summit in Austin

The Arm Research Summit has issued its Call for Submissions. The event takes place September 15-18 in Austin, Texas. A one-of-a-kind forum for topics that are shaping our world, the Summit focuses on presentations and discussions and welcomes research at all stages of development and publication. The committee encourages submissions of early-stage, high-impact ideas seeking feedback, new […]

HPC + AI: Machine Learning Models in Scientific Computing

Steve Oberlin from NVIDIA gave this talk at the Stanford HPC Conference. “Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy.”
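The surrogate-modeling workflow Oberlin describes can be sketched in miniature as follows. This is only an illustrative toy, not the talk's actual method: the `simulate` function is a hypothetical stand-in for an expensive numeric solver, and a polynomial least-squares fit stands in for a trained machine learning model.

```python
import numpy as np

# Hypothetical stand-in for an expensive numeric simulation code.
def simulate(x):
    return np.sin(3 * x) + 0.5 * x

# 1. Run the "simulation" to generate a synthetic training data set.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 2.0, size=200)
y_train = simulate(x_train)

# 2. Train a cheap surrogate model on that data
#    (a polynomial fit stands in for an ML model here).
coeffs = np.polyfit(x_train, y_train, deg=7)
surrogate = np.poly1d(coeffs)

# 3. Use the surrogate to predict results without re-running the solver.
x_new = np.linspace(0.0, 2.0, 5)
pred = surrogate(x_new)
max_err = np.max(np.abs(pred - simulate(x_new)))
print(f"max surrogate error: {max_err:.4f}")
```

In a real HPC setting the training set would come from full simulation runs, the surrogate would be a neural network or similar model, and the payoff is that each surrogate evaluation costs a tiny fraction of a solver run.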

Seeking Seed Money? Creative Destruction Lab offers Incubator Streams for Quantum, Blockchain, and AI

The Creative Destruction Lab is now accepting applications for its 2019 Quantum Machine Learning and Blockchain-AI Incubator Streams. A seed-stage program for massively scalable, science-based companies, the CDL has a mission to enhance the prosperity of humankind. “The CDL Blockchain-AI Incubator Stream is a 10-month incubator program which gives blockchain founders personalized mentorship from blockchain thought leaders, successful tech entrepreneurs, scientists, economists and venture capitalists. Founders are also eligible for up to US$100K in investment, in exchange for equity.”

Video: Introduction to Intel Optane Data Center Persistent Memory

In this video from the 2019 Stanford HPC Conference, Usha Upadhyayula & Tom Krueger from Intel present: Introduction to Intel Optane Data Center Persistent Memory. For decades, developers had to balance data in memory for performance with data in storage for persistence. The emergence of data-intensive applications in various market segments is stretching the existing […]

Designing Convergent HPC and Big Data Software Stacks: An Overview of the HiBD Project

DK Panda from Ohio State University gave this talk at the 2019 Stanford HPC Conference. “This talk will provide an overview of challenges in designing convergent HPC and BigData software stacks on modern HPC clusters. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will be presented. Enhanced designs for these components to exploit HPC scheduler (SLURM), parallel file systems (Lustre) and NVM-based in-memory technology will also be presented. Benefits of these designs on various cluster configurations using the publicly available RDMA-enabled packages from the OSU HiBD project will be shown.”

Job of the Week: HPC Systems Administrator at the University of Chicago Center for Research Informatics

The University of Chicago Center for Research Informatics is seeking an HPC Systems Administrator in our Job of the Week. “This position will work with the Lead HPC Systems Administrator to build and maintain the BSD High Performance Computing environment, assist life-sciences researchers to utilize the HPC resources, work with stakeholders and research partners to successfully troubleshoot computational applications, handle customer requests, and respond to suggestions for improvements and enhancements from end-users.”

Exascale Computing Project updates Extreme-Scale Scientific Software Stack

Exascale computing is only a few years away. Today the Exascale Computing Project (ECP) put out the second release of its Extreme-Scale Scientific Software Stack. E4S Release 0.2 includes a subset of ECP ST software products and demonstrates the target approach for future delivery of the full ECP ST software stack. Also available are […]

XTREME-Stargate: The New Era of HPC Cloud Platforms

In this video from the 2019 Stanford HPC Conference, Naoki Shibata from XTREME-D presents: XTREME-Stargate: The New Era of HPC Cloud Platforms. “XTREME-D is an award-winning, funded Japanese startup whose mission is to make HPC cloud computing access easy, fast, efficient, and economical for every customer. The company recently introduced XTREME-Stargate, which was developed as a cloud-based bare-metal appliance specifically for high-performance computations, optimized for AI data analysis and conventional supercomputer usage.”

Interview: Why HPC is the Right Tool for Physics

Over at the SC19 Blog, Charity Plata continues the HPC is Now series of interviews with Enrico Rinaldi, a physicist and special postdoctoral fellow with the Riken BNL Research Center. This month, Rinaldi discusses why HPC is the right tool for physics and shares the best formula for garnering a Gordon Bell Award nomination. “Sierra and Summit are incredible machines, and we were lucky to be among the first teams to use them to produce new scientific results. The impact on my lattice QCD research was tremendous, as demonstrated by the Gordon Bell paper submission.”

Custom Atos Supercomputer to Speed Genome Analysis at CNAG-CRG in Barcelona

Atos will soon deploy a Bull supercomputer at the Centro Nacional de Análisis Genómico (CNAG-CRG) in Barcelona for large-scale DNA sequencing and analysis. To support the vast processing and computational demands of this work, CNAG-CRG worked with Atos to build a custom-made analytics platform that delivers new insights ten times faster than its previous HPC system. “Atos helped us to set up a robust platform to conduct in-depth high-performance data analytics on genome sequences, which is the perfect complement to our outstanding sequencing platform”, stated Ivo Gut, CNAG-CRG Director.