PASC19 to feature talk on Scalable High Performance Architectures with Embedded Photonics

The PASC19 conference will feature a Public Lecture by Keren Bergman on Flexibly Scalable High Performance Architectures with Embedded Photonics. The event takes place June 12-14 in Zurich, Switzerland, the week before ISC 2019. “Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities.”

Call for Submissions: Arm Research Summit in Austin

The Arm Research Summit has issued its Call for Submissions. The event takes place September 15-18 in Austin, Texas. As a one-of-a-kind forum for topics that are shaping our world, the Summit focuses on presentations and discussions, and welcomes research at all stages of development and publication. The committee encourages submissions of early-stage, high-impact ideas seeking feedback, new […]

25 Gigabit Ethernet Consortium Offers Low Latency Specification for 50GbE, 100GbE and 200GbE HPC Networks

Today the 25 Gigabit Ethernet Consortium announced the availability of a low-latency forward error correction (FEC) specification for 50 Gbps, 100 Gbps and 200 Gbps Ethernet networks. “Five years ago, only HPC developers cared about low latency, but today latency sensitivity has come to many more mainstream applications,” said Rob Stone, technical working group chair of the 25G Ethernet Consortium. “With this new specification, the consortium is improving the single largest source of packet processing latency, which improves the performance that high-speed Ethernet brings to these applications.”

HPC + AI: Machine Learning Models in Scientific Computing

Steve Oberlin from NVIDIA gave this talk at the Stanford HPC Conference. “Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy.”
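The workflow Oberlin describes, running a numeric code to generate training data, then substituting a cheap learned model for the expensive solver, can be sketched in a few lines. The example below is a toy illustration, not code from the talk: the `simulate` function stands in for a real simulation code, and polynomial regression stands in for the trained neural network.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Stand-in for an "expensive" numeric simulation (hypothetical example;
# in practice this would be a full physics code).
def simulate(x):
    return np.sin(3 * x) * np.exp(-0.3 * x)

# 1. Run the simulator to generate a synthetic training set.
x_train = np.linspace(0.0, 5.0, 200)
y_train = simulate(x_train)

# 2. Fit a cheap surrogate model on that data (polynomial least squares
#    here; a neural network plays this role in the workflow described).
surrogate = Polynomial.fit(x_train, y_train, deg=12)

# 3. Use the surrogate to predict results at new inputs, avoiding
#    further expensive simulation runs.
x_new = np.linspace(0.0, 5.0, 1000)
y_pred = surrogate(x_new)
err = np.max(np.abs(y_pred - simulate(x_new)))
print(f"max surrogate error: {err:.4f}")
```

The efficiency gain comes from step 3: once trained, the surrogate is orders of magnitude cheaper to evaluate than the simulator that produced its training data.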

Seeking Seed Money? Creative Destruction Lab offers Incubator Streams for Quantum, Blockchain, and AI

The Creative Destruction Lab is now accepting applications for its 2019 Quantum Machine Learning and Blockchain-AI Incubator Streams. A seed-stage program for massively scalable, science-based companies, the CDL has a mission to enhance the prosperity of humankind. “The CDL Blockchain-AI Incubator Stream is a 10-month incubator program which gives blockchain founders personalized mentorship from blockchain thought leaders, successful tech entrepreneurs, scientists, economists and venture capitalists. Founders are also eligible for up to US$100K in investment, in exchange for equity.”

Video: Introduction to Intel Optane Data Center Persistent Memory

In this video from the 2019 Stanford HPC Conference, Usha Upadhyayula & Tom Krueger from Intel present: Introduction to Intel Optane Data Center Persistent Memory. For decades, developers had to balance data in memory for performance with data in storage for persistence. The emergence of data-intensive applications in various market segments is stretching the existing […]
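The memory-versus-storage trade-off the talk opens with can be illustrated with a rough analogy: a memory-mapped file gives load/store-style access to data that survives the process, much as persistent memory blurs the line between memory and storage. This is only an analogy, not Optane-specific code; real persistent-memory programming would typically go through a library such as PMDK.

```python
import mmap
import os
import tempfile

# Create a small file to act as the "persistent" region.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # reserve one page

# Map it into memory and update it with ordinary byte stores.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0:5] = b"hello"   # write through the mapping, like a memory store
    mm.flush()           # make the update durable in the backing file
    mm.close()

# The data outlives the mapping: reopening the file sees the update.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

The point of the analogy is that the same region is both directly addressable and durable, which is the property persistent memory brings to the actual memory bus.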

Designing Convergent HPC and Big Data Software Stacks: An Overview of the HiBD Project

DK Panda from Ohio State University gave this talk at the 2019 Stanford HPC Conference. “This talk will provide an overview of challenges in designing convergent HPC and BigData software stacks on modern HPC clusters. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will be presented. Enhanced designs for these components to exploit HPC scheduler (SLURM), parallel file systems (Lustre) and NVM-based in-memory technology will also be presented. Benefits of these designs on various cluster configurations using the publicly available RDMA-enabled packages from the OSU HiBD project will be shown.”

Job of the Week: HPC Systems Administrator at the University of Chicago Center for Research Informatics

The University of Chicago Center for Research Informatics is seeking an HPC Systems Administrator in our Job of the Week. “This position will work with the Lead HPC Systems Administrator to build and maintain the BSD High Performance Computing environment, assist life-sciences researchers to utilize the HPC resources, work with stakeholders and research partners to successfully troubleshoot computational applications, handle customer requests, and respond to suggestions for improvements and enhancements from end-users.”

EuroHPC Takes First Steps Towards Exascale

The European High Performance Computing Joint Undertaking (EuroHPC JU) has launched its first calls for expressions of interest, to select the sites that will host the Joint Undertaking’s first supercomputers (petascale and precursor to exascale machines) in 2020. “Deciding where Europe will host its most powerful petascale and precursor to exascale machines is only the first step in this great European initiative on high performance computing,” said Mariya Gabriel, Commissioner for Digital Economy and Society. “Regardless of where users are located in Europe, these supercomputers will be used in more than 800 scientific and industrial application fields for the benefit of European citizens.”

Exascale Computing Project Updates Extreme-Scale Scientific Software Stack

Exascale computing is only a few years away. Today the Exascale Computing Project (ECP) released the second version of its Extreme-Scale Scientific Software Stack (E4S). The E4S Release 0.2 includes a subset of ECP ST software products and demonstrates the target approach for future delivery of the full ECP ST software stack. Also available are […]