HPE Accelerates Machine Learning Operationalization

Today HPE announced HPE ML Ops, a container-based software solution that supports the entire machine learning model lifecycle across on-premises, public cloud and hybrid cloud environments, with the aim of closing the gap between building ML models and operationalizing them. “HPE is closing this gap by addressing the entire ML lifecycle with its container-based, platform-agnostic offering – to support a range of ML operational requirements, accelerate the overall time to insights, and drive superior business outcomes.”

Insilico Medicine Brings GENTRL AI System to Open Source for Drug Discovery

Insilico Medicine has developed GENTRL, a new artificial intelligence system for drug discovery that dramatically accelerates the process from years to days. “By enabling the rapid discovery of novel molecules and by making GENTRL’s source code open source, we are ushering in new possibilities for the creation and discovery of new life-saving medicine for incurable diseases — and making such powerful technology more broadly accessible for the first time to the public.”

CSC Finland powers AI to help predict hybrid nanoparticle structures

Researchers in Finland have achieved a significant step forward in predicting atomic structures of hybrid nanoparticles. The work was carried out using supercomputing resources at CSC and the Barcelona Supercomputing Center as part of a PRACE project. “This is a significant step forward within the context of new interdisciplinary collaboration in our university. Applying artificial intelligence to challenging topics in nanoscience, such as structural predictions for new nanomaterials, will surely lead to new breakthroughs.”

The ABCI Supercomputer: World’s First Open AI Computing Infrastructure

Shinichiro Takizawa from AIST gave this talk at the MVAPICH User Group. “ABCI is the world’s first large-scale Open AI Computing Infrastructure, constructed and operated by AIST, Japan. It delivers 19.9 petaflops of HPL performance and the world’s fastest training time of 1.17 minutes for ResNet-50 training on the ImageNet dataset as of July 2019. In this talk, we focus on ABCI’s network architecture and the communication libraries available on ABCI, and show their performance and recent research achievements.”
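
The talk’s emphasis on communication libraries invites a concrete illustration. As a rough, hypothetical sketch (not code from ABCI or the talk), the mpi4py snippet below shows the allreduce pattern that MPI libraries such as MVAPICH optimize during data-parallel training; the random array is a stand-in for per-rank gradients:

```python
# Hypothetical sketch of the allreduce step in data-parallel training.
# Assumes mpi4py and NumPy are installed; launch with mpirun/mpiexec.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each rank produces its own "local gradients" (random stand-ins here).
local_grads = np.random.rand(1024)

# Sum the gradients across all ranks, then average them -- the collective
# whose performance depends on the interconnect and the MPI library.
summed = np.empty_like(local_grads)
comm.Allreduce(local_grads, summed, op=MPI.SUM)
averaged = summed / comm.Get_size()

if comm.Get_rank() == 0:
    print("averaged gradient norm:", np.linalg.norm(averaged))
```

Run with, for example, `mpirun -np 4 python allreduce_sketch.py`; in a real training job the averaged gradients would then be applied to the model weights on every rank.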

The Confluence of HPC and AI – Intel Customer Use Cases

Vikram Saletore from Intel gave this talk at the MVAPICH User Group. “Intel collaborates with customers and partners worldwide to build, accelerate, scale and deploy their AI applications on Intel-based HPC platforms. We share with you our insights on several customer AI use cases we have enabled, the orders-of-magnitude performance acceleration we have delivered via popular open-source software framework optimizations, and the best-known methods to advance the convergence of AI and HPC on Intel Xeon Scalable processor-based servers. We will also demonstrate how large-memory systems help real-world AI applications run efficiently.”

Interview: Knowledgebase is power for nuclear reactor developers

AI technologies are being used to help develop next-generation nuclear energy systems that could reduce our dependency on fossil fuels. In this special guest feature, Dawn Levy and Weiju Ren from ORNL explore the challenges and opportunities in sharing nuclear materials knowledge internationally. “A knowledgebase is more than a database. Data are just symbols representing observations or the products of observations. Knowledge is not only data, but also people’s understanding of the data.”

Podcast: HPC & AI Convergence Enables AI Workload Innovation

In this Conversations in the Cloud podcast, Esther Baldwin from Intel describes how the convergence of HPC and AI is driving innovation. “On the topic of HPC & AI converged clusters, there’s a perception that if you want to do AI, you must stand up a separate cluster, which Esther notes is not true. Existing HPC customers can do AI on their existing infrastructure with solutions like HPC & AI converged clusters.”

Containerized Convergence of Big Data and Big Compute

Christian Kniep gave this talk at HPCKP’19. “This talk will dissect the convergence by refreshing the audience’s memory on what containerization is about, segueing into why AI/ML workloads will eventually lead to fully fledged HPC applications and how this will inform the way forward. In conclusion, Christian will discuss the three main challenges in container technology, `Hardware Access`, `Data Access` and `Distributed Computing`, and how they can be tackled by the power of open source, while focusing on the first.”

Exascale CANDLE Project to Fight Against Cancer

The CANcer Distributed Learning Environment, or CANDLE, is a cross-cutting initiative of the Joint Design of Advanced Computing Solutions for Cancer collaboration and is supported by DOE’s Exascale Computing Project (ECP). CANDLE is building a scalable deep learning environment to run on DOE’s most powerful supercomputers. The goal is to have an easy-to-use environment that can take advantage of the full power of these systems to find the optimal deep-learning models for making predictions in cancer.
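
To make the model-search idea concrete, here is a minimal, hypothetical hyperparameter-sweep sketch in Python. It is not CANDLE code; the train_and_score function is a placeholder for the real training and validation a framework like CANDLE would distribute across a supercomputer:

```python
# Hypothetical sketch of a hyperparameter sweep -- the general pattern behind
# searching for an optimal deep-learning model. Not CANDLE code.
import itertools
import random

def train_and_score(learning_rate, hidden_units, dropout):
    """Placeholder: a real pipeline would train a model on these settings
    and return a validation metric. Here we just return a random score."""
    random.seed(hash((learning_rate, hidden_units, dropout)) % (2**32))
    return random.random()

search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [128, 256, 512],
    "dropout": [0.0, 0.2, 0.5],
}

best_score, best_config = float("-inf"), None
for values in itertools.product(*search_space.values()):
    config = dict(zip(search_space.keys(), values))
    score = train_and_score(**config)
    if score > best_score:
        best_score, best_config = score, config

print("best configuration:", best_config, "score:", round(best_score, 3))
```

In practice the sweep itself would be parallelized, with each candidate configuration trained on its own node or accelerator.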

Intel Talks at Hot Chips gear up for “AI Everywhere”

Today at Hot Chips 2019, Intel revealed new details of its upcoming high-performance AI accelerators: the Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O. insideHPC has all the details here, in one place.