
Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan

Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”
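If the totals cited in the report are accurate, the implied per-node layout is easy to check with a little arithmetic (a sketch based on the reported figures, not an official system specification):

```python
# Back-of-the-envelope check of the per-server configuration implied by
# the totals cited in the Nikkei report (assumed figures, not a confirmed spec).
servers = 1088
cpus_total = 2176   # Intel Xeon processors
gpus_total = 4352   # NVIDIA GPUs

cpus_per_server = cpus_total // servers
gpus_per_server = gpus_total // servers

# The totals divide evenly, suggesting a uniform dual-socket,
# quad-GPU node design.
print(f"{cpus_per_server} CPUs and {gpus_per_server} GPUs per server")
```

The even division (2 CPUs and 4 GPUs per server) is consistent with the dense GPU node designs commonly used for deep learning clusters.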

Video: Scientel Runs Record Breaking Calculation on Owens Cluster at OSC

In this video, Norman Kutemperor from Scientel describes how his company ran a record-setting big data problem on the Owens supercomputer at OSC.

“The Ohio Supercomputer Center recently displayed the power of its new Owens Cluster by running the single largest-scale calculation in the Center’s history. Scientel IT Corp used 16,800 cores of the Owens Cluster on May 24 to test database software optimized to run on supercomputer systems. The seamless run created 1.25 Terabytes of synthetic data.”

Exploring Evolutionary Relationships through CIPRES

Researchers are exploring the Tree of Life with the help of the CIPRES portal at the San Diego Supercomputer Center. “As a community-built resource, CIPRES addresses what the scientists really want and need to do in the real world of research,” said Mishler. “Aside from increasing our understanding of the evolutionary relationships of this planet’s diverse range of species, the research also has yielded results of critical importance to the health and welfare of humans.”

Dr. Marius Stan Presents: Uncertainty of Thermodynamic Data – Humans and Machines

Marius Stan from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. Famous for his part-time acting role on the Breaking Bad TV show, Marius Stan is a physicist and a chemist interested in non-equilibrium thermodynamics, heterogeneity, and multi-scale computational science for energy applications. The goal of his research is to discover or design materials, structures, and device architectures for nuclear energy and energy storage.

Call For Research Papers: ISC 2018

ISC 2018 has issued their Call for Research Papers. “Submissions are now open for the ISC 2018 conference research paper sessions, which aim to provide first-class opportunities for engineers and scientists in academia, industry, and government to present and discuss issues, trends, and results that will shape the future of high performance computing. Submissions will be accepted through Dec. 22, 2017. The research paper sessions will be held from Monday, June 25, through Wednesday, June 27, 2018.”

Argonne’s Data Science Program Doubles Down with New Projects

Today Argonne announced that the ALCF Data Science Program (ADSP) has awarded computing time to four new projects, bringing the total number of ADSP projects for 2017-2018 to eight. All four of the program’s inaugural projects were also renewed. “The new project award recipients include an industry-based deep learning project; a national laboratory-based cosmology workflow project; and two university-based projects: one that uses machine learning for materials discovery, and a deep-learning computer science project.”

Take Our HPC & AI Survey and Win an Echo Show Device

The rise of AI could potentially spur huge growth for the High Performance Computing market, but what kinds of results are your peers already getting right now? There is one way to find out: by taking our HPC & AI Survey. In return, we’ll send you a free report with the results and enter your name in a drawing to win one of two Echo Show devices with Amazon Alexa technology.

A Vision for Exascale: Simulation, Data and Learning

Rick Stevens gave this talk at the recent ATPESC training program. “The ATPESC program provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training computational scientists typically receive through formal education or other shorter courses.”

How Can We Bring Apps to Racks?

In this special guest feature, Dr. Rosemary Francis from Ellexus describes why the customized nature of HPC is not a sustainable path forward for the next generation. “The downside is that many of our systems and tools are inaccessible to non-expert users. For example, deep learning is bringing more and more scientists closer to HPC, but while they bring their knowledge, they also bring their high expectations for what they believe IT can do and not necessarily an understanding of how it works.”

Multiscale Dataflow Computing: Competitive Advantage at the Exascale Frontier

“This talk will explain the motivation behind dataflow computing to escape the end of frequency scaling in the push to exascale machines, introduce the Maxeler dataflow ecosystem including MaxJ code and DFE hardware, and demonstrate the application of dataflow principles to a specific HPC software package (Quantum ESPRESSO).”