
ESnet’s Science DMZ Could Help Transfer and Protect Medical Research Data

The Science DMZ architecture, developed for moving large data sets quickly and securely, could be adapted to meet the needs of the medical research community. “Like other sciences, medical research is generating increasingly large datasets as doctors track health trends, the spread of diseases, genetic causes of illness and the like. Effectively using this data for efforts ranging from stopping the spread of deadly viruses to creating precision medicine treatments for individuals will be greatly accelerated by the secure sharing of the data, while also protecting individual privacy.”

Introducing the European EXDCI initiative for HPC

“The European Extreme Data & Computing Initiative (EXDCI) objective is to support the development and implementation of a common strategy for the European HPC Ecosystem. One of the main goals of the meeting in Bologna was to set up a roadmap for future developments, and for other parties who would like to participate in HPC research.”

Video: Revolution in Computer and Data-enabled Science and Engineering

Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. The theme of his talk centers around the need for interdisciplinary research. “Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice.”

Radio Free HPC Previews the SC17 Plenary on Smart Cities

In this podcast, the Radio Free HPC team looks at Smart Cities. As the featured topic this year at the SC17 Plenary, the Smart Cities initiative looks to improve the quality of life for residents using urban informatics and other technologies to improve the efficiency of services.

Video: Scientel Runs Record Breaking Calculation on Owens Cluster at OSC

In this video, Norman Kutemperor from Scientel describes how his company ran a record-setting big data problem on the Owens supercomputer at OSC.

“The Ohio Supercomputer Center recently displayed the power of its new Owens Cluster by running the single-largest scale calculation in the Center’s history. Scientel IT Corp used 16,800 cores of the Owens Cluster on May 24 to test database software optimized to run on supercomputer systems. The seamless run created 1.25 Terabytes of synthetic data.”

Multiscale Dataflow Computing: Competitive Advantage at the Exascale Frontier

“This talk will explain the motivation behind dataflow computing to escape the end of frequency scaling in the push to exascale machines, introduce the Maxeler dataflow ecosystem including MaxJ code and DFE hardware, and demonstrate the application of dataflow principles to a specific HPC software package (Quantum ESPRESSO).”

DeepL Deploys 5.1 Petaflops Supercomputer at Verne Global in Iceland

Today Verne Global announced that DeepL has deployed a 5.1 petaFLOPS supercomputer at its campus in Iceland. The system is designed to support DeepL’s artificial intelligence–driven, neural network translation service, which is viewed by many as the world’s most accurate and natural-sounding machine translation service. “We are seeing growing interest from companies using AI tools, such as deep neural network (DNN) applications, to revolutionize how they move their businesses forward, create change, and elevate how we work, live and communicate.”

Oak Ridge Turns to Deep Learning for Big Data Problems

The Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project aims to use deep learning to assist researchers in making sense of massive datasets produced at the world’s most sophisticated scientific facilities. Deep learning is an area of machine learning that uses artificial neural networks to enable self-learning devices and platforms. The team, led by ORNL’s Thomas Potok, includes Robert Patton, Chris Symons, Steven Young and Catherine Schuman.

NSF Announces $17.7 Million Funding for Data Science Projects

Today the National Science Foundation (NSF) announced $17.7 million in funding for 12 Transdisciplinary Research in Principles of Data Science (TRIPODS) projects, which will bring together the statistics, mathematics and theoretical computer science communities to develop the foundations of data science. Conducted at 14 institutions in 11 states, these projects will promote long-term research and training activities in data science that transcend disciplinary boundaries. “Data is accelerating the pace of scientific discovery and innovation,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering (CISE). “These new TRIPODS projects will help build the theoretical foundations of data science that will enable continued data-driven discovery and breakthroughs across all fields of science and engineering.”

SC17 Invited Talk Preview: High Performance Machine Learning

Over at the SC17 Blog, Brian Ban begins his series of SC17 session previews with a look at a talk on high performance machine learning. “Deep learning, using GPU clusters, is a clear example, but many machine learning algorithms also need iteration, and HPC communication and optimizations.”