DeepL Deploys 5 Petaflop Supercomputer at Verne Global in Iceland

Today Verne Global announced that DeepL has deployed its 5.1 petaFLOPS supercomputer at its campus in Iceland. The system is designed to support DeepL’s artificial intelligence-driven, neural network translation service, which is viewed by many as the world’s most accurate and natural-sounding machine translation service. “We are seeing growing interest from companies using AI tools, such as deep neural network (DNN) applications, to revolutionize how they move their businesses forward, create change, and elevate how we work, live and communicate.”

Oak Ridge Turns to Deep Learning for Big Data Problems

The Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project aims to use deep learning to assist researchers in making sense of massive datasets produced at the world’s most sophisticated scientific facilities. Deep learning is an area of machine learning that uses artificial neural networks to enable self-learning devices and platforms. The team, led by ORNL’s Thomas Potok, includes Robert Patton, Chris Symons, Steven Young and Catherine Schuman.
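As a purely illustrative aside (this is not ASCEND or ORNL code, and the data and dimensions are made up), the sketch below shows the basic self-learning loop an artificial neural network performs: a forward pass, a loss gradient, and weight updates via backpropagation.

```python
# Illustrative only -- a tiny two-layer neural network trained with plain NumPy
# to show the "self-learning" loop that deep learning builds on.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                          # toy input data
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy labels

W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = sigmoid(h @ W2 + b2)                  # predicted probability
    grad_p = (p - y) / len(X)                 # gradient of cross-entropy loss
    grad_h = (grad_p @ W2.T) * (1 - h**2)     # backpropagate through tanh
    W2 -= lr * h.T @ grad_p; b2 -= lr * grad_p.sum(axis=0)
    W1 -= lr * X.T @ grad_h; b1 -= lr * grad_h.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```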

NSF Announces $17.7 Million Funding for Data Science Projects

Today the National Science Foundation (NSF) announced $17.7 million in funding for 12 Transdisciplinary Research in Principles of Data Science (TRIPODS) projects, which will bring together the statistics, mathematics and theoretical computer science communities to develop the foundations of data science. Conducted at 14 institutions in 11 states, these projects will promote long-term research and training activities in data science that transcend disciplinary boundaries. “Data is accelerating the pace of scientific discovery and innovation,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering (CISE). “These new TRIPODS projects will help build the theoretical foundations of data science that will enable continued data-driven discovery and breakthroughs across all fields of science and engineering.”

SC17 Invited Talk Preview: High Performance Machine Learning

Over at the SC17 Blog, Brian Ban begins his series of SC17 session previews with a look at a talk on high performance machine learning. “Deep learning, using GPU clusters, is a clear example but many Machine Learning algorithms also need iteration, and HPC communication and optimizations.”

SC17 Panel Preview: How Serious Are We About the Convergence Between HPC and Big Data?

SC17 will feature a panel discussion entitled How Serious Are We About the Convergence Between HPC and Big Data? “The possible convergence between the third and fourth paradigms confronts the scientific community with both a daunting challenge and a unique opportunity. The challenge resides in the requirement to support both heterogeneous workloads with the same hardware architecture. The opportunity lies in creating a common software stack to accommodate the requirements of scientific simulations and big data applications productively while maximizing performance and throughput.”

RCE Podcast Looks at NetCDF Network Common Data Format

In this RCE Podcast, Brock Palen and Jeff Squyres speak with the authors of NetCDF, a self-describing data format for array-oriented scientific data. “Unidata’s Network Common Data Form (netCDF) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. It is also a community standard for sharing scientific data.”
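For readers unfamiliar with the format, here is a minimal sketch using the netCDF4 Python bindings (an assumed, commonly used wrapper around the Unidata C library; the file and variable names are illustrative only), showing how a self-describing, array-oriented dataset is written and read back.

```python
# Minimal sketch using the netCDF4 Python bindings (assumed installed,
# e.g. via `pip install netCDF4`); names here are illustrative only.
from netCDF4 import Dataset
import numpy as np

# Create a self-describing file: dimensions, variables, and metadata
# all travel together inside the same .nc file.
with Dataset("example.nc", "w", format="NETCDF4") as ds:
    ds.description = "Toy array-oriented dataset"
    ds.createDimension("time", None)        # unlimited dimension
    ds.createDimension("station", 3)
    temp = ds.createVariable("temperature", "f4", ("time", "station"))
    temp.units = "K"                         # attributes make the data self-describing
    temp[0, :] = np.array([280.1, 281.5, 279.9], dtype="f4")

# Read it back without knowing the layout in advance.
with Dataset("example.nc", "r") as ds:
    t = ds.variables["temperature"]
    print(t.units, t[:].shape)               # -> K (1, 3)
```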

DOE Helps Tackle Biology’s Big Data

Six proposals have been selected to participate in a new partnership between two U.S. Department of Energy (DOE) user facilities through the “Facilities Integrating Collaborations for User Science” (FICUS) initiative. The expertise and capabilities available at the DOE Joint Genome Institute (JGI) and the National Energy Research Scientific Computing Center (NERSC) – both at the Lawrence Berkeley National Laboratory (Berkeley Lab) – will help researchers explore the wealth of genomic and metagenomic data generated worldwide through access to supercomputing resources and computational science experts to accelerate discoveries.

Video: ddR – Distributed Data Structures in R

“A few weeks ago, we revealed ddR (Distributed Data-structures in R), an exciting new project started by R-Core, Hewlett Packard Enterprise, and others that provides a fresh new set of computational primitives for distributed and parallel computing in R. The package sets the seed for what may become a standardized and easy way to write parallel algorithms in R, regardless of the computational engine of choice.”

Teradata Acquires StackIQ

Today Teradata announced the acquisition of StackIQ, developer of one of the industry’s fastest bare-metal software provisioning platforms, which has managed the deployment of cloud and analytics software on millions of servers in data centers around the globe. The deal will leverage StackIQ’s expertise in open source software and large-cluster provisioning to simplify and automate the deployment of Teradata Everywhere. Giving customers the speed and flexibility to deploy Teradata solutions across hybrid cloud environments allows them to innovate quickly and build new analytical applications for their business. “Teradata prides itself on building and investing in solutions that make life easier for our customers,” said Oliver Ratzesberger, Executive Vice President and Chief Product Officer for Teradata. “Only the best, most innovative and applicable technology is added to our ecosystem, and StackIQ delivers with products that excel in their field. Adding StackIQ technology to IntelliFlex, IntelliBase and IntelliCloud will strengthen our capabilities and enable Teradata to redefine how systems are deployed and managed globally.”

Alan Turing Institute to Acquire Cray Urika-GX Graph Supercomputer

Today Cray announced that the company will provide a Cray Urika-GX system to the Alan Turing Institute. “The rise of data-intensive computing – where big data analytics, artificial intelligence, and supercomputing converge – has opened up a new domain of real-world, complex analytics applications, and the Cray Urika-GX gives our customers a powerful platform for solving this new class of data-intensive problems.”