Converging Workflows Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.

Podcast: Seeing the Black Hole with Big Data

In this podcast, the Radio Free HPC team discusses how the news of the first image of an actual black hole raises interesting issues in HPC land. “The real point: the daunting 1.75 PB of raw data from each telescope meant a lot of physical drives that had to be flown to the data center. Henry leads a discussion about the race between bandwidth and data size.”
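That race is easy to make concrete with back-of-the-envelope arithmetic. The Python sketch below estimates how long 1.75 PB would occupy a network link; the link speeds are illustrative assumptions, not figures from the podcast, while a crate of drives takes roughly the same few days to ship regardless of volume.

```python
# Back-of-the-envelope only: network transfer time for 1.75 PB at assumed link speeds.
PETABYTE = 1e15  # bytes (decimal petabyte)

def transfer_days(data_bytes: float, link_gbps: float) -> float:
    """Days needed to push data_bytes through a link of link_gbps gigabits per second."""
    seconds = (data_bytes * 8) / (link_gbps * 1e9)
    return seconds / 86400

data = 1.75 * PETABYTE  # raw data from one telescope, per the podcast
for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gb/s link: {transfer_days(data, gbps):7.1f} days")
# Air freight for the drives: a few days door to door (assumed), independent of data size.
```

Under these assumptions, even a sustained 10 Gb/s link would be tied up for more than two weeks by a single telescope’s haul, which is why shipping drives still wins at this scale.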

NEC-X Opens Vector Engine Data Acceleration Center in Silicon Valley

Today NEC-X launched the Vector Engine Data Acceleration Center (VEDAC) at its Silicon Valley facility. This new VEDAC is one of the company’s many offerings to innovators, makers and change agents. The NEC-X organization is focused on fostering big data innovations using NEC’s emerging technologies while tapping into Silicon Valley’s rich ecosystem. “We are gratified to see the developing innovations that are taking advantage of the cutting-edge technologies from NEC’s laboratories.”

Wolfram Research Releases Mathematica Version 12 for Advanced Data Science

Today Wolfram Research released Version 12 of Mathematica for advanced data science and computational discovery. “After three decades of continuous R&D since the introduction of Mathematica Version 1.0, Wolfram Research has released its most powerful software offering with Version 12 of Wolfram Language, the symbolic backbone of Mathematica. The latest version includes over a thousand new functions and features for multiparadigm data science, automated machine learning, and blockchain manipulation for modern software development and technical computing.”

Arm A64FX and Post-K: A Game-Changing CPU & Supercomputer

Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe. “Post-K is the flagship next-generation national supercomputer being developed by RIKEN and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64FX many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community.”

Inspur to Offer BeeGFS Storage Systems for HPC and AI Clusters

Today Inspur announced that it will offer integrated storage solutions with the BeeGFS file system. BeeGFS, a leading parallel cluster file system with a distributed metadata architecture, has gained global acclaim for its usability, scalability and powerful metadata processing functions. “BeeGFS has unique advantages in terms of usability, flexibility and performance,” said Liu Jun, General Manager of AI&HPC, Inspur. “It can easily adapt to the different business needs of HPC and AI users. The cooperation between Inspur and ThinkParQ will provide our HPC and AI cluster solutions users with an integrated BeeGFS system and a range of high-quality services, helping them to improve efficiency with BeeGFS.”

Agenda Posted for MSST Mass Storage Conference in May

The Massive Storage Systems and Technology Conference (MSST) has posted its preliminary speaker agenda. Keynote speakers include Margo Seltzer and Mark Kryder, along with a five-day agenda of invited and peer-reviewed research talks and tutorials, May 20-24 in Santa Clara, California. “MSST 2019 will focus on current challenges and future trends in distributed storage system technologies,” said Meghan Wingate McClelland, Communications Chair of MSST.

Video: The Game-Changing Post-K Supercomputer for HPC, Big Data, and AI

Satoshi Matsuoka from RIKEN gave this talk at the Rice Oil & Gas Conference. “Rather than focusing on double-precision flops, which are of lesser utility, Post-K, especially its A64FX processor and the Tofu-D network, is designed to sustain extreme bandwidth on realistic applications, including oil and gas workloads such as seismic wave propagation, CFD, and structural codes, besting its rivals by several factors in measured performance. Post-K is slated to perform 100 times faster on some key applications compared to its predecessor, the K computer, and will also likely be the premier big data and AI/ML infrastructure.”

Video: A Fast, Scalable HPC Engine for Data Ingest

David Wade from Integral Engineering gave this talk at the Stanford HPC Conference. “In this talk, a design is sketched for an engine that ingests data from the IoT at massive scale into a cluster for analysis, storage and transformation, using COTS hardware and software techniques from High Performance Computing.”
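The talk itself does not publish code; purely as illustration, here is a minimal sketch of one common COTS ingest pattern, a batching producer/consumer queue, in which sensor readings are buffered as they arrive and flushed to storage in large batches. The batch size, sink, and record format are all assumptions made for this example, not details from the talk.

```python
# Illustrative batched ingest pattern (not code from the talk): readings are queued
# as they arrive and written out in fixed-size batches, so the cluster sees a few
# large writes instead of many tiny ones.
import queue
import threading
import time

BATCH_SIZE = 1000  # assumed batch size

def flush(batch: list) -> None:
    # Placeholder for the real write path (e.g. a parallel file system or object store).
    print(f"flushed {len(batch)} records")

def ingest_worker(q: queue.Queue, stop: threading.Event) -> None:
    batch = []
    while not stop.is_set() or not q.empty():
        try:
            batch.append(q.get(timeout=0.1))
        except queue.Empty:
            continue
        if len(batch) >= BATCH_SIZE:
            flush(batch)
            batch = []
    if batch:
        flush(batch)  # write out whatever remains on shutdown

q: queue.Queue = queue.Queue()
stop = threading.Event()
worker = threading.Thread(target=ingest_worker, args=(q, stop))
worker.start()
for i in range(2500):  # simulated IoT readings
    q.put({"sensor": i % 10, "value": i, "ts": time.time()})
stop.set()
worker.join()
```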

Scalable Machine Learning: The Role of Stratified Data Sharding

Srinivasan Parthasarathy from Ohio State University gave this talk at the Stanford HPC Conference. “With the increasing popularity of structured data stores, social networks and Web 2.0 and 3.0 applications, complex data formats, such as trees and graphs, are becoming ubiquitous. I will discuss a critical element at the heart of this challenge: the sharding, placement, storage and access of such tera- and peta-scale data.”
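No code accompanies the abstract. As a point of reference only, the sketch below shows the naive baseline that stratified sharding schemes aim to improve on: hash-partitioning graph vertices across shards. The hash scheme, shard count, and toy graph are all assumptions for illustration, not the method described in the talk.

```python
# Naive baseline only (not Parthasarathy's method): hash-partition vertices across
# shards. Stratified schemes refine this by balancing load and preserving locality,
# which plain hashing ignores.
import hashlib

def shard_of(vertex_id: str, num_shards: int) -> int:
    """Map a vertex to a shard with a stable hash, so placement is reproducible."""
    digest = hashlib.sha1(vertex_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]  # toy graph
num_shards = 4

# Place each edge on its source vertex's shard; edges whose endpoints land on
# different shards are the communication cost smarter placement tries to minimize.
placement = {}
for src, dst in edges:
    placement.setdefault(shard_of(src, num_shards), []).append((src, dst))

for shard, assigned in sorted(placement.items()):
    print(f"shard {shard}: {assigned}")
```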