Video: NVIDIA Magnum IO Moves Big Data Faster than Previously Possible

Today NVIDIA introduced NVIDIA Magnum IO, a suite of software to help data scientists and AI and high performance computing researchers process massive amounts of data in minutes, rather than hours. “Optimized to eliminate storage and input/output bottlenecks, Magnum IO delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes when working with massive datasets to carry out complex financial analysis, climate modeling and other HPC workloads.”

HPE and Cray Unveil HPC and AI Solutions Optimized for the Exascale Era

Today HPE announced it will deliver the industry’s most comprehensive HPC and AI portfolio for the exascale era, which is characterized by explosive data growth and new converged workloads such as HPC, AI, and analytics. “The addition of Cray, Inc., which HPE recently acquired, bolsters HPE’s HPC and AI solutions to now encompass an end-to-end supercomputing architecture across compute, interconnect, software, storage and services, delivered on premises, hybrid or as-a-Service. Now every enterprise can leverage the same foundational HPC technologies that power the world’s fastest systems, and integrate them into their data centers to unlock insights and fuel new discovery.”

DDN Launches New Data Management Capabilities and Platforms for AI and HPC

Today DDN announced new infrastructure and multicloud solutions ahead of its return to SC19 in Denver. “We are adding serious data management, collaboration and security capabilities to the most scalable file solution in the world. EXA5 gives you mission critical availability whilst consistently performing at scale,” said James Coomer, senior vice president of product, DDN. “Our 20 years’ experience in delivering the most powerful at-scale data platforms is all baked into EXA5. We outperform everything on the market and now we do so with unmatched capability.”

Panasas to Showcase “Fastest HPC Parallel File System at any Price-Point” at SC19

“The next generation of PanFS on ActiveStor Ultra offers unlimited performance scaling in 4 GB/s building blocks, utilizing multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring consistently high performance regardless of workload.”
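The tiering policy described above can be illustrated with a short sketch: route metadata to low-latency NVMe, small files to high-IOPS SSDs, and large files to high-bandwidth HDDs. The tier names, the 64 KiB small-file cutoff, and the place() helper below are assumptions chosen to mirror the description, not Panasas parameters; PanFS’s actual placement logic is internal to the file system.

```python
# Illustrative sketch only: tier names and the 64 KiB threshold are assumptions,
# not PanFS internals.
from dataclasses import dataclass

SMALL_FILE_THRESHOLD = 64 * 1024  # hypothetical cutoff for "small" files

@dataclass
class StorageTiers:
    nvme_metadata: list   # low-latency NVMe SSDs for metadata
    ssd_small: list       # high-IOPS SSDs for small files
    hdd_large: list       # high-bandwidth HDDs for large files

def place(obj_kind: str, size_bytes: int, tiers: StorageTiers) -> list:
    """Route an object to a storage tier based on its kind and size."""
    if obj_kind == "metadata":
        return tiers.nvme_metadata
    if size_bytes < SMALL_FILE_THRESHOLD:
        return tiers.ssd_small
    return tiers.hdd_large

if __name__ == "__main__":
    tiers = StorageTiers(nvme_metadata=["nvme0"], ssd_small=["ssd0"], hdd_large=["hdd0", "hdd1"])
    print(place("metadata", 512, tiers))    # -> ['nvme0']
    print(place("file", 4 * 1024, tiers))   # -> ['ssd0']
    print(place("file", 1 << 30, tiers))    # -> ['hdd0', 'hdd1']
```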

Cortical.io Demonstrates Real-time Semantic Supercomputing for NLU

Today Cortical.io announced the debut of a new class of high-performance enterprise applications based on “Semantic Supercomputing.” Semantic Supercomputing combines Cortical.io’s neuroscience-inspired, AI-based NLU software with hardware acceleration to create new solutions that understand and process streams of natural language content at massive scale in real time. “We are taking the concept of supercomputing to the next level with the introduction of Semantic Supercomputing and the ability to deliver real-time processing of semantic content.”

Department of Energy to Showcase World-Leading Science at SC19

The DOE’s national laboratories will be showcased at SC19 next week in Denver, CO. “Computational scientists from DOE laboratories have been involved in the conference since it began in 1988 and this year’s event is no different. Experts from the 17 national laboratories will be sharing a booth featuring speakers, presentations, demonstrations, discussions, and simulations. DOE booth #925 will also feature a display of high performance computing artifacts from past, present and future systems. Lab experts will also contribute to the SC19 conference program by leading tutorials, presenting technical papers, speaking at workshops, leading birds-of-a-feather discussions, and sharing ideas in panel discussions.”

ETH Zurich up for Best Paper at SC19 with Lossy Compression for Large Graphs

A team of researchers at ETH Zurich is working on a novel approach to solving increasingly large graph problems. “As the size of graph datasets grows larger, a question arises: Does one need to store and process the exact input graph datasets to ensure precise outcomes of important graph algorithms? After an extensive investigation into this question, the ETH researchers have been nominated for the Best Paper and Best Student Paper Awards at SC19.”
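To make the question concrete, here is a minimal, generic sketch of one lossy-compression idea: keep only a random sample of edges and rescale downstream estimates by the sampling rate. This is an illustration of the general concept, not the ETH Zurich team’s method; the graph, sampling rate, and the average-degree metric are assumptions made for the example.

```python
# Generic illustration of lossy graph compression via edge sampling;
# not the ETH Zurich approach from the SC19 paper.
import random

def sparsify(edges, keep_prob, seed=0):
    """Keep each edge independently with probability keep_prob."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < keep_prob]

def avg_degree(edges, num_nodes):
    return 2 * len(edges) / num_nodes

if __name__ == "__main__":
    n = 10_000
    rng = random.Random(42)
    # A random graph with ~100k edges stands in for a large input dataset.
    edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(100_000)]

    p = 0.1
    sample = sparsify(edges, p)

    exact = avg_degree(edges, n)
    # Unbiased estimate from the compressed graph: rescale by 1/p.
    approx = avg_degree(sample, n) / p
    print(f"exact avg degree from full graph: {exact:.2f}")
    print(f"estimate from 10% of the edges:   {approx:.2f}")
```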

Keys to Success for AI in Modeling and Simulation

In this special guest feature from Scientific Computing World, Robert Roe interviews Loren Dean from MathWorks on the use of AI in modeling and simulation. “If you just focus on AI algorithms, you generally don’t succeed. It is more than just developing your intelligent algorithms, and it’s more than just adding AI – you really need to look at it in the context of the broader system being built and how to intelligently improve it.”

Production Trial Shows Global Science Possible with CAE-1 100Gbps Link

In early November, A*CRC, ICM, and Zettar conducted a production trial over the newly built Collaboration Asia Europe-1 (CAE-1) 100Gbps link connecting Europe and Singapore. “The project has established a historical first,” said Zettar CEO Chin Fang. “For the first time over the newly built CAE-1 link, with a production setup at the ICM end, it has shown that moving data at great speed and scale between Poland (and thus Eastern Europe) and Singapore is a reality. Furthermore, although the project was initiated only in mid-October, all goals have been reached and a few new grounds have also been broken. It is also a true international collaboration.”

Accelerate Big Data and HPC applications with FPGAs using JupyterHub

Today InAccel announced that it has integrated JupyterHub into the company’s adaptive acceleration platform for FPGAs. InAccel provides an FPGA resource manager that allows the instant deployment, scaling and virtualization of FPGAs, making it easier than ever to use FPGA clusters for applications like machine learning, data processing, data analytics and many other HPC workloads.
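As a rough sketch of the workflow this enables, the snippet below shows the pattern a notebook user might follow: submit a kernel invocation to a shared resource manager and receive the result as a future. The FPGAManager class, its submit() method, and the vector_add kernel are hypothetical names invented for illustration (backed here by a thread pool), not InAccel’s actual API.

```python
# Hypothetical sketch of the notebook-to-resource-manager pattern; the names
# below are invented for illustration and are not InAccel's API.
from concurrent.futures import ThreadPoolExecutor

class FPGAManager:
    """Stand-in for an FPGA resource manager that schedules kernel requests
    across a pool of devices (emulated here with a thread pool)."""
    def __init__(self, num_devices=2):
        self._pool = ThreadPoolExecutor(max_workers=num_devices)

    def submit(self, kernel, *args):
        # A real manager would bind args to device buffers and queue the
        # request on a free FPGA; here we simply run the Python fallback.
        return self._pool.submit(kernel, *args)

def vector_add(a, b):
    # Placeholder for an accelerated kernel (e.g. a bitstream on the FPGA).
    return [x + y for x, y in zip(a, b)]

# In a Jupyter cell, each user session would talk to the shared manager:
manager = FPGAManager()
future = manager.submit(vector_add, [1, 2, 3], [4, 5, 6])
print(future.result())  # -> [5, 7, 9]
```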