Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies such as multi-core processors, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and Big Data analytics systems. Unfortunately, a single general-purpose, balanced system cannot serve all of these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies such as multi-core processors, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and Big Data analytics systems. Unfortunately, a single general-purpose, balanced system cannot serve all of these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads

Many new technologies used in High Performance Computing (HPC) have made new application areas possible. Advances such as multi-core processors, GPUs, and NVMe storage have created application verticals that include accelerator-assisted HPC, GPU-based deep learning, fast storage and parallel file systems, and Big Data analytics systems. In this special insideHPC technology guide sponsored by our friends over at Tyan, we look at practical hardware design strategies for modern HPC workloads.
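
To make the price-to-performance comparison concrete, here is a minimal sketch (not taken from the guide; all configurations, prices, and throughput figures below are hypothetical placeholders) of how one might rank candidate system designs for a single workload:

    # Hypothetical sketch: ranking candidate system configurations by
    # price-to-performance for one workload. Every number below is an
    # illustrative placeholder, not a benchmark result.
    configs = [
        # (name, estimated cost in USD, workload throughput in arbitrary units)
        ("CPU-only node",        8_000, 1.0),
        ("Node + 2 GPUs",       22_000, 5.5),
        ("Node + 4 GPUs, NVMe", 38_000, 9.0),
    ]

    def dollars_per_unit(cost, throughput):
        """Lower is better: cost per unit of delivered throughput."""
        return cost / throughput

    for name, cost, perf in sorted(configs, key=lambda c: dollars_per_unit(c[1], c[2])):
        print(f"{name:22s} ${dollars_per_unit(cost, perf):>8,.0f} per throughput unit")

The ranking changes as the workload changes, which is the guide's point: a design that wins for GPU-based deep learning may lose badly for storage-bound analytics, so no single balanced system fits every vertical.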

Innovations in Simulation, HPC, Big Data and AI at Teratec Digital Forum 2020 — Oct. 13-14

On October 13 and 14, the digital edition of the next Teratec Forum will review the latest international advances in simulation, HPC, Big Data and AI. The virtual exhibition will showcase the latest technologies from nearly 50 exhibitors, including manufacturers, software publishers, and suppliers and integrators of hardware, software and services solutions, as well as universities […]

Qumulo Launches on AWS Outposts for File Storage and Data Management

SEATTLE – Sept. 15, 2020 – Qumulo, a cloud file data platform that helps organizations store and manage file data, today announced availability on AWS Outposts. AWS Outposts is a managed service that extends Amazon Web Services (AWS) infrastructure, AWS services, APIs, and tools to any data center, colocation space, or on-premises facility and is designed for […]

Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges Artificial Intelligence and high performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data.”

The Role of Middleware in Optimizing Vector Processing

A new whitepaper from NEC X delves into the world of unstructured data and explores how vector processors and their optimization software can help solve the challenges of wrangling the ever-growing volumes of data generated globally. “In short, vector processing with SX-Aurora TSUBASA will play a key role in changing the way big data is handled while stripping away the barriers to achieving even higher performance in the future.”
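
As a rough illustration of the idea behind vector processing (a generic sketch using plain NumPy, not NEC's SX-Aurora TSUBASA software stack), compare an element-by-element loop with a vectorized operation expressed over whole arrays:

    # Generic illustration of vector processing: operate on whole arrays
    # at once instead of element by element. Plain NumPy, not the
    # SX-Aurora TSUBASA toolchain.
    import time
    import numpy as np

    n = 2_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # Scalar-style loop: one multiply-add per Python iteration.
    t0 = time.perf_counter()
    out_loop = [a[i] * b[i] + 1.0 for i in range(n)]
    t_loop = time.perf_counter() - t0

    # Vectorized form: the operation is expressed over the arrays,
    # letting optimized kernels (and hardware vector units) process
    # wide chunks of data per instruction.
    t0 = time.perf_counter()
    out_vec = a * b + 1.0
    t_vec = time.perf_counter() - t0

    print(f"loop: {t_loop:.3f}s   vectorized: {t_vec:.3f}s")

Dedicated vector processors push this same principle much further in hardware, which is why the optimization middleware discussed in the whitepaper matters for keeping those wide execution units fed.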

Is Your Storage Infrastructure Ready for the Coming AI Wave?

In this new whitepaper from our friends over at Panasas, we take a look at whether your storage infrastructure is ready for the demanding requirements of AI workloads. AI promises not only to create entirely new industries but also to fundamentally change the way organizations large and small conduct business. IT planners need to start revising their storage infrastructure now to prepare their organizations for the coming AI wave.

Case Study: Magseis Fairfield Uses a Sea of Data to Support Environmentally Responsible Energy Exploration

This whitepaper presents a compelling HPC data storage case study highlighting the use of Panasas ActiveStor® by Magseis Fairfield, a geophysics firm that specializes in providing seismic 3D and 4D data acquisition services to exploration and production (E&P) companies. The whitepaper, “Magseis Fairfield Uses a Sea of Data to Support Environmentally Responsible Energy Exploration,” […]