Podcast: Supercomputing the Human Heart

Different views of the reconstructed finite element (FE) model for the ovine mitral valve apparatus with anatomically accurate leaflets represented by shell elements.

Over at TACC, Jorge Salazar writes that new supercomputer simulations are helping doctors improve the repair and replacement of heart valves. “New supercomputer models have come closer than ever to capturing the behavior of normal human heart valves and their replacements, according to recent studies by groups including scientists at the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin and the Department of Mechanical Engineering at Iowa State University.”

Podcast: HP & Intel Accelerate HPC Solutions

Bill Mannel, VP & GM of HPC & Big Data Business Unit at HP

In this Intel Chip Chat podcast, Bill Mannel from HP stops by to discuss the growing demand for high performance computing solutions and the innovative use of HPC to manage big data. He highlights an alliance between Intel and HP that will accelerate HPC and big data solutions tailored to meet the latest needs and workloads of HPC customers, leading with customized vertical solutions.

Slidecast: Seagate’s New Nytro Flash Products

“These new flash products greatly expand the range of our total product portfolio and demonstrate how Seagate’s acquisition of the LSI flash technologies is paying off. The Nytro XF1440/XM1440 SSDs deliver the highest performance in the smallest power envelope. The XP6500 flash accelerator card provides ultra-low latency capability for applications that require fast logging and produce significantly higher transactions per second, something today’s applications demand.”

Slidecast: IBM Platform Data Manager for LSF

“IBM Platform Data Manager for LSF takes control of data transfers to help organizations improve data throughput and lower costs by minimizing wasted compute cycles and conserving disk space. Platform Data Manager automates the transfer of data used by application workloads running on IBM Platform LSF clusters and the cloud, bringing frequently used data closer to compute resources by storing it in a smart, managed cache that can be shared among users and workloads.”
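The managed-cache idea in that description can be pictured with a short, hypothetical Python sketch: stage an input file into a shared cache once, then let later jobs read the cached copy instead of re-transferring it. The cache location, hashing scheme, and function name below are illustrative assumptions, not IBM Platform Data Manager's actual implementation or API.

    # Illustrative sketch only: a shared, content-addressed staging cache for
    # cluster jobs. Nothing here is IBM Platform Data Manager code.
    import hashlib
    import shutil
    from pathlib import Path

    CACHE_DIR = Path("/scratch/data_cache")   # hypothetical shared cache location

    def stage_input(source: Path) -> Path:
        """Copy `source` into the shared cache if it is not already there,
        and return the cached path that jobs should read from."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        key = hashlib.sha256(str(source).encode()).hexdigest()[:16]
        cached = CACHE_DIR / f"{key}_{source.name}"
        if not cached.exists():                # transfer only on a cache miss
            shutil.copy2(source, cached)
        return cached

    # A job would then read its input from the returned cache path, so repeated
    # workloads share one transfer instead of copying the same file again.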

Radio Free HPC Looks at 3D XPoint Non-Volatile Memory

In this video, the Radio Free HPC team looks at the newly announced 3D XPoint technology from Intel and Micron. “3D XPoint ushers in a new class of non-volatile memory that significantly reduces latencies, allowing much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage.”

Slidecast: IBM High Performance Services for Technical Computing in the Cloud

In this slidecast, Chris Porter and Jeff Kamiol from IBM describe how IBM High Performance Services deliver versatile, application-ready clusters in the cloud for organizations that need to quickly and economically add computing capacity for high performance application workloads.

Radio Free HPC Looks Back at ISC 2015

In this video, Dan Olds and Rich Brueckner from Radio Free HPC discuss the latest news in High Performance Computing from the ISC 2015 conference in Frankfurt, Germany.

Slidecast: HPC & Big Data Update from HP

“As data explodes in volume, velocity and variety, and the processing requirements to address business challenges become more sophisticated, the line between traditional and high performance computing is blurring,” said Bill Mannel, vice president and general manager, HPC and Big Data, HP Servers. “With this alliance, we are giving customers access to the technologies and solutions as well as the intellectual property, portfolio services and engineering support needed to evolve their compute infrastructure to capitalize on a data driven environment.”

Radio Free HPC Looks at Supercomputing Global Flood Maps

In this podcast, the Radio Free HPC team looks at how the KatRisk startup is using GPUs on the Titan supercomputer to calculate global flood maps. “KatRisk develops event-based probabilistic models to quantify portfolio aggregate losses and exceeding probability curves. Their goal is to develop models that fully correlate all sources of flood loss including explicit consideration of tropical cyclone rainfall and storm surge.”
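For readers unfamiliar with the term, an exceeding (exceedance) probability curve simply ranks simulated annual losses and reports how often each loss level is met or exceeded. The Python sketch below builds such a curve from synthetic numbers; it is a minimal illustration of the concept, not KatRisk's model or data.

    # Minimal sketch of an exceedance probability (EP) curve from simulated
    # annual portfolio losses. The loss figures are synthetic placeholders.
    import numpy as np

    def exceedance_curve(annual_losses):
        """Return loss thresholds (descending) and the empirical probability
        that a simulated year's aggregate loss meets or exceeds each one."""
        losses = np.sort(np.asarray(annual_losses))[::-1]    # largest loss first
        probs = np.arange(1, len(losses) + 1) / len(losses)  # rank / simulated years
        return losses, probs

    rng = np.random.default_rng(42)
    simulated = rng.lognormal(mean=12.0, sigma=1.5, size=10_000)  # 10,000 synthetic years
    loss, prob = exceedance_curve(simulated)

    # The 1-in-100-year loss is the threshold exceeded with probability 0.01.
    print("Approx. 100-year loss:", loss[np.searchsorted(prob, 0.01)])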

Podcast: Supercomputing Flood Maps Using the Titan Supercomputer

In this NPR podcast, Dag Lohmann describes how his startup company called KatRisk is using the Titan supercomputer at ORNL to create detailed flood maps for use by insurance companies.