
Nvidia in the Driver’s Seat for Deep Learning

In this special guest feature, Robert Roe from Scientific Computing World describes why Nvidia is in the driver’s seat for Deep Learning. Nvidia CEO Jen-Hsun Huang’s opening keynote centered on “a new computing model.” Huang explained that Nvidia builds computing technologies for the most demanding computer users in the world, and that the most demanding applications require GPU acceleration. “The computers you need aren’t run of the mill. You need supercharged computing, GPU-accelerated computing,” said Huang.

Cowboy Supercomputer Powers Research at Oklahoma State

In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous, or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars. These sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy. It was funded by a 2011 National Science Foundation grant, and it has been serving us very well.”

Radio Free HPC Trip Reports from ASC16 & MSST

In this podcast, the Radio Free HPC team recaps the ASC16 Student Cluster Competition in China and the 2016 MSST Conference in Santa Clara. Dan spent a week in Wuxi interviewing ASC16 student teams and came back impressed with the LINPACK benchmark tricks from the team at Zhejiang University, which set a new student LINPACK record of 12.03 TFlop/s. Meanwhile, Rich was in Santa Clara for the MSST conference, where he captured two days of talks on Mass Storage Technologies.

Video: Accelerating Code at the GPU Hackathon in Delaware

In this video from the GPU Hackathon at the University of Delaware, attendees tune their code to accelerate their application performance. The 5-day intensive GPU programming Hackathon was held in collaboration with Oak Ridge National Lab (ORNL). “Thanks to a partnership with NASA Langley Research Center, Oak Ridge National Laboratory, National Cancer Institute, National Institutes of Health (NIH), Brookhaven National Laboratory and the UD College of Engineering, UD students had access to the world’s second largest supercomputer — the Titan — to help solve real-world problems.”

GPU-Powered Systems Take Top Spot & Set Performance Records at ASC16

Over at the Nvidia Blog, George Millington writes that, for the fourth consecutive year, the Nvidia Tesla Accelerated Computing Platform helped set new milestones at the Asia Student Supercomputer Challenge, the world’s largest student supercomputer competition.

Hewlett Packard Enterprise Packs 8 GPUs into Apollo 6500 Server

In this video from the 2016 GPU Technology Conference, Greg Schmidt from Hewlett Packard Enterprise describes the new Apollo 6500 server. “With up to eight high performance NVIDIA GPU cards designed for maximum transfer bandwidth, the HPE Apollo 6500 System is purpose-built for deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor and efficient design enable organizations to run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.”

How HPE Makes GPUs Easier to Program for Data Scientists

In this video from the 2016 GPU Technology Conference, Rich Friedrich from Hewlett Packard Enterprise describes how the company makes it easier for Data Scientists to program GPUs. “In April, HPE announced a public, open-source version of the platform called the Cognitive Computing Toolkit. Instead of relying on the traditional CPUs that power most computers, the Toolkit runs on graphics processing units (GPUs), inexpensive chips designed for video game applications.”

Video: AMD ROC – Radeon Open Compute Platform

Gregory Stoner from AMD presented this talk at the HPC User Forum. “With the announcement of the Boltzmann Initiative and the recent releases of ROCK and ROCR, AMD has ushered in a new era of Heterogeneous Computing. The Boltzmann Initiative exposes cutting-edge compute capabilities and features on targeted AMD/ATI Radeon discrete GPUs through an open source software stack. The Boltzmann stack comprises several components based on open standards, extended so that important hardware capabilities are not hidden by the implementation.”

Exxact to Distribute NVIDIA DGX-1 Deep Learning System

The NVIDIA DGX-1 features up to 170 teraflops of half-precision (FP16) peak performance, 8 Tesla P100 GPU accelerators with 16GB of memory per GPU, a 7TB SSD DL cache, and an NVLink Hybrid Cube Mesh. Packaged with fully integrated hardware and easily deployed software, it is the world’s first system built specifically for deep learning, powered by NVIDIA’s Pascal-based Tesla P100 accelerators interconnected with NVLink. NVIDIA designed the DGX-1 to meet the never-ending computing demands of artificial intelligence and claims it can provide the throughput of 250 CPU-based servers in a single box.
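The “up to 170 teraflops” figure is simply the aggregate of the eight GPUs’ half-precision peaks. As a rough sanity check, assuming the commonly cited ~21.2 TFLOPS FP16 peak for the SXM2 Tesla P100 (a figure from NVIDIA’s published specs, not from this article):

```python
# Rough sanity check of the DGX-1's quoted "up to 170 teraflops" FP16 peak.
# Assumes ~21.2 TFLOPS FP16 peak per Tesla P100 (SXM2 variant) -- an external
# spec-sheet figure, not stated in the article above.
P100_FP16_PEAK_TFLOPS = 21.2
NUM_GPUS = 8

system_peak = P100_FP16_PEAK_TFLOPS * NUM_GPUS  # aggregate over all 8 GPUs
print(f"Aggregate FP16 peak: {system_peak:.1f} TFLOPS")  # ~169.6, i.e. "up to 170"
```

Note that this is a theoretical peak; sustained deep-learning throughput also depends on memory bandwidth and the NVLink interconnect topology.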

Radio Free HPC Recaps the GPU Technology Conference

In this podcast, the Radio Free HPC team recaps the GPU Technology Conference, which wrapped up last week in San Jose.
Since Rich is traveling around in some desert somewhere, Dan and Henry go it alone, discussing the new Pascal-based P100 GPU, NVIDIA’s new DGX-1 server, and what happened at the concurrent OpenPOWER conference.