Deep Learning Systems Analyze Periscope Streams on a Supercomputer


Today Dextro announced that Orange Silicon Valley will use Dextro’s Stream application to demonstrate the “Exascale” supercomputing platform, in collaboration with Echostreams and CocoLink. The Exascale one-unit system can do what previously took dozens of servers while making the learning speed of Dextro’s algorithms more than 5x faster. Stream, launched last spring, uses Dextro’s powerful computer vision technology to analyze and interpret live streamed video, making live broadcast content online as searchable and categorized as the static web. The live demonstration will take place at the Echostreams booth #582 at SC15.

“Every minute an incredible amount of video is uploaded for the world to see, but until Stream, there was no easy way to sift through all of that information to find personally interesting or relevant content,” said David Luan, CEO of Dextro. “This requires a ton of processing power, not only for the baseline analysis of high-resolution images, but also in training the system to get better at identifying and categorizing content in real-time. The Exascale system condenses what previously took dozens of servers into one unit, while making the learning speed of our algorithms more than five times faster. This is a revolutionary feat for power-intensive applications like ours.”

The Exascale platform was prototyped by Orange Silicon Valley in collaboration with Echostreams and CocoLink. The system aims to deliver extreme computational density, with 20 Nvidia GPUs on a single PCIe root complex. It pushes the envelope of high-performance server design, with unmatched core density exceeding 60,000 CUDA cores in a single server and innovative thermal engineering that avoids liquid cooling. Combined with the world’s most advanced video training and analytics software, the platform marks a significant advance in the endeavour to accelerate A.I.
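As a rough illustration of the core-density claim, the minimal sketch below uses the standard CUDA runtime API to enumerate every GPU visible to a single host and tally their streaming multiprocessors. It is not the partners’ actual tooling, and the cores-per-SM figure is an assumption that varies by GPU architecture; the 20-GPU, 60,000-core configuration is specific to the prototype described above.

```cuda
// count_cuda_cores.cu -- sketch: enumerate all GPUs in one node and
// estimate the total CUDA core count from their SM counts.
// Build with: nvcc count_cuda_cores.cu -o count_cuda_cores
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    int totalSMs = 0;
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("GPU %2d: %s, %d SMs, compute capability %d.%d\n",
               dev, prop.name, prop.multiProcessorCount, prop.major, prop.minor);
        totalSMs += prop.multiProcessorCount;
    }

    // Assumption: 128 CUDA cores per SM (typical of Maxwell-era parts);
    // the true figure depends on each GPU's compute capability.
    const int assumedCoresPerSM = 128;
    printf("%d GPUs, %d SMs, ~%d CUDA cores (assuming %d cores/SM)\n",
           deviceCount, totalSMs, totalSMs * assumedCoresPerSM, assumedCoresPerSM);
    return 0;
}
```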

“It is now possible to run deep learning over massive volumes of video data at high speed and to perform contextual analysis over several hundred streams in real-time. With our partners, we have prototyped an advanced video analytics capability that can efficiently exploit a supercomputer in a box at the edge of our network, so we can envision a convergence between A.I. and Exascale,” said Jerome Ladouar, VP Infrastructure, Technologies and Engineering at Orange. “As content streaming volume increases exponentially over our pipes, hyper-efficiency in our infrastructure will enable us to meet computationally demanding challenges in real-time and enhance our end-user experience.”

Since launching in April 2015, Dextro’s Stream app has analyzed more than a million streams on the Periscope live streaming platform. Stream showcases in real-time which topics users are streaming most often, and how interests change throughout the day across time zones and around the globe. It is built on Dextro’s powerful computer vision API, which allows companies to automatically understand or curate the data deluge created by the countless hours of video content being generated. Dextro’s technology moves beyond spotty or non-existent metadata such as user-generated hashtags and instead focuses on the actual visual content within the frame of the video.

“We are pleased to demonstrate our HPC platform with advanced vision machine learning companies such as Dextro to fully harness the power of Nvidia GPU processors,” said Andy Lee, Director of Product Marketing at EchoStreams. “With guidance from Orange Silicon Valley and our partnership with CocoLink, we are able to create a robust, scalable, and high-performance compute server.”

See our complete coverage of SC15. Sign up for our insideHPC Newsletter.
