Numascale Powers Big Data Analytics with Transtec


In this video, Einar Rustad of Numascale describes how the company works with Transtec to deliver Big Data analytics solutions.

At ISC 2015, Numascale announced record-breaking results from a shared memory system running the McCalpin STREAM benchmark, a synthetic benchmark that measures sustainable memory bandwidth and the corresponding computation rate for simple vector kernels. Numascale’s cache coherent shared memory system, targeted at big data analytics, reached 10.06 TBytes/second on the Scale kernel. That result is 53% higher than the next most scalable system on the list, which achieved 6.59 TBytes/second.

Numascale’s record-breaking system is the first part of a large cloud computing installation at a North American customer data center, built for the analytics and simulation of sensor data combined with historical data. The system runs analytic models that simulate complex dynamic behavior in a particular supply chain. The data sets are large, and the models use both historical and near-real-time data to predict behavior. Data sets of this size require large memory and short access times to complete computations within deadlines.

Check out our complete coverage of ISC 2015, and sign up for our insideHPC Newsletter.