The Role of Middleware in Optimizing Vector Processing


Download the whitepaper.

A new whitepaper from NEC X delves into the world of unstructured data and explores how vector processors and their optimization software can help solve the challenges of wrangling the ever-growing volumes of data generated globally.

Big data brings big benefits but also big challenges for existing volume-server compute architectures, because much of the data is unstructured and difficult to process with traditional approaches. Efficiently revealing the insights hidden within that unstructured data requires new software tools, processors, and memory configurations, ideally without forcing developers to learn new coding languages.

NEC’s SX-Aurora TSUBASA vector computer architecture and Frovedis middleware address these challenges while scaling from a single PCIe card to an entire server at far less cost and with higher performance than an approach solely using scalar processors. In short, vector processing with SX-Aurora TSUBASA will play a key role in changing the way big data is handled while stripping away the barriers to achieving even higher performance in the future.

This whitepaper includes the following compelling sections that focus on the role of middleware in optimizing vector processing:

  • The Unstructured World of Big Data
  • Vectors Victorious
  • Moving into the Mainstream
  • Optimization with Frovedis
  • Proof of Performance
  • Summary

Any vector-based computer can perform only as well as the software employed to optimize it. For the SX-Aurora TSUBASA, this is achieved using open-source middleware called Frovedis (FRamework for VEctorized and DIStributed data analytics), the result of five years of development by NEC. Frovedis was designed from the start to give developers the tools they need to target the vector processor and to begin implementing and deploying applications quickly.
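As a rough illustration of what "without learning new coding languages" means in practice, the sketch below trains a model through Frovedis's scikit-learn-style Python interface. The module paths, the server-launch command, and the FROVEDIS_SERVER environment variable are assumptions drawn from the public Frovedis documentation rather than details taken from the whitepaper, so verify them against your installation.

```python
# Minimal sketch, assuming Frovedis's scikit-learn-style Python interface.
# Module paths and the server-launch command are assumptions; check them
# against the Frovedis documentation for your installation.
import os
import numpy as np

from frovedis.exrpc.server import FrovedisServer            # assumed module path
from frovedis.mllib.linear_model import LogisticRegression  # assumed module path

# Start the Frovedis server processes that execute on the vector engine.
# FROVEDIS_SERVER is assumed to point at the installed frovedis_server binary.
FrovedisServer.initialize("mpirun -np 4 " + os.environ["FROVEDIS_SERVER"])

# Toy data; in practice this would be a large matrix derived from unstructured data.
X = np.random.rand(1000, 20).astype(np.float64)
y = (X[:, 0] > 0.5).astype(np.float64)

# The estimator mirrors scikit-learn's fit/predict API, so existing Python
# code needs little more than a changed import to run on the vector engine.
clf = LogisticRegression(max_iter=100)
clf.fit(X, y)
pred = clf.predict(X)
print("training accuracy:", (pred == y).mean())

FrovedisServer.shut_down()
```

The point of the sketch is the familiar interface: the heavy lifting moves to the vector engine behind the scenes, while the application code keeps the shape that data scientists already know.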

Download the new whitepaper courtesy of NEC X, "The Role of Middleware in Optimizing Vector Processing," to learn how to address a situation in which exponentially growing data volumes, and the difficulty of working with them, are outpacing the capabilities of data center compute platforms, especially for applications where analysis must be performed in near real time.