In this video from ISC 2018, Takeo Hosomi from NEC describes how vector computing can accelerate Machine Learning workloads.
“Machine learning is the key technology for data analytics and artificial intelligence. Recent progress in this field opens opportunities for a wide variety of new applications. Our department has been at the forefront of developments in areas such as deep learning, support vector machines and semantic analysis for over a decade. Many of our technologies have been integrated into innovative products and services of NEC.”
NEC has a long history in High Performance Computing. Starting in the early 1980s, NEC has developed a product line of vector computers. With the rise of Linux-based clusters, NEC has provided complete solutions since the 1990s. NEC continues to innovate in this field with new solutions in hardware, software and concepts. Now NEC can announce the next major milestone of its supercomputing strategy, the SX-Aurora TSUBASA.
I’m only interested in the performance of one algorithm, the Walsh-Hadamard transform, which is the rate-limiting step in most of what I do. The out-of-place algorithm is extremely simple: go through the input vector pairwise, putting the sum of each pair sequentially in the lower half of a new array and the difference of each pair in the upper half. Repeat log2(n) times.
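For concreteness, here is a minimal sketch of that out-of-place pass in Python with NumPy (the function name and the power-of-two check are my own, and I’m assuming the usual Sylvester/Hadamard output ordering):

```python
import numpy as np

def wht_out_of_place(x):
    """Unnormalized Walsh-Hadamard transform, out-of-place variant.

    Each pass walks the vector pairwise, writing the pair sums into
    the lower half of a fresh array and the pair differences into the
    upper half, repeated log2(n) times.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    for _ in range(n.bit_length() - 1):  # log2(n) passes
        a, b = x[0::2], x[1::2]          # pairwise neighbours
        x = np.concatenate([a + b, a - b])
    return x

# Example: wht_out_of_place([1, 0, 1, 0]) -> [2., 2., 0., 0.]
```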
There is also an in-place algorithm. I don’t know which would be best for the NEC hardware; I’ll try to find a datasheet.
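For comparison, a sketch of the in-place version, the classic butterfly that overwrites its buffer instead of allocating a new array each pass (again my own naming, nothing NEC-specific):

```python
def wht_in_place(x):
    """Unnormalized WHT via the classic in-place butterfly.

    Produces the same output ordering as the out-of-place sketch
    above, but needs no temporary array.
    """
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):      # each block of 2h elements
            for j in range(i, i + h):     # butterfly within the block
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x
```

The trade-off is memory traffic versus access pattern: the out-of-place version reads with stride 2 and writes sequentially, while the in-place butterfly touches elements a growing stride apart, which is exactly the sort of detail a datasheet would help settle.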
The WHT may be the simplest algorithm in computer science that remains so little known and so little appreciated. You can use it for random projections, extreme learning machines, data multiplexing, dimension reduction, compressive sensing, etc.
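As one illustration of the random-projection use, here is an SRHT-style sketch: random sign flips, the fast transform, then subsampling. It reuses the wht_out_of_place sketch above, and all names and parameters are illustrative, not from any library:

```python
import numpy as np

def wht_random_projection(x, k, seed=None):
    """Project x down to k dimensions using the WHT (SRHT-style).

    Randomly flip signs, apply the fast transform with orthonormal
    scaling, then keep k random coordinates, rescaled so the expected
    squared norm is preserved. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    signs = rng.choice([-1.0, 1.0], size=n)
    y = wht_out_of_place(signs * x) / np.sqrt(n)   # orthonormal WHT
    idx = rng.choice(n, size=k, replace=False)
    return y[idx] * np.sqrt(n / k)
```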
Getting hardware manufacturers interested is difficult, but that is because the research community is not up to speed on it. However, build it and maybe the researchers will use it.