Angstrom Microsystems Announces xBLAS for GPUs


Angstrom Microsystems today announced the release of xBLAS, a highly tuned implementation of the Basic Linear Algebra Subprograms (BLAS).  The fun part is, it runs on GPUs.  From what I can gather from the parent article, xBLAS is compatible with the single-precision ATLAS implementation of BLAS.  For those running apps built against ATLAS, no code changes are required.
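
To make the "no code changes" claim concrete, here is a minimal sketch of what an ATLAS-style CBLAS call looks like in application code.  The assumption (mine, not spelled out in the article) is that xBLAS exposes the same cblas.h interface, so a program like this would only need to be relinked against the new library:

    /* Minimal sketch of a standard ATLAS/CBLAS single-precision GEMM call.
       Assumption: xBLAS exposes the same cblas.h interface, so this code
       would only need to be relinked, not rewritten. */
    #include <cblas.h>
    #include <stdio.h>

    int main(void)
    {
        enum { N = 512 };
        static float A[N * N], B[N * N], C[N * N];

        for (int i = 0; i < N * N; i++) {
            A[i] = 1.0f;   /* fill with something easy to check */
            B[i] = 2.0f;
        }

        /* C = 1.0 * A * B + 0.0 * C, single precision (sgemm). */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    N, N, N,
                    1.0f, A, N,
                          B, N,
                    0.0f, C, N);

        printf("C[0] = %f\n", C[0]);   /* expect 1024.0 for these inputs */
        return 0;
    }

Build it against stock ATLAS with something like cc -std=c99 sgemm_demo.c -lcblas -latlas, then swap in the vendor library at link time.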

“Angstrom is excited to provide an ATLAS BLAS-compatible library that not only blows the doors off of existing performance numbers, it is designed to fit hand-in-glove with any existing customer software without any significant modifications,” said Lalit Jain, CEO of Angstrom Microsystems. “Angstrom has done the work to go the last mile, enabling the customer to rapidly deploy xBLAS with their existing code base.”

Angstrom claims that their implementation delivers up to a 300x improvement over ATLAS.  No word on which GPU was used for the test.  From the link embedded in the parent article, I gather that the figure refers specifically to the single-precision sgemm routine.  For those out there looking to use xBLAS to bump their Top500 ranking, keep in mind that Linpack relies mostly on dgemm.  The one performance chart can be had here.
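
For reference, the double-precision counterpart that Linpack exercises differs from the call above only in the routine name and the element type.  A quick sketch (the wrapper name multiply_double is mine, purely for illustration):

    /* Sketch of the double-precision routine (dgemm) that Linpack leans on.
       Illustrative only; the 300x figure quoted above is for single precision. */
    #include <cblas.h>

    void multiply_double(int n, const double *A, const double *B, double *C)
    {
        /* C = 1.0 * A * B + 0.0 * C, double precision. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, A, n,
                         B, n,
                    0.0, C, n);
    }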

For more info, read the full article here.

Now, who’s up for a battle against CUDA?

Comments

  1. Jason Riedy says

    The XBLAS are the extended precision BLAS, see http://crd.lbl.gov/~xiaoye/XBLAS/ . That happens to be the first Google hit for “xblas”. And now a vendor has decided to take a dump on the name and make it mean “puny, imprecise BLAS”. sigh.

  2. Excellent bit of info, Jason. Thanks for the post!