John Gustafson presents: Beyond Floating Point – Next Generation Computer Arithmetic


John Gustafson from A*STAR in Singapore

In this video, John Gustafson from the National University of Singapore presents: Beyond Floating Point: Next Generation Computer Arithmetic.

“A new data type called a ‘posit’ is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits. For any bit width, they have a larger dynamic range, higher accuracy, better closure under arithmetic operations, and simpler exception-handling. For example, posits never overflow to infinity or underflow to zero, and there is no ‘Not-a-Number’ (NaN) value. Posits should take up less space to implement in silicon than an IEEE float of the same size. With fewer gate delays per operation as well as lower silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators, in particular, could do more arithmetic per watt and per dollar yet deliver superior answer quality.”
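
For readers new to the format: a posit packs a sign bit, a variable-length run of “regime” bits, up to es exponent bits, and the remaining fraction bits into a single word. The Python sketch below decodes a posit under assumed parameters; nbits=8 and es=1 are illustrative choices, not values taken from the talk.

    def decode_posit(bits, nbits=8, es=1):
        # Sketch of posit decoding; nbits and es are assumed parameters.
        # Layout: sign | regime (run-length coded) | up to es exponent
        # bits | fraction, with value (-1)^sign * 2^(k*2^es + e) * (1 + f).
        if bits == 0:
            return 0.0                      # the single zero pattern
        if bits == 1 << (nbits - 1):
            return float("inf")             # the single exception value (+/-inf)
        sign = 1
        if bits & (1 << (nbits - 1)):       # negative posits are stored
            bits = (1 << nbits) - bits      # in two's complement
            sign = -1
        pos = nbits - 2                     # scan the regime run
        first = (bits >> pos) & 1
        run = 0
        while pos >= 0 and ((bits >> pos) & 1) == first:
            run += 1
            pos -= 1
        k = run - 1 if first else -run
        pos -= 1                            # skip the terminating regime bit
        e = 0                               # exponent: up to es bits; bits
        for _ in range(es):                 # cut off at the word edge read as 0
            e <<= 1
            if pos >= 0:
                e |= (bits >> pos) & 1
                pos -= 1
        nfrac = pos + 1                     # fraction: whatever bits remain
        frac = bits & ((1 << nfrac) - 1) if nfrac > 0 else 0
        fraction = frac / (1 << nfrac) if nfrac > 0 else 0.0
        return sign * (1 + fraction) * 2.0 ** (k * (1 << es) + e)

    print(decode_posit(0x40))   # 1.0
    print(decode_posit(0x7F))   # 4096.0: the largest 8-bit posit; no overflow pattern exists
    print(decode_posit(0xC0))   # -1.0

Note how the tapered regime field leaves more fraction bits for values near 1, which is where the accuracy advantage over same-size floats comes from.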

Gustafson describes a series of comprehensive benchmarks that compare how many decimals of accuracy various number formats can produce for a set number of bits per value. Low-precision posits provide a better solution than “approximate computing” methods that try to tolerate decreases in answer quality. High-precision posits provide better answers (more correct decimals) than floats of the same size, suggesting that in some cases a 32-bit posit may do a better job than a 64-bit float. In other words, posits beat floats at their own game.
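
The “decimals of accuracy” yardstick can be made concrete. Here is a minimal sketch of the measure Gustafson uses in his posit writings, assuming both values are positive and nonzero (the function name is ours):

    import math

    def decimal_accuracy(computed, exact):
        # Number of correct decimal digits, measured as -log10 of the
        # decimal error in the ratio; an exact match scores infinity.
        # Assumes both values are positive and nonzero.
        if computed == exact:
            return math.inf
        return -math.log10(abs(math.log10(computed / exact)))

    print(decimal_accuracy(3.14159, math.pi))   # ~6.4, i.e. about six correct decimals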

Dr. John L. Gustafson is an applied physicist and mathematician. He is a former Director at Intel Labs and a former Chief Product Architect at AMD. A pioneer in high-performance computing, he introduced cluster computing in 1985 and first demonstrated scalable massively parallel performance on real applications in 1988. The scaled-speedup argument behind that work became known as Gustafson’s Law, and the demonstration won him the inaugural ACM Gordon Bell Prize. He is also a recipient of the IEEE Computer Society’s Golden Core Award.


Comments

  1. It seems like this should be adopted sooner rather than later for exascale. Can Fujitsu, Intel, and Nvidia integrate this into their microarchitectures? If so, it may solve some of the data locality issues, as well as the precision issues, faced by their exascale architectures due out around 2020.

  2. Wonderful ideas.

    I was attracted to interval arithmetic for a while, but as the example shows, intervals keep expanding with each operation until they become useless (see the sketch at the end of this comment).

    I wonder how fast (or slow) an assembly implementation would be compared to hardware floating point. What is the smallest posit size at which the arithmetic is about as fast as IEEE 754 hardware? What is the accuracy at that size?

    Someone with FPGA skills should try to implement this.

    What a pity that floating point processors are integrated with main CPUs these days. If we still had separate floating point coprocessors, we could just drop a posit coprocessor in. (I know, interchip latency is why they’re integrated.)
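
    A toy sketch of the interval blow-up mentioned above (this Interval class is hypothetical, just to show the dependency problem):

        class Interval:
            # Naive closed-interval arithmetic: operands are treated as
            # independent, so correlations are lost and widths only grow.
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)
            def __mul__(self, other):
                ps = [a * b for a in (self.lo, self.hi)
                            for b in (other.lo, other.hi)]
                return Interval(min(ps), max(ps))
            def __repr__(self):
                return f"[{self.lo:g}, {self.hi:g}]"

        x = Interval(0.9, 1.1)
        print(x - x)       # [-0.2, 0.2] instead of the exact 0
        print(x * x * x)   # [0.729, 1.331]: the width compounds with every operation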