It was certainly not my intent to demean your work which I found extremely valuable for validating and improving the background for my patent for the mitigation of floating point error.

I certainly recognize the difficulty you have had getting your work recognized. Making serious contributions in business, protected by NDAs, and in government, protected by security classifications, makes it difficult for us to be recognized in the academic world. Yet you and I have both striven to solve this problem, one that has cost lives, and likely other losses that we have no way of attributing to floating point error.

I have reread your paper, and the number of places I highlighted reminds me of how valuable it has been for me.

The format for storing e of the couple (x, e), Section 2.1 of your paper, is however unclear to me. Unlike interval arithmetic, it could provide a more rapid evaluation of the precision of the result.

I have found that during the computation of floating point operations, two kinds of error develop and interact: rounding and cancellation, where rounding is a linear error and cancellation is an exponential error. For a long time I thought of this as an "apples and oranges" kind of problem, until I realized that I could compute and store the logarithm of an error bound. It is a bit tricky to compute the logarithm of the accumulated rounding error, but I have a scheme that works pretty well.
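The linear-versus-exponential distinction can be sketched in a few lines of Python. This is only my illustrative bookkeeping, not the patented scheme: cancellation shows up as a drop in the result's binary exponent, while each rounded operation contributes at most half an ulp.

```python
import math

def sub_with_error_track(x, y, lost_bits=0, rounding_ops=0):
    """Toy sketch: subtract y from x, estimating how many significant
    bits were lost to cancellation (the exponent drop) while counting
    rounding operations separately. Purely illustrative bookkeeping,
    not the method of the patent."""
    r = x - y
    # frexp decomposes a float as m * 2**e with 0.5 <= |m| < 1.
    _, ex = math.frexp(max(abs(x), abs(y)))
    if r != 0.0:
        _, er = math.frexp(r)
        lost_bits += max(0, ex - er)   # cancellation: leading bits wiped out
    rounding_ops += 1                  # each op adds at most 1/2 ulp of rounding
    return r, lost_bits, rounding_ops

r, lost, ops = sub_with_error_track(1000.0, 999.1)
# 1000 - 999.1 wipes out about 10 leading bits, so `lost` is 10.
```

The exponent drop grows with every ill-conditioned subtraction (the exponential error), while the operation count grows by one per operation (the linear error), which is what makes a single logarithmic bound attractive.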

I would have liked to have seen in your paper examples of problems like:

1000 - 999.1, or Sum(0.1, i = 1..500)

With bounded floating point these look like:

W:\>bfp64 1000 - 999.1

0.90000000

W:\>bfp64 500 S .1

50.00000

Accurate to ±1 ulp (base 10).
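For comparison, here is what the same two problems do in ordinary IEEE 754 double precision (a plain Python sketch, independent of the bfp64 tool):

```python
# Ordinary IEEE 754 doubles, for comparison with the bfp64 results above.
r = 1000.0 - 999.1      # 999.1 is not exactly representable; cancellation
                        # magnifies its representation error
print(r)                # 0.9000000000000909, not 0.9

s = 0.0
for _ in range(500):    # naive left-to-right accumulation of 0.1
    s += 0.1
print(s)                # close to 50, but not exactly 50.0
```

The subtraction is wrong from the 13th decimal place on, even though both operands carry roughly 16 digits, and the loop result drifts because each of the 500 additions rounds.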

It seems to me that nearly all attempts at mitigating floating point error simply ignore cancellation (with an occasional mention of "instability"). Goldberg 1991, What Every Computer Scientist Should Know …, defines "catastrophic cancellation" and "benign cancellation," but I have found that an accumulation of benign cancellation sometimes overshadows even voluminous rounding errors.
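Goldberg's two kinds of cancellation are easy to reproduce. The sketch below uses the classic quadratic-formula case (illustrative values of my choosing, not drawn from the patent): rewriting the expression turns a catastrophic subtraction into a benign one.

```python
import math

# Quadratic a*x^2 + b*x + c with roots near 1e8 and 1e-8.
a, b, c = 1.0, -1e8, 1.0
d = math.sqrt(b * b - 4.0 * a * c)   # d is very close to |b|

naive  = (-b - d) / (2.0 * a)   # catastrophic: -b and d agree to ~16
                                # digits, so their difference keeps
                                # almost no correct digits
stable = (2.0 * c) / (-b + d)   # algebraically the same root, but the
                                # harmful subtraction is eliminated
# The true small root is 1e-8 (to 16 digits); `naive` is off by roughly
# 25%, while `stable` is correct to machine precision.
```

Each individual rounding here is benign; it is the final subtraction that promotes the accumulated representation error of `d` into a loss of nearly all significant digits.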

Apparently you have read at least some of my patent, and I really appreciate that. Most scholars do not appreciate the patent process and the effort required to develop a rigorous patent. The patent that was actually granted was the third submission, and the submission date was the date of that third submission.

Thank you for your comments, perhaps we can form another line of communication via my website, BoundedFloatingPoint.com.

Alan A. Jorgensen, BS EE, Ph.D. CS

My original work dates back to 1991!

In January 2012 I published on arXiv the paper that is cited: https://arxiv.org/abs/1201.5975, a revision of my earlier work that takes into account the reception that followed and subsequent developments.

To tell the truth, although this is a decades-old and painful problem, it has not received much attention from researchers in the field, perhaps because of the false conviction that little can be done about it and that we simply have to live with it. After all, the great majority of computational problems are not so critical, so the emphasis so far has been placed on speed of computation, neglecting robustness and accuracy.

Therefore I cannot count many citations, so I am glad of this one, even if it is in negative terms.

However, I don't think that Prof. Jorgensen's judgment of my work is fair!

I understand that in a patent one has to emphasize the merits of what is proposed while placing the alternatives in a bad light, but it would be great to be fair even in these circumstances.

In his patent he dismisses my work with a few words: “This technique increases required storage space, adds computation time and does not provide bounds for the error.”

Even if this is literally true, it is not as bad as one might think from reading just these few words.

Well, there is no free lunch in this world: if we want additional information, we need space for it, right? This is true for the invention at hand as well: “The present invention makes a slight decrease in the maximum number of bits available for the significand for real number representation in order to accommodate space for error information”.

A time penalty also exists: “the present invention provides error computation in real time with, at most, a small increase in computation time”.

Therefore, as far as space and time are concerned, the situation seems to me not much different from the method I proposed long ago. While it is true that a software implementation of my method would certainly slow down computations significantly, this should not be true of a specialized hardware implementation; but this is not quantifiable, because no study on the subject has been carried out.

Finally, regarding error bounds: it is true that my method does not provide error bounds; however, it can ensure computations stay within given error bounds! Which, in my opinion, is what really matters.

In my method, when an ill-conditioned problem is detected, precision is automatically extended as much as required in order to respect the given error bounds. It is not clear, by contrast, what is done here once a loss of precision is detected.
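The adaptive strategy described above can be illustrated with Python's decimal module: re-evaluate the expression at increasing precision until two successive results agree to the requested number of digits. This is only a schematic sketch of the idea, not the method of the arXiv paper; the function name and parameters are my own assumptions.

```python
from decimal import Decimal, getcontext

def eval_to_tolerance(f, tol_digits=15, start_prec=17, max_prec=200):
    """Re-evaluate f() at growing precision until two successive results
    agree to tol_digits significant digits. A schematic illustration of
    adaptive precision extension, not the paper's actual mechanism."""
    prec, prev = start_prec, None
    while prec <= max_prec:
        getcontext().prec = prec
        cur = f()
        if prev is not None and prev != 0:
            if abs((cur - prev) / prev) < Decimal(10) ** (-tol_digits):
                return cur
        prev, prec = cur, prec * 2
    raise ArithmeticError("tolerance not reached within max_prec digits")

# Ill-conditioned example: (1 + 1e-30) - 1, hopeless at double precision.
x = eval_to_tolerance(lambda: (Decimal(1) + Decimal("1e-30")) - Decimal(1))
```

At 17 digits the subtraction returns 0; doubling the precision twice recovers the exact answer 1e-30, at which point two successive results agree and the loop stops.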

However, this is no longer my business, and I don't have time to go into the details of the patent.

To conclude, I wish Prof. Jorgensen all the best and success with his patent. I would appreciate it, however, if he would recognize that my method is not that bad; it is just another way of doing things, with its cons but also its pros.

I am wondering if you have published this method? A quick Google search of your name and "floating point" did not yield any results. I would be interested in more details of your solution to this problem, though it appears to me to be of the class of variable-length floating point that, in general, has severe issues with cancellation error.

The patent office does not evaluate the workability of the patents submitted, but rather whether they are new and unique. My patent meets those requirements.

I was concerned about his solution to floating point error when I heard of his book; I purchased and read it immediately and commented on it in several venues.

Though his book was widely read, his solution to floating point error has not been accepted in the industry. Who manufactures Unum processors? Who even uses them to do real, substantive work?

Professor Kahan has published counterarguments, and though their debate was recorded poorly for YouTube, Kahan's arguments are cogent.

My explanation to Professor Kahan as to why Unums will not work is as follows: every Unum calculation has the potential to produce a data value of a different data type. I am using the common definition of "data type" here, meaning the format of the representation of data. The proposed solution to this issue, establishing sets for each data type, will not work either, because the number of data types grows exponentially. And, as with every other method of representing floating point error, cancellation is represented badly or even ignored.

I stand by the claims of my patent: it uniquely provides an efficient, real-time solution to bounding the representation of real numbers in computers, one that accommodates both cancellation and rounding error and provides real-time notification of loss of the defined required accuracy.

If you have read the patent in detail to see how it functions and have comments on that design, I would appreciate them.
