
I'm reading David Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" paper, and I'm confused by one of the inequalities, (2):

(1/2)B^-p <= (1/2)ulp <= (B/2)B^-p

I follow the reasoning behind the left-hand side and the right-hand side, but not the middle. As I understand it, ULPs are basically what you get if you read the bits of two floats of the same sign as two ints and subtract them. So the first layer of confusion is that the sides of this inequality don't have the same units: ULPs versus significand digits. But I'm guessing he means the significand you'd get by translating 1 ULP into a significand with just a 1 in the last place, then dividing that by 2. So then I thought about how to express that number in terms of B.

For a significand d.dd...dd, I would expect a significand of 1 ulp to be 0.00...01 (I'm using Goldberg's confusing digit notation here; even though those d's aren't subscripted, they're distinct). Since p is the number of digits in the significand, and we're looking to set the last digit to 1 (or, for half an ulp, to B/2), I'd expect:

1 ulp = 1.00 * B^(-p+1)

The +1 is there because the digit to the left of the radix point counts as one of the p digits. If I substitute that into the original inequality, I get:

(1/2)B^-p <= (1/2)B^(-p+1) <= (B/2)B^-p
(1/2)B^-p <= (1/2)B^(-p+1) <= ((B*B^(-p))/2)
(1/2)B^-p <= (1/2)B^(-p+1) <= ((B^(-p+1))/2)
(1/2)B^-p <= (1/2)B^(-p+1) = (1/2)B^(-p+1)

And then the middle and the right-hand side are always equal, which makes me think I'm doing something wrong. Any ideas?
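As a sanity check on the 1 ulp = B^(-p+1) step (scaled by B^e for the exponent), here is a quick sketch assuming IEEE-754 doubles, i.e. B = 2 and p = 53, using Python's math.ulp (available in Python 3.9+):

```python
import math

# For IEEE-754 doubles: B = 2, p = 53.
B, p = 2, 53

# math.ulp(x) returns the value of the least significant significand
# bit of x. For x = 1.0 (significand 1.00...0, exponent e = 0),
# 1 ulp should be B^(-p+1) = 2^-52.
assert math.ulp(1.0) == B ** (-p + 1)

# Scaled by the exponent: for x = 8.0 (e = 3),
# 1 ulp = B^(-p+1) * B^e = 2^-49.
assert math.ulp(8.0) == B ** (-p + 1) * B ** 3
```

Both assertions pass, which agrees with the derivation above: as a pure significand quantity, 1 ulp really is B^(-p+1).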

  • All the inequality is trying to say is that the relative error for a correctly rounded operation (accurate to 1/2 ULP) is between (1/2)B^-p and (B/2)B^-p. The notation is very informal. Do not substitute anything for this “ulp” in the middle of it: 1) ULP is a function (see ens-lyon.fr/LIP/Pub/Rapports/RR/RR2005/RR2005-09.pdf for more considerations than you wanted to see); 2) the ULP notation usually expresses absolute error, not relative. Aug 4, 2016 at 6:56
  • @PascalCuoq Why doesn't the notation work, though? I can understand why the error is bounded by the left- and right-hand sides; I can separately understand defining ULP as B^(-p+1) * B^e; and I can separately understand why error in general would be bounded by (1/2) ulp. So it seems to make sense, even formally, to write this inequality, at least if all the terms were multiplied by B^e. But those cancel, and you're back to the original inequality, which, when I work it out, has the middle and right sides equal, so I'm still left thinking I'm missing something. Aug 9, 2016 at 15:34
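The point in the first comment can be checked numerically: a fixed absolute error of 1/2 ulp corresponds to a relative error that wobbles between (1/2)B^-p and (B/2)B^-p as the significand ranges over [1, B). A small Python sketch, again assuming IEEE-754 doubles (B = 2, p = 53) and math.ulp (Python 3.9+):

```python
import math

B, p = 2, 53

# Half an ulp is the worst-case absolute error of correct rounding;
# dividing by the value itself gives the relative error it represents.
for x in (1.0, 1.5, 1.9999999999):  # significands spanning [1, B)
    rel = 0.5 * math.ulp(x) / x
    # The relative error lies between (1/2)B^-p and (B/2)B^-p.
    assert 0.5 * B ** -p <= rel <= (B / 2) * B ** -p
```

For x = 1.0 the relative error hits the upper bound (B/2)B^-p exactly (which is why the middle and right of inequality (2) coincide at that point); as x approaches B, it shrinks toward the lower bound (1/2)B^-p.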
