I'm reading David Goldberg's *What Every Computer Scientist Should Know About Floating-Point Arithmetic*, and I'm confused by inequality (2):

```
(1/2)B^-p <= (1/2)ulp <= (B/2)B^-p
```

I follow the reasoning behind the left-hand side and the right-hand side, but not the middle. As I understand it, ULPs are basically what you get if you read the bits of two floats of the same sign as two ints and subtract them. So the first layer of confusion is that the terms of this inequality don't have the same units: ULPs versus plain significand digits. But I'm guessing he means that we translate 1 ulp into a significand with just a 1 in the last place, and divide that by 2. So then I thought about how to express that number in terms of `B`.
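That "bits as ints" mental model is easy to sanity-check in binary64 (this check is mine, not from the paper):

```python
import struct

def bits(x):
    """Reinterpret a double's bit pattern as a signed 64-bit int."""
    (b,) = struct.unpack("<q", struct.pack("<d", x))
    return b

a = 1.0
b = 1.0 + 2**-52  # the next representable double after 1.0
# Two adjacent doubles of the same sign differ by exactly 1 in their
# bit patterns, i.e. they are 1 ulp apart.
print(bits(b) - bits(a))  # 1
```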

For a significand `d.dd...dd`, I would expect the significand of 1 ulp to be `0.00...01` (I'm using Goldberg's confusing digit notation here; even though those d's aren't subscripted, they're distinct). Since `p` is the number of digits in the significand, and we're looking to set the last digit to 1 (or `B/2` in general), I'd expect:

```
1ulp = 1.00 * B^(-p+1)
```

The `+1` is there because the digit to the left of the radix point counts as part of `p`. If I substitute that into the original inequality, I get:

```
(1/2)B^-p <= (1/2)B^(-p+1) <= (B/2)B^-p
(1/2)B^-p <= (1/2)B^(-p+1) <= ((B*B^(-p))/2)
(1/2)B^-p <= (1/2)B^(-p+1) <= ((B^(-p+1))/2)
(1/2)B^-p <= (1/2)B^(-p+1) = (1/2)B^(-p+1)
```
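Here's that last step checked in exact rational arithmetic (again my own check), and the middle and right terms really are identical for every base and precision I try:

```python
from fractions import Fraction

# Is (1/2) * B^(-p+1) always equal to (B/2) * B^(-p)?
for B in (2, 10, 16):
    for p in (3, 24, 53):
        middle = Fraction(1, 2) * Fraction(B) ** (-p + 1)
        right = Fraction(B, 2) * Fraction(B) ** (-p)
        assert middle == right
print("middle == right for every (B, p) tried")
```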

And then the middle and the right-hand side are always equal, which makes me think I'm doing something wrong. Any ideas?
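For what it's worth, my `1 ulp = B^(-p+1)` reading (for a significand in `[1, B)`, i.e. `e = 0`) does match what Python reports for binary64, where `B = 2` and `p = 53`:

```python
import math

B, p = 2, 53  # binary64
# For x in [1, 2) the exponent is 0, so 1 ulp should be B^(-p+1) = 2^-52.
print(math.ulp(1.0) == B ** (-p + 1))  # True
```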

Separately, I understand that a full ulp, with the exponent included, is `B^(-p+1) * B^e`, and I can see why error in general would be bounded by (1/2) ulp. So it makes sense, at least formally, to write this inequality if all the terms in this case were multiplied by `B^e`. But those factors cancel and you're back to the original inequality, which, when I work it out, again has the middle and right-hand terms equal, still leaving me thinking I'm missing something.
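To make the `B^e` cancellation concrete (another binary64 check of my own): 1 ulp scales with the exponent, so dividing it by `B^e` gives back the constant `B^(-p+1)` in every binade:

```python
import math

# For x = 2^e (the smallest double in its binade), ulp(x) is 2^(e-52),
# so ulp(x) / 2^e is the constant B^(-p+1) = 2^-52 regardless of e.
for e in (-5, 0, 1, 10, 100):
    x = 2.0 ** e
    assert math.ulp(x) / x == 2 ** -52
print("ulp(x) / B^e is constant across exponents")
```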