Floating point arithmetic is exact. Unfortunately, it doesn't match up well with our usual base-10 number representation, so it turns out we are often giving it input that is slightly off from what we wrote.
Even simple numbers like 0.01, 0.02, 0.03, 0.04 ... 0.24 are not representable exactly as binary fractions, even if you had thousands of bits of precision in the mantissa, even if you had millions. If you count off in 0.01 increments, not until you get to 0.25 do you reach the first fraction in this sequence that is representable in both base 10 and base 2. But if you tried that using FP, your 0.01 would have been slightly off, so the only way to add 25 of them up to a nice exact 0.25 would be through a long chain of causality involving guard bits and rounding. That chain is hard to predict, so we throw up our hands and say "FP is inexact", even though that's more of a result than a cause.
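This is easy to verify with Python's stdlib fractions module, which can recover the exact binary rational a float actually stores (a small sketch; any language that exposes the raw double bits would do):

```python
from fractions import Fraction

# Fraction(float) recovers the exact binary rational the double stores.
stored = Fraction(0.01)
print(stored)                            # a dyadic rational, not 1/100
print(stored == Fraction(1, 100))        # False: 0.01 is not representable
print(Fraction(0.25) == Fraction(1, 4))  # True: 0.25 is a binary fraction
```

The denominator of `stored` is a power of 2, as it must be for any double, and no power-of-2 denominator can ever reduce to the 100 in 1/100.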
The end result is that we constantly ask the FP hardware to do something that seems simple in base 10 but is a repeating fraction in base 2.
Because base 10 includes 2 as a prime factor, every number we can write as a binary fraction can also be written as a base-10 fraction. However, since we count in fractions of 10, hardly anything we write as a base-10 fraction is representable in binary. In the range 0.01, 0.02, 0.03 ... 0.99, only three numbers can be represented exactly in our FP format: 0.25, 0.50, and 0.75, because they are 1/4, 1/2, and 3/4, all fractions whose denominators are powers of 2.
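That count is mechanically checkable. A small sketch (using Python's stdlib fractions module; the loop bounds are just the 0.01 ... 0.99 range from the text) keeps only the values whose stored double equals the true fraction:

```python
from fractions import Fraction

# Which of 0.01, 0.02, ..., 0.99 does a double store exactly?
# Fraction(k / 100) is the exact value the hardware keeps;
# Fraction(k, 100) is the value we meant.
exact = [k / 100 for k in range(1, 100)
         if Fraction(k / 100) == Fraction(k, 100)]
print(exact)   # [0.25, 0.5, 0.75]
```

Exactly the three survivors the text predicts: the ones whose reduced denominator is a power of 2.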
In base 10 we can't represent 11/33, i.e. 1/3. And in binary, we can't represent 11/1010 (that's 3/10 written with binary digits), nor 1/3 either.
Worse than that: while every binary fraction can be written in decimal, the reverse is not true. In fact, most decimal fractions repeat in binary.
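The asymmetry is directly visible: converting a double to Python's decimal.Decimal prints its exact, finite decimal expansion, while most decimal literals get rounded on the way into binary (a sketch using the stdlib; the specific digit string is the well-known exact value of the double nearest 0.1):

```python
from decimal import Decimal

# Every binary fraction terminates in decimal: the double nearest to 0.1,
# printed exactly, is a finite (if long) decimal.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The reverse direction fails: the decimal literal 0.1 does not survive
# the round trip into binary.
print(Decimal(0.1) == Decimal("0.1"))    # False

# But a genuine binary fraction round-trips perfectly.
print(Decimal(0.25) == Decimal("0.25"))  # True
```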