
When we write in decimal, every fraction (specifically, every terminating decimal) is a rational number of the form

x / (2^n × 5^m)

In binary, we only get the 2^n term, that is:

x / 2^n
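This terminating-fraction rule is easy to check mechanically. Here is a short Python sketch (the helper name `terminates_in_base` is mine, purely illustrative): a reduced fraction terminates in a given base exactly when every prime factor of its denominator divides the base, so we divide out the base's primes and see if anything is left.

```python
from fractions import Fraction

def terminates_in_base(frac, base_primes):
    """Return True if frac has a terminating expansion in a base whose
    prime factors are base_primes (e.g. (2, 5) for decimal, (2,) for binary)."""
    d = Fraction(frac).denominator
    for p in base_primes:
        while d % p == 0:
            d //= p
    return d == 1

print(terminates_in_base(Fraction(1, 8), (2, 5)))   # True: 1/8 = 0.125 in decimal
print(terminates_in_base(Fraction(1, 3), (2, 5)))   # False: 1/3 repeats in decimal
print(terminates_in_base(Fraction(1, 10), (2,)))    # False: 1/10 repeats in binary
```

Passing `(2, 5)` checks base 10; passing just `(2,)` checks base 2, which is why 1/10 terminates in one and repeats in the other.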


I love the Pizza answer by Chris, because it describes the actual problem, not just the usual handwaving about "inaccuracy". If FP were simply "inaccurate", we could fix that and would have done it decades ago. The reason we haven't is because the FP format is compact and fast and it's the best way to crunch a lot of numbers. Also, it's a legacy from the space age and arms race and early attempts to solve big problems with very slow computers using small memory systems. (Sometimes, individual magnetic cores for 1-bit storage, but that's another story.)

Floating point arithmetic is exact. Unfortunately, it doesn't match up well with our usual base-10 number representation, so it turns out we are often giving it input that is slightly off from what we wrote.

Even simple numbers like 0.01, 0.02, 0.03, 0.04 ... 0.24 are not representable exactly as binary fractions, even if you had thousands of bits of precision in the mantissa, even if you had millions. If you count off in 0.01 increments, not until you get to 0.25 will you get the first fraction (in this sequence) representable in both base 10 and base 2. But if you tried that using FP, your 0.01 would have been slightly off, so the only way to add 25 of them up to a nice exact 0.25 would have required a long chain of causality involving guard bits and rounding. It's hard to predict, so we throw up our hands and say "FP is inexact", even though that's more of a result than a cause.
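You can see the stored values directly with Python's `fractions.Fraction`, which recovers the exact rational number a float actually holds (a quick illustration, not part of the original argument):

```python
from fractions import Fraction

# Fraction(float) gives the exact binary rational the float stores.
print(Fraction(0.01) == Fraction(1, 100))  # False: 0.01 is stored slightly off
print(Fraction(0.25) == Fraction(1, 4))    # True: 0.25 is exact in binary
print(Fraction(0.01))                      # the exact value actually stored
```

The last line prints a huge fraction with a power-of-two denominator: the nearest binary value to 1/100, not 1/100 itself.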

The end result is that we constantly ask the FP hardware to do something that seems simple in base 10 but is a repeating fraction in base 2.
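You can watch the repetition happen by doing the base-2 long division yourself (a small illustrative sketch; the digit count of 12 is arbitrary):

```python
# Base-2 long division: emit the binary digits of 1/10 after the point.
num, den = 1, 10
digits = []
for _ in range(12):
    num *= 2                  # shift one binary place to the left
    digits.append(num // den)
    num %= den
print("0." + "".join(map(str, digits)) + "...")
```

This prints `0.000110011001...`: after the leading zeros, the block `0011` repeats forever, which is why 0.1 (and 0.01, 0.02, ...) can never be stored exactly in any finite number of bits.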

So in decimal, we can't represent 1/3. Because base 10 includes 2 as a prime factor, every number we can write as a binary fraction can also be written as a base 10 fraction. However, since we count in fractions of 10, hardly anything we write as a base 10 fraction is representable in binary. In the range from 0.01, 0.02, 0.03 ... 0.99, only three numbers can be represented in our FP format: 0.25, 0.50, and 0.75, because they are 1/4, 1/2, and 3/4, all numbers whose denominators use only the 2^n term.
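That claim is easy to verify: k/100 terminates in binary exactly when its reduced denominator is a power of two. A quick Python check (illustrative, not from the original answer):

```python
from fractions import Fraction

exact = []
for k in range(1, 100):
    f = Fraction(k, 100)      # k/100 reduced to lowest terms
    d = f.denominator
    if d & (d - 1) == 0:      # power-of-two denominator => terminates in binary
        exact.append(f)
print(exact)  # [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
```

Since 100 = 2² × 5², the 5² can only cancel when k is a multiple of 25, which is why exactly three survivors appear: 25/100, 50/100, and 75/100.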

In base 10 we can't represent 1/3. But in binary, we can't do 1/10 or 1/3.

So while every binary fraction can be written in decimal, the reverse is not true. And in fact most decimal fractions repeat in binary.