
Consider the following code:

0.1 + 0.2 == 0.3  ->  false
0.1 + 0.2         ->  0.30000000000000004

Why do these inaccuracies happen?

  • 180
    Floating point variables typically have this behaviour. It's caused by how they are stored in hardware. For more info check out the Wikipedia article on floating point numbers.
    – Ben S
    Feb 25, 2009 at 21:41
  • 79
    JavaScript treats decimals as floating point numbers, which means operations like addition might be subject to rounding error. You might want to take a look at this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
    – matt b
    Feb 25, 2009 at 21:42
  • 7
    Just for information, ALL numeric types in JavaScript are IEEE 754 doubles.
    Apr 11, 2010 at 13:01
  • 12
    Because JavaScript uses the IEEE 754 standard for math, it makes use of 64-bit floating-point numbers. This causes precision errors when doing floating-point (decimal) calculations; in short, it is because computers work in base 2 while decimal is base 10.
    May 7, 2018 at 4:57
  • 4
    Simple explanation: 1/10 is periodic in binary (0.0 0011 0011 0011...) just like 1/3 is periodic in decimal (0.333...), so 1/10 can't be accurately represented by a floating-point number. (A short sketch after this comment list illustrates this.)
    – ikegami
    Jan 7, 2020 at 19:14
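
To make the comments above concrete, here is a minimal Python sketch (an editorial illustration; Python, like JavaScript, stores these values as IEEE 754 binary64 doubles). Converting a float to decimal.Decimal exposes the exact value that is actually stored:

from decimal import Decimal

# Decimal(x) shows the exact binary64 value behind each literal.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125

# The sum and the literal 0.3 round to two different doubles, hence:
print(0.1 + 0.2 == 0.3)    # False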

31 Answers


I just saw this interesting issue around floating points:

Consider the following results:

error = (2**53+1) - int(float(2**53+1))
>>> (2**53+1) - int(float(2**53+1))
1

We can clearly see a breakpoint at 2**53 + 1: everything works fine up to 2**53.

>>> (2**53) - int(float(2**53))
0
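
As a side note (not part of the original answer), a short Python sketch can make the breakpoint explicit: with a 53-bit significand, every integer up to 2**53 is exactly representable, but above that the spacing between adjacent doubles grows to 2, so 2**53 + 1 has no exact representation and rounds back to 2**53 (math.ulp requires Python 3.9+):

import math

print(float(2**53) == 2**53)          # True: 2**53 is exactly representable
print(float(2**53 + 1) == 2**53 + 1)  # False: 2**53 + 1 is not
print(float(2**53 + 1) == 2**53)      # True: it rounds back down to 2**53

# Above 2**53 the gap between adjacent doubles is 2, so odd integers are lost.
print(math.ulp(float(2**53)))         # 2.0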


This happens because of the IEEE 754 double-precision binary floating-point format (binary64).

From the Wikipedia page for Double-precision floating-point format:

Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having:

  • Sign bit: 1 bit
  • Exponent: 11 bits
  • Significand precision: 53 bits (52 explicitly stored)
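
As an illustration (my own sketch, not from the answer), Python's struct module can expose those three fields for any double; binary64_fields is just a name I made up for this helper:

import struct

def binary64_fields(x):
    """Split a double into its IEEE 754 binary64 fields: sign, biased exponent, fraction."""
    bits = int.from_bytes(struct.pack(">d", x), "big")
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 explicitly stored significand bits
    return sign, exponent, fraction

print(binary64_fields(0.1))  # (0, 1019, 2702159776422298) -- 0.1 is stored as 0x3FB999999999999A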


The real value assumed by a given 64-bit double-precision datum with a given biased exponent $e$ and a 52-bit fraction is

$(-1)^{\text{sign}}\,(1.b_{51}b_{50}\ldots b_{0})_2 \times 2^{e-1023}$

or

$(-1)^{\text{sign}}\left(1 + \sum_{i=1}^{52} b_{52-i}\,2^{-i}\right) \times 2^{e-1023}$
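
To connect that formula back to the bit fields, here is a small sketch of my own (normal numbers only; subnormals, infinities, and NaNs are ignored, and binary64_value is just an illustrative helper name). It evaluates $(-1)^{\text{sign}}(1 + \text{fraction}/2^{52}) \times 2^{e-1023}$ exactly with fractions.Fraction and confirms that the stored value of 0.1 is close to, but not equal to, 1/10:

import struct
from fractions import Fraction

def binary64_value(x):
    """Evaluate (-1)**sign * (1 + fraction/2**52) * 2**(e - 1023) exactly (normal numbers only)."""
    bits = int.from_bytes(struct.pack(">d", x), "big")
    sign = bits >> 63
    e = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return (-1) ** sign * (1 + Fraction(fraction, 2**52)) * Fraction(2) ** (e - 1023)

print(binary64_value(0.1))                     # 3602879701896397/36028797018963968
print(float(binary64_value(0.1)))              # 0.1
print(binary64_value(0.1) == Fraction(1, 10))  # False: the stored double is not exactly 1/10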

Thanks to @a_guest for pointing that out to me.

