Consider the following code:

```
0.1 + 0.2 == 0.3 -> false
```

```
0.1 + 0.2 -> 0.30000000000000004
```

Why do these inaccuracies happen?
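One way to see why: when `decimal.Decimal` is constructed from a float, it prints the exact base-2 value the float actually stores. A minimal sketch using only the standard library:

```python
from decimal import Decimal

# Decimal(float) reveals the exact binary value behind each literal.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # slightly above 0.2
print(Decimal(0.3))  # slightly below 0.3

# The stored sums differ, so the equality check fails.
print(0.1 + 0.2 == 0.3)  # False
```

None of the three literals is stored exactly, and the two rounding errors don't cancel, which is why the comparison returns `false`.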

Consider the following results:

```
>>> error = (2**53+1) - int(float(2**53+1))
>>> error
1
```

We can clearly see a break point at `2**53 + 1`; everything works fine up to `2**53`:

```
>>> (2**53) - int(float(2**53))
0
```
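The boundary can be probed directly. A small sketch, assuming standard CPython floats (which are IEEE 754 binary64):

```python
# 2**53 is the limit below which every integer is exactly
# representable in a 64-bit float.
n = 2**53

print(float(n) == n)          # True  - exactly representable
print(float(n + 1) == n + 1)  # False - no double exists for 2**53 + 1
print(float(n + 1) == n)      # True  - it rounds back to 2**53
print(float(n + 2) == n + 2)  # True  - even integers are still exact
```

Above `2**53`, the spacing between consecutive doubles becomes 2, so odd integers like `2**53 + 1` round to a neighbor.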

**This happens because of the double-precision binary format: IEEE 754 double-precision binary floating-point (binary64).**

From the Wikipedia page for Double-precision floating-point format:

> Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having:
>
> - Sign bit: 1 bit
> - Exponent: 11 bits
> - Significand precision: 53 bits (52 explicitly stored)
>
> The real value assumed by a given 64-bit double-precision datum with a given biased exponent `e` and a 52-bit fraction is
>
> `(-1)^sign × (1.b51 b50 ... b0)_2 × 2^(e - 1023)`
>
> or
>
> `(-1)^sign × (1 + Σ_{i=1..52} b_{52-i} × 2^(-i)) × 2^(e - 1023)`

*Thanks to @a_guest for pointing that out to me.*

Python uses 64-bit floating-point numbers. This causes precision errors when doing floating-point (decimal) calculations; in short, it comes down to computers working in base 2 while decimal is base 10.
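Since the errors come from the base-2 representation, the usual workarounds are to compare with a tolerance or to compute in base 10 (or with exact rationals). A sketch using only the standard library:

```python
import math
from decimal import Decimal
from fractions import Fraction

# Compare floats with a tolerance instead of ==.
print(math.isclose(0.1 + 0.2, 0.3))  # True

# decimal works in base 10, so 0.1 is exact (construct from strings!).
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True

# fractions performs exact rational arithmetic.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Note that `Decimal(0.1)` (from a float) inherits the binary error; only `Decimal('0.1')` (from a string) is exact.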