# Timeline for Is floating point math broken?

### Current License: CC BY-SA 4.0

75 events
Mar 23 at 11:26 comment I was just looking for a quick workaround to sum up a field in a JSON array, so I thought it might help someone else with an example: stackblitz.com/edit/typescript-hb2dpm
Dec 12, 2021 at 8:15 comment The curse of decimal education: why don't you try 0.125+0.25 or 0.0625+0.125? (I'm a metric system [hence decimal] user, but if I had a choice, I would use the metric units with the dyadic system that is [partially] used in imperial units: 1/2, 1/4, 1/8, etc.) Not only is the arithmetic easier, but it's also easier to visualise the outcome of halving/doubling than that of dividing/multiplying by 10.
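The comment's point is easy to check: dyadic fractions (1/2, 1/4, 1/8, ...) have terminating binary expansions, so their sums come out exact in IEEE 754, unlike sums of decimal fractions such as 0.1. A quick sketch in JavaScript:

```javascript
// Dyadic fractions (negative powers of two) are stored exactly,
// so their sums are exact:
console.log(0.125 + 0.25);            // 0.375
console.log(0.0625 + 0.125);          // 0.1875
console.log(0.125 + 0.25 === 0.375);  // true

// Decimal fractions like 0.1 generally are not:
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);       // false
```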
Sep 16, 2021 at 0:28 answer timeline score: 7
Jan 23, 2021 at 7:17 comment Wrote a blog post that explores this: Floating Point Basics. Shorter than the otherwise great "What every computer scientist should know about floating-point arithmetic" already mentioned.
Aug 20, 2020 at 15:38 answer timeline score: 4
Aug 3, 2020 at 15:03 answer timeline score: 7
Jan 7, 2020 at 19:14 comment Simple explanation: 1/10 is periodic in binary (0.0 0011 0011 0011...) just like 1/3 is periodic in decimal (0.333...), so 1/10 can't be accurately represented by a floating point number.
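The repeating binary expansion described above can be inspected directly: `Number.prototype.toString(2)` prints the bits of the stored (already rounded) double, so you can see the `0011` period and where it gets cut off at 53 significant bits:

```javascript
// 1/10 has an infinite repeating binary expansion, so the stored
// double is the nearest 53-bit approximation; the printed bits
// start with the 0011 period and end with a rounded final digit:
console.log((0.1).toString(2));

// A base can only finitely represent fractions whose denominator's
// prime factors divide the base: 1/2 is exact in binary,
// while 1/10 (factor 5) and 1/3 (factor 3) are not.
console.log((0.5).toString(2)); // "0.1"
```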
Dec 25, 2019 at 19:42 comment
S Nov 4, 2019 at 13:05 history bounty ended
S Nov 4, 2019 at 13:05 history notice removed
S Nov 3, 2019 at 12:19 history bounty started
S Nov 3, 2019 at 12:19 history notice added Reward existing answer
Oct 5, 2019 at 21:46 answer timeline score: 2
Sep 27, 2019 at 18:30 comment A recent (September 2019) article about this: "Numbers limit how accurately digital computers model chaos".
May 8, 2019 at 18:43 comment @BobJarvis don't be disingenuous: COBOL certainly has the ability to add decimal numbers.
May 8, 2019 at 18:37 comment @RonJohn: in that case, one can generally say about any language that as long as you don't use a feature that has special considerations then you don't have to consider them. :-)
May 7, 2019 at 19:45 answer timeline score: 4
Apr 29, 2019 at 14:53 history rollback
Rollback to Revision 16
Apr 25, 2019 at 20:35 history edited
Apr 25, 2019 at 19:39 history edited
edited title
Apr 22, 2019 at 1:02 answer timeline score: 5
Feb 21, 2019 at 14:53 comment @Ender This is not a contradiction, IEEE 754 doubles have a 52-bit mantissa, which means that every integer with absolute value of at most 2^53 can be represented exactly, and 10^15 is less than 2^53.
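The 2^53 bound mentioned in that comment is easy to verify: JavaScript exposes it as `Number.MAX_SAFE_INTEGER` (2^53 − 1), and above 2^53 adjacent integers stop being distinguishable:

```javascript
// Doubles have a 52-bit stored mantissa (53 significant bits),
// so every integer up to 2^53 in magnitude is represented exactly:
console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991, i.e. 2^53 - 1
console.log(1e15 + 1 === 1000000000000001);  // true — 10^15 < 2^53

// Beyond 2^53, the gap between representable doubles exceeds 1,
// so 2^53 + 1 rounds back to 2^53:
console.log(2 ** 53 === 2 ** 53 + 1);        // true
```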
Dec 20, 2018 at 18:27 answer timeline score: 5
Dec 10, 2018 at 19:45 review
Dec 14, 2018 at 0:00
Sep 30, 2018 at 4:05 review
Oct 4, 2018 at 0:00
Aug 8, 2018 at 8:47 answer timeline score: 7
Aug 7, 2018 at 9:34 answer timeline score: 44
Jun 20, 2018 at 15:00 comment @JerryJeremiah most computer languages define the semantics of their typing model very precisely and try to limit any room for interpretation. Take JavaScript, for instance: its number type is specified as the 64-bit IEEE 754 format. 64 bits, not 80 bits. It doesn't matter to anyone implementing JavaScript whether the microprocessor is capable of handling wider floating-point types; what matters is the specified type, which is 64 bits. The same reasoning can usually be followed for all types in all technologies you're using. If you want 80-bit FP, you need a native extension.
Jun 19, 2018 at 22:07 comment This isn't an answer, but I don't see any other answers that say this: x86 can use extended precision for floating-point operations and then round the result to 64 bits for a double. That means the answer should be more precise because of less intermediate rounding. en.wikipedia.org/wiki/…
May 26, 2018 at 11:59 history edited
fine tuning
May 24, 2018 at 10:30 review
May 24, 2018 at 11:50
May 7, 2018 at 4:57 comment Because JavaScript uses the IEEE 754 standard for math, it makes use of 64-bit floating-point numbers. This causes precision errors when doing floating-point (decimal) calculations — in short, because computers work in base 2 while decimal is base 10.
Dec 22, 2017 at 16:39 answer timeline score: 4
Dec 19, 2017 at 22:37 answer timeline score: 7
Nov 6, 2017 at 6:16 history rollback
Rollback to Revision 13
Nov 6, 2017 at 6:12 history edited
edited tags
Jan 7, 2017 at 16:06 history edited
Copy edited.
Dec 29, 2016 at 10:29 answer timeline score: 10
May 1, 2016 at 21:40 review
May 5, 2016 at 0:09
Mar 18, 2016 at 0:38 answer user1641172 timeline score: 17
Mar 16, 2016 at 5:27 answer timeline score: 65
Feb 2, 2016 at 23:49 answer timeline score: 33
Dec 21, 2015 at 11:15 answer timeline score: 17
Oct 5, 2015 at 15:55 answer timeline score: 10
Aug 21, 2015 at 14:53 answer timeline score: 19
Feb 23, 2015 at 17:15 answer timeline score: 141
Jan 3, 2015 at 12:12 answer timeline score: 31
Dec 1, 2014 at 10:47 history edited
Added JavaScript formatting to make it look prettier.
Nov 20, 2014 at 2:39 answer timeline score: 359
Oct 5, 2014 at 18:39 answer timeline score: 32
Oct 2, 2014 at 16:39 history edited
edited tags
Feb 20, 2014 at 1:20 history edited
edited tags
Feb 19, 2014 at 22:32 history edited
Updated the title to be more language agnostic, since these answers are great for other languages, too.
Oct 14, 2013 at 16:45 answer timeline score: 18
Apr 18, 2013 at 19:32 history edited
deleted 29 characters in body
Apr 18, 2013 at 11:52 answer timeline score: 676
Dec 7, 2012 at 6:24 history edited
edited title
Aug 20, 2012 at 16:39 history protected
Aug 1, 2012 at 7:02 answer timeline score: 18
Dec 26, 2011 at 6:51 answer timeline score: 35
Aug 10, 2010 at 6:32 history edited
Aug 1, 2010 at 23:26 comment @Gary True, although you are guaranteed to have perfect integer precision for integers up to 15 digits, see hunlock.com/blogs/The_Complete_Javascript_Number_Reference
Aug 1, 2010 at 23:00 history edited
code formatting
Apr 11, 2010 at 13:01 comment Just for information: all numeric values in JavaScript are IEEE 754 doubles.
Apr 9, 2010 at 12:25 answer timeline score: 137
Apr 9, 2010 at 12:20 history edited
edited tags
Feb 25, 2009 at 21:53 history edited
edited tags
Feb 25, 2009 at 21:47 history edited
Spelling, formatting
Feb 25, 2009 at 21:43 answer timeline score: 580
Feb 25, 2009 at 21:42 answer timeline score: 51
Feb 25, 2009 at 21:42 comment JavaScript treats decimals as floating point numbers, which means operations like addition might be subject to rounding error. You might want to take a look at this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
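Two common workarounds for the rounding error that comment describes, sketched in JavaScript (the `nearlyEqual` helper is a hypothetical name, not a built-in): compare with a tolerance instead of `===`, or do money math in integer cents:

```javascript
// 1) Tolerance comparison instead of exact equality:
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}
console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true

// 2) Keep currency in integer cents, where arithmetic is exact:
const cents = Math.round(0.1 * 100) + Math.round(0.2 * 100);
console.log(cents / 100);                 // 0.3
```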
Feb 25, 2009 at 21:41 comment Floating point variables typically have this behaviour. It's caused by how they are stored in hardware. For more info check out the Wikipedia article on floating point numbers.
Feb 25, 2009 at 21:41 answer timeline score: 227
Feb 25, 2009 at 21:40 answer timeline score: 2695
Feb 25, 2009 at 21:39 history asked