Timeline for "Is floating point math broken?"

Current License: CC BY-SA 4.0

75 events
when · what · by · license · comment
Mar 23 at 11:26 comment added MagicLuckyCat I was just looking for a quick workaround to sum up a field in a JSON array, so I thought it might help someone else with an example: stackblitz.com/edit/typescript-hb2dpm
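(A minimal sketch of that kind of workaround — the items array and the field name price are made up for illustration, not taken from the linked StackBlitz: for money-like values, summing in integer cents avoids accumulating binary rounding error.)

    // Hypothetical data; the field name "price" is illustrative.
    const items = [{ price: 0.1 }, { price: 0.2 }, { price: 0.3 }];

    // Naive summation accumulates binary rounding error:
    const naive = items.reduce((sum, item) => sum + item.price, 0);
    console.log(naive); // 0.6000000000000001

    // Summing in integer cents stays exact for money-like values:
    const cents = items.reduce((sum, item) => sum + Math.round(item.price * 100), 0);
    console.log(cents / 100); // 0.6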
Dec 12, 2021 at 8:15 comment added Oskar Limka The curse of decimal education: why don't you try 0.125+0.25 or 0.0625+0.125? (I'm a metric-system [hence decimal] user, but if I had a choice, I would use metric units with the dyadic system that is [partially] used in imperial units: 1/2, 1/4, 1/8, etc.) Not only is the arithmetic easier, it's also easier to visualise the outcome of halving/doubling than that of dividing/multiplying by 10.
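(A quick console check confirms the point: dyadic fractions, whose denominators are powers of two, add up exactly in binary floating point, while decimal tenths do not.)

    console.log(0.125 + 0.25);           // 0.375  (exact: both terminate in base 2)
    console.log(0.0625 + 0.125);         // 0.1875 (exact)
    console.log(0.125 + 0.25 === 0.375); // true
    console.log(0.1 + 0.2 === 0.3);      // false (neither 0.1 nor 0.2 is exact in base 2)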
Sep 16, 2021 at 0:28 answer plugwash timeline score: 7
Jan 23, 2021 at 7:17 comment added Hampus I wrote a blog post that explores this: Floating Point Basics. It's shorter than the otherwise great "What Every Computer Scientist Should Know About Floating-Point Arithmetic" already mentioned.
Aug 20, 2020 at 15:38 answer RollerSimmer timeline score: 4
Aug 3, 2020 at 15:03 answer Quantum Sushi timeline score: 7
Jan 7, 2020 at 19:14 comment added ikegami Simple explanation: 1/10 is periodic in binary (0.0 0011 0011 0011...) just like 1/3 is periodic in decimal (0.333...), so 1/10 can't be represented exactly by a binary floating-point number.
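(JavaScript can show this repeating expansion directly: Number.prototype.toString accepts a radix, so converting 0.1 to base 2 exposes where the infinite 0011 pattern gets cut off and rounded.)

    console.log((0.1).toString(2));
    // 0.0001100110011001100110011001100110011001100110011001101
    // The 0011 block repeats until the 53-bit significand runs out,
    // and the final digit is rounded up.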
Dec 25, 2019 at 19:42 comment added Jesper Juhl What Every Computer Scientist Should Know About Floating-Point Arithmetic
Nov 4, 2019 at 13:05 history bounty ended adiga
Nov 4, 2019 at 13:05 history notice removed adiga
Nov 3, 2019 at 12:19 history bounty started adiga
Nov 3, 2019 at 12:19 history notice added adiga Reward existing answer
Oct 5, 2019 at 21:46 answer costargc timeline score: 2
Sep 27, 2019 at 18:30 comment added Jonathan Leffler A recent (September 2019) article on this: "Numbers limit how accurately digital computers model chaos".
May 8, 2019 at 18:43 comment added RonJohn @BobJarvis don't be disingenuous: COBOL certainly has the ability to add decimal numbers.
May 8, 2019 at 18:37 comment added Bob Jarvis - Слава Україні @RonJohn: in that case, one could say of any language that, as long as you don't use a feature that has special considerations, you don't have to consider them. :-)
May 7, 2019 at 19:45 answer Vlad Agurets timeline score: 4
Apr 29, 2019 at 14:53 history rollback NathanOliver
Rollback to Revision 16
Apr 25, 2019 at 20:35 history edited isherwood CC BY-SA 4.0
added 31 characters in body
Apr 25, 2019 at 19:39 history edited isherwood CC BY-SA 4.0
edited title
Apr 22, 2019 at 1:02 answer chqrlie timeline score: 5
Feb 21, 2019 at 14:53 comment added Arne Vogel @Ender This is not a contradiction. IEEE 754 doubles have a 52-bit mantissa (plus an implicit leading bit), which means every integer with absolute value of at most 2^53 can be represented exactly, and 10^15 is less than 2^53.
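(This is easy to verify in a console: Number.MAX_SAFE_INTEGER is exactly 2^53 - 1, and above 2^53 consecutive integers start to collide.)

    console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991 (2^53 - 1)
    console.log(2 ** 53 === 2 ** 53 + 1);        // true: 2^53 + 1 is not representable
                                                 // and rounds back to 2^53
    console.log(Number.isSafeInteger(10 ** 15)); // true: 10^15 < 2^53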
Dec 20, 2018 at 18:27 answer Daniel McLaury timeline score: 5
Dec 10, 2018 at 19:45 review Close votes
Dec 14, 2018 at 0:00
Sep 30, 2018 at 4:05 review Close votes
Oct 4, 2018 at 0:00
Aug 8, 2018 at 8:47 answer nauer timeline score: 7
Aug 7, 2018 at 9:34 answer Muhammad Musavi timeline score: 44
Jun 20, 2018 at 15:00 comment added kumesana @JerryJeremiah Most computer languages define the semantics of their type system very precisely and try to limit any room for interpretation. Take JavaScript, for instance: its number type is specified as a 64-bit IEEE 754 double. 64 bits, not 80 bits. It doesn't matter to anyone implementing JavaScript whether the microprocessor is capable of handling wider floating-point types; what matters is the specified type, which is 64 bits. The same reasoning can usually be applied to all types in any technology you're using. If you want 80-bit FP, you need a native extension.
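(One way to see those 64 bits from inside JavaScript itself is to reinterpret a double's bytes through typed arrays — a sketch, assuming an engine with BigUint64Array, i.e. any modern one.)

    // Reinterpret the 8 bytes of a double as one 64-bit unsigned integer.
    const bitsOf = (x) => new BigUint64Array(new Float64Array([x]).buffer)[0];

    console.log(bitsOf(0.1).toString(16)); // 3fb999999999999a
    // Sign 0, biased exponent 0x3fb, significand 0x999999999999a:
    // the repeating pattern of 1/10's binary expansion,
    // rounded in the last hex digit.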
Jun 19, 2018 at 22:07 comment added Jerry Jeremiah This isn't an answer, but I don't see any other answers that say this: x86 uses extended precision for floating point internally and then rounds the result to 64 bits for a double. That means the answer should be more precise because there is less intermediate rounding. en.wikipedia.org/wiki/…
May 26, 2018 at 11:59 history edited Rann Lifshitz CC BY-SA 4.0
fine tuning
May 24, 2018 at 10:30 review Suggested edits
May 24, 2018 at 11:50
May 7, 2018 at 4:57 comment added Pardeep Jain Because JavaScript uses the IEEE 754 standard for math, it uses 64-bit floating-point numbers. This causes precision errors when doing floating-point (decimal) calculations; in short, because computers work in base 2 while decimal is base 10.
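(A common consequence: equality checks on computed decimals need a tolerance. A minimal sketch — the helper name approxEqual and the tolerance are illustrative choices, not a library API.)

    // Compare within a small tolerance instead of using === on computed decimals.
    const approxEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps;

    console.log(0.1 + 0.2 === 0.3);           // false
    console.log(approxEqual(0.1 + 0.2, 0.3)); // true

(For values far from 1, a relative tolerance scaled by the operands' magnitudes is usually more appropriate than a fixed epsilon.)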
Dec 22, 2017 at 16:39 answer Piedone timeline score: 4
Dec 19, 2017 at 22:37 answer Torsten Becker timeline score: 7
Nov 6, 2017 at 6:16 history rollback Stargateur
Rollback to Revision 13
Nov 6, 2017 at 6:12 history edited alinsoar
edited tags
Jan 7, 2017 at 16:06 history edited Peter Mortensen CC BY-SA 3.0
Copy edited.
Dec 29, 2016 at 10:29 answer alinsoar timeline score: 10
May 1, 2016 at 21:40 review Close votes
May 5, 2016 at 0:09
Mar 18, 2016 at 0:38 answer user1641172 timeline score: 17
Mar 16, 2016 at 5:27 answer Mark Ransom timeline score: 65
Feb 2, 2016 at 23:49 answer DigitalRoss timeline score: 33
Dec 21, 2015 at 11:15 answer Patricia Shanahan timeline score: 17
Oct 5, 2015 at 15:55 answer Blair Houghton timeline score: 10
Aug 21, 2015 at 14:53 answer Andrea Corbellini timeline score: 19
Feb 23, 2015 at 17:15 answer Wai Ha Lee timeline score: 141
Jan 3, 2015 at 12:12 answer Kostas Chalkias timeline score: 31
Dec 1, 2014 at 10:47 history edited James Donnelly CC BY-SA 3.0
Added JavaScript formatting to make it look prettier.
Nov 20, 2014 at 2:39 answer Chris Jester-Young timeline score: 359
Oct 5, 2014 at 18:39 answer Konstantin Burlachenko timeline score: 32
Oct 2, 2014 at 16:39 history edited Deduplicator
edited tags
Feb 20, 2014 at 1:20 history edited Marcin
edited tags
Feb 19, 2014 at 22:32 history edited John Kugelman CC BY-SA 3.0
Updated the title to be more language agnostic, since these answers are great for other languages, too.
Oct 14, 2013 at 16:45 answer Piyush S528 timeline score: 18
Apr 18, 2013 at 19:32 history edited Colonel Panic CC BY-SA 3.0
deleted 29 characters in body
Apr 18, 2013 at 11:52 answer KernelPanik timeline score: 676
Dec 7, 2012 at 6:24 history edited Emil Vikström CC BY-SA 3.0
edited title
Aug 20, 2012 at 16:39 history protected Daniel A. White
Aug 1, 2012 at 7:02 answer workoverflow timeline score: 18
Dec 26, 2011 at 6:51 answer Justineo timeline score: 35
Aug 10, 2010 at 6:32 history edited Brock Adams
Added a more search-friendly tag.
Aug 1, 2010 at 23:26 comment added Ender @Gary True, although you are guaranteed to have perfect integer precision for integers up to 15 digits, see hunlock.com/blogs/The_Complete_Javascript_Number_Reference
Aug 1, 2010 at 23:00 history edited Daniel Vassallo CC BY-SA 2.5
code formatting
Apr 11, 2010 at 13:01 comment added Gary Willoughby Just for information, ALL numeric types in JavaScript are IEEE 754 doubles.
Apr 9, 2010 at 12:25 answer Daniel Vassallo timeline score: 137
Apr 9, 2010 at 12:20 history edited Josh Lee
edited tags
Feb 25, 2009 at 21:53 history edited Brian R. Bondy
edited tags
Feb 25, 2009 at 21:47 history edited bdukes CC BY-SA 2.5
Spelling, formatting
Feb 25, 2009 at 21:43 answer Joel Coehoorn timeline score: 580
Feb 25, 2009 at 21:42 answer Brett Daniel timeline score: 51
Feb 25, 2009 at 21:42 comment added matt b JavaScript treats decimals as floating-point numbers, which means operations like addition may be subject to rounding error. You might want to take a look at this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
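(For display purposes, the usual remedy is to round only at the formatting step rather than trusting the raw sum.)

    const sum = 0.1 + 0.2;
    console.log(sum);            // 0.30000000000000004
    console.log(sum.toFixed(2)); // "0.30" — rounding deferred to formatting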
Feb 25, 2009 at 21:41 comment added Ben S Floating point variables typically have this behaviour. It's caused by how they are stored in hardware. For more info check out the Wikipedia article on floating point numbers.
Feb 25, 2009 at 21:41 answer Devin Jeanpierre timeline score: 227
Feb 25, 2009 at 21:40 answer Brian R. Bondy timeline score: 2695
Feb 25, 2009 at 21:39 history asked Cato Johnston CC BY-SA 2.5