from Hacker News

0.999...= 1

by yurisagalov on 4/28/20, 6:31 AM with 626 comments

  • by undecisive on 4/28/20, 2:07 PM

    There is no proof that will ever satisfy a person dead-set against this. Ever since I brought this home from school as a child, my whole family ribbed me mercilessly for it.

    If you tell a person that 3/6 = 1/2, they'll believe you - because they have been taught from an early age that fractions can have multiple "representations" for the same underlying amount.

    People mistakenly believe that decimal numbers don't have multiple representations - which, in a way, is correct. The bar or dot or "..." are there to plug a gap, allowing more values to be represented accurately than plain-old decimal numbers allow for. It has the side effect of introducing multiple representations - and even with this addition, it still doesn't cover everything - Pi, for example, can't be represented accurately.

    But it also exposes a limitation in humans: We cannot imagine infinity. Some of us can abstract it away in useful ways, but for the rest of the world everything has an end.

    I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?

  • by knzhou on 4/28/20, 8:10 AM

    Personally I've always thought "proofs" using "arithmetic" are right, but kind of stated backwards.

    The point is that in elementary school arithmetic, you define addition, multiplication, subtraction, division, decimals, and equality, but you never define "...". Until you've defined "...", it's just a meaningless sequence of marks on paper. You can't prove anything about it using arithmetic, or otherwise.

    What the "arithmetic proofs" are really showing is that if we want "..." to have certain extremely reasonable properties, then we must choose to define it in such a way that 0.999... = 1. Other definitions would be possible (for example, a stupid definition would be 0.999... = 42), just not useful.

    What probably causes the flame wars over "..." is that most people never see how "..." is defined (which properly would require constructing the reals). They only see these indirect arguments about how "..." should be defined, which look unsatisfying. Or they grow so accustomed to writing down "..." in school that they think they already know how it's defined, when it never has been!

  • by dwheeler on 4/28/20, 2:35 PM

    A formally rigorous proof of this (in Metamath) is here:

    http://us.metamath.org/mpeuni/0.999....html

    Unlike typical math proofs, which hint at the underlying steps, every step in this proof applies exactly one axiom or previously proven theorem, and you can click on a step to see it. The same is true for all the other theorems. In the end it depends only on predicate logic and ZFC set theory. All the proofs have been verified by 5 different verifiers, written by 5 different people in 5 different programming languages.

    You can't make people believe, but you can provide very strong evidence.

  • by jl2718 on 4/28/20, 5:24 PM

    The proof relies on the assertion that the supremum of an increasing sequence is equal to the limit. This is mathematical dogma, and should be introduced as such. Once that is accepted, it becomes obvious.

    This is illustrative of what I see as a fundamental problem in mathematics education: nobody ever teaches the rules. In this case, the rules of simple arithmetic hit a dead end for mathematicians, so they invented a new rule that allowed them to go further without breaking any old rules. This is generally acceptable in proofs, although it can have significant implications, such as two mutually exclusive but otherwise acceptable rules causing a divergence in fields of study.

    When I was taught this, it was like, “Look how smart I am for applying this obtusely-stated limit rule that you were never told about.” This is how you keep people out of math. The point of teaching it is to make it easy, not hard.

  • by ginko on 4/28/20, 7:33 AM

    I remember being doubtful when I was presented with this in middle school, but seeing it as fractions makes it obvious:

          1/3 =     0.333..
      3 * 1/3 = 3 * 0.333..
          3/3 =     0.999..
            1 =     0.999..
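
    As a quick sanity check of the fraction version (a sketch added here, not part of the original comment), Python's fractions module does the same computation exactly, with no truncated 0.333.. involved:

```python
from fractions import Fraction

one_third = Fraction(1, 3)  # exact 1/3, no decimal truncation
product = one_third * 3     # Fraction(3, 3), normalized to 1
assert product == 1
print(product)              # prints: 1
```
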
  • by ping_pong on 4/28/20, 2:43 PM

    My 5 year old stumped me with this, and I had to look it up. He asked me why 1/3 + 1/3 + 1/3 = 1, since it's equal to 0.333... + 0.333... + 0.333... which is 0.999... How can that possibly equal 1.000...? And is 0.66... equal to 0.67000...?

    I didn't have a good enough answer for him, so I had to look it up and found this page. I tried to explain it to him but since I'm a terrible teacher and he's only 5, it was hard for me to convince him. Luckily he has many years before it matters!

  • by klodolph on 4/28/20, 7:22 AM

    An interesting consequence of this shows up in proofs.

    You’ll see various proofs involving real numbers that must account for the fact that 0.999…=1.0. There are, of course, many different ways to construct real numbers, and often it’s very convenient to construct them as infinite sequences of digits after the decimal. For example, this construction makes the diagonalization argument easier. However, you must take care in your diagonalization argument not to construct a different decimal representation of a number already in your list!
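
    The care described above can be made concrete. Here is a sketch of the standard trick (my illustration, not the commenter's construction): pick each diagonal digit from {4, 5} only, so the constructed sequence never ends in all 0s or all 9s and thus can't be an alternate decimal spelling of a number already in the list:

```python
def diagonal(listing, n):
    """Return n digits of a sequence that differs from the i-th listed
    sequence at digit i, using only the digits 4 and 5 so the result
    can never be a dual (0.999...-style) representation."""
    return [5 if listing[i][i] != 5 else 4 for i in range(n)]

# four sequences, four digits each (enough to illustrate the diagonal)
listing = [[1, 9, 9, 9], [2, 0, 0, 0], [5, 5, 5, 5], [4, 4, 4, 4]]
print(diagonal(listing, 4))  # [5, 5, 4, 5]
```
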

  • by bytedude on 4/28/20, 7:19 AM

    Flame wars over this used to be common on the internet. People intuitively have the notion that the left side approaches 1, but never actually equals it. They see it as a process instead of a fixed value. Maybe the notation is to blame.
  • by orthoxerox on 4/28/20, 7:20 AM

    I remember WarCraft 3 official forums being torn apart by this, with probably thousands of comments in the thread. Blizzard even had to post their official stance on the issue, but that didn't calm those who insisted 0.999... was 1 minus epsilon and not exactly 1.
  • by steerablesafe on 4/28/20, 7:41 AM

    Maybe the major source of confusion is that our decimal representation for whole numbers is supposed to be unique. Then when we extend it to rationals and reals this property fails at rationals in the form of a/10^n.

    Arguably the sign symbol ruins it for whole numbers as well, as +0 and -0 could be equally valid representations of the number 0. We just conventionally don't allow -0 as a representation. There are other number representations that don't have this problem.

  • by heinrichhartman on 4/28/20, 4:24 PM

    0.999... = 1 is a consequence of the way we define rational and real numbers and limits. There are alternative definitions of numbers where this equality does not hold: Non-Standard Analysis https://en.wikipedia.org/wiki/Nonstandard_analysis being the most famous one.

    But for the sake of argument, let's just define numbers as sequences of digits with a mixed in period somewhere:

        MyNumber := {
          a = (a_1, a_2, ...) -- list of digits a_i = 0 .. 9; a_1 != 0.
          e -- exponent (integer)
          s -- sign (+/- 1)
        }
    
    Each such sequence corresponds to the (classical) real number: s * \sum_i a_i * 10^{e - i}.

    We can go on and define addition, subtraction, multiplication and division in the familiar way.

    Problems arise only when we try to establish desirable properties, e.g.

    (1/3) * 3 = 1

    Does NOT hold here, since 0.9999... is a different sequence from 1.000....

    So yes, you can define these number systems, and you will have 0.999... != 1. But working with them will be pretty awkward, since a lot of familiar arithmetic breaks down.
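
    To make the awkwardness concrete (a sketch of my own, assuming the digit-sequence definition above): the sequences for 0.999... and 1.000... are distinct objects, even though their truncations get arbitrarily close in exact arithmetic - which is exactly the gap the classical reals close by identifying limits.

```python
from fractions import Fraction

def partial_value(digits):
    # value of 0.d1d2d3... truncated after len(digits) places
    return sum(Fraction(d, 10 ** (i + 1)) for i, d in enumerate(digits))

nines = [9] * 20
gap = 1 - partial_value(nines)
assert gap == Fraction(1, 10 ** 20)  # shrinks by 10x with every extra 9
```
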

  • by ltbarcly3 on 4/28/20, 7:30 AM

    This is 'more intuitive' if you think about it this way:

    If any two real numbers are not equal, then you can take their average and get a third number that is halfway between them. Conversely, if the average of two numbers is equal to either of the numbers, then the two numbers are equal. (This isn't a proof, just a way to convince yourself.)

    What's the average of .9999... and 1?
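
    The averaging idea checks out on truncations (a quick sketch of my own, not the commenter's): with finitely many 9s, the average with 1 is a genuinely new number in between, and the gap merely halves; only a gap of exactly zero makes the average coincide with both endpoints.

```python
from fractions import Fraction

for n in (1, 2, 5):
    x = 1 - Fraction(1, 10 ** n)   # 0.9, 0.99, 0.99999
    avg = (x + 1) / 2
    assert x < avg < 1             # a third number strictly in between
    assert 1 - avg == (1 - x) / 2  # the gap halves, it never vanishes
```
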

  • by sleepyams on 4/28/20, 5:54 PM

    There is a nice characterization of decimal expansions in terms of paths on a graph:

    Let C be the countable product of the set with ten elements, i.e. {0, 1, 2, ..., 9}. The space C naturally has the topology of a Cantor set (compact, totally disconnected, etc). Furthermore, for example, in this space the tuples (1, 9, 9, 9, ...) and (2, 0, 0, 0, ...) are distinct elements.

    The space C can also be described in terms of a directed graph, where there is a single root with ten outward directed edges, and each child node then has ten outward directed edges, etc. C can be thought of as the space of infinite paths on this graph.

    A continuous and surjective map from C to the unit interval [0, 1] can be constructed from a measure on these paths. For any suitable measure, this map is finite-to-one, meaning at most finitely many elements of C are mapped to a single element in the interval. For example there is a map which sends (1, 9, 9, ...) and (2, 0, 0,....) to the element "0.2".

    The point is that all decimal expansions of elements of [0, 1] can be described like this, and we can instead think of the unit interval not as being composed of numbers _instrinsically_, but more like some kind of mathematical object that _admits_ decimal expansions. The unit interval itself can be described in other ways mathematically, and is not necessarily tied to being represented as real numbers. Hope this helps someone!

  • by cjfd on 4/28/20, 7:29 AM

    Ultimately this is more the definition of R than that it is a theorem. One can also work with sets of numbers in which the completeness axiom does not hold. E.g., sets of numbers in which one also has infinitesimals.
  • by calibas on 4/28/20, 4:00 PM

    And this is why I prefer hyperreals.

    0.999... = 1 - 1/∞

    We talk about infinity all the time in mathematics, and teachers use the concept to introduce calculus in a way that people can more easily understand, but using infinity directly is almost universally banned in classrooms.

    Nonstandard analysis is a much more intuitive way of understanding calculus: it's the whole "infinite number of infinitely small pieces" concept, except you're allowed to write it down too.

  • by russellbeattie on 4/28/20, 8:05 AM

    I'll just chime in with my completely ignorant theory that 1 - 0.999... = the infinitely smallest number, but is still, in my mind, regardless of any logic, reason, or educated calculations, greater than 0.

    I understand and accept this is wrong. However, somewhere in my brain I still believe it. Sort of like +0 and -0, which are also different in my head.

  • by JJMcJ on 4/28/20, 5:20 PM

    Usually the concept of a limit, which assigns a meaning to 0.999..., isn't studied until calculus.

    There are approaches to mathematics that avoid infinite constructions, and a "strict finitist" would not assign 0.999... a meaning.

    The stunning success of limit based mathematics makes finitism a fringe philosophy.

    Remember, class, for every epsilon there is a delta.

  • by traderjane on 4/28/20, 8:03 AM

    Professor N.J. Wildberger is probably the best-known "ultrafinitist" on YouTube.

    https://www.youtube.com/watch?v=WabHm1QWVCA

    I mention him because I would think he sympathizes with those who have concern over the meaning of this kind of notation.

  • by sv_h1b on 4/29/20, 8:03 PM

    0.999...=1 is true in the mathematical sense, period.

    However, as a representation of the physical world, there is a caveat. As far as we understand, the physical world appears to behave discretely, because at the Planck scale (approx. 10^-35 m) distances seem to behave discretely.

    Although most people don't know about or understand the Planck scale, they do grasp this concept intuitively. What they are really saying is that in the physical world there's some small interval (more precisely, about [1 - 10^-35, 1]) which can't be subdivided further, based on our current knowledge.

    The same thing applies to the Planck time (approx. 5 * 10^-44 s) too.

    So people are arguing two different things - the pure maths concept, or the real world interpretation.

  • by sebringj on 4/28/20, 3:29 PM

    The thing that helps me "understand" it is that the universe has finite smallest sizes - the Planck length, for example, is theoretically the smallest meaningful distance. Now imagine the difference between .9 repeating and 1: it would be smaller than the Planck length, since infinitely small differences can do that. Essentially, there is then no way to tell the difference between .9 repeating and 1 from a practical or theoretical perspective of measurement. So not being able to imagine infinity still lets us at least imagine something smaller than the smallest measurable thing.
  • by gigatexal on 4/28/20, 7:41 PM

    I hate to say it but I still don't believe this. It just goes against all the intuition I have, but people much smarter than I am have proven it, so I take it on faith for doing things like calculus. My lizard brain just won't let me accept something that looks like less than 1 being 1 - the same way that the limit of 1/x as x goes to infinity is zero, but it doesn't seem like it should be. The number gets infinitesimally small but it's still some non-zero number. I dunno, this is probably proving my ignorance; it's just what it is.
  • by edanm on 4/28/20, 7:58 PM

    I think if you're trying to "prove" this using axioms, you've already lost.

    The problem isn't that you can't come up with axioms to convince people you have a proof - the problem is with people not understanding that 0.99999.... is not a number - it's one representation of an abstract entity called a number.

    The problem is, the maths required to actually define the concept of a number is fairly complicated, so it's hard to explain to someone why all of these axioms make sense in the first place.

  • by fluganator on 4/28/20, 8:53 PM

    Can someone help me out here with least upper bounds?

    Generally, the proofs of .9... = 1 rely on the fact that no number exists between .9... and 1, and therefore .9... is equal to 1.

    .9... is the least upper bound of the set. My question is: if .9... were removed from the set, what would be the new least upper bound? Another way of asking the question: if we define it in this context, doesn't any set bounded by a real number have a least upper bound, and aren't all real numbers then equal to each other?

    Thanks!
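
    One way to see where the question goes wrong (a sketch I'm adding, not from the comment): for the set S = {1 - 10^-n : n >= 1}, the least upper bound is 1, and 1 is not itself a member of S. So ".9... removed from the set" never comes up - .9... names the supremum of the truncations, not one of them, and any proposed bound strictly below 1 is overtaken by a deeper truncation.

```python
from fractions import Fraction

# finitely many truncations of 0.999..., as exact rationals
S = [1 - Fraction(1, 10 ** n) for n in range(1, 30)]

assert all(x < 1 for x in S)  # 1 bounds the set and is never attained
b = 1 - Fraction(1, 10 ** 5)  # a candidate bound strictly below 1...
assert any(x > b for x in S)  # ...fails: some truncation exceeds it
```
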

  • by jdashg on 4/28/20, 10:58 PM

    I think this is a notation and definition problem. To me, it behaves differently in `Y = 1 / X`, which distinguishes quite strongly between `X = 1 - 0.999...` and `X = 0.999... - 1`! If 0.999... ought to be exactly equivalent to 1.0, there ought to be no difference between `1 / (1 - 0.999...)` and `1 / (0.999... - 1)`.

    To me, 0.999... indicates a directional limit, which can't necessarily be evaluated and substituted separately from its context.

  • by rs23296008n1 on 4/28/20, 7:29 AM

    I'm actually curious what impact it would have on various proofs if 0.999... wasn't accepted as 1.

    What gets broken? What consequences do we hit?

  • by j-pb on 4/28/20, 7:48 AM

    I'm not a mathematician, but I would guess that the surreal numbers developed by John Conway contain values that start with an infinite sequence of 9s after the decimal point but "end" with something that makes them != 1.

    https://en.m.wikipedia.org/wiki/Surreal_number

  • by jefftk on 4/28/20, 2:33 PM

    What if you have 0.9̅4? Can we say 0.9̅5 > 0.9̅4 > 0.9̅3? More on what happens if you allow this: https://mathwithbaddrawings.com/2013/08/13/the-kaufman-decim...
  • by shrimpx on 4/28/20, 4:36 PM

    A dumb consequence of the axiom of choice? The reals are like a membrane with no atomic pieces. You can move in either direction infinitely and you can zoom in infinitely without reaching any “Planck unit” so to speak. So what does it even mean to pick out a “real number”? To me anything built on this concept is nonsense.
  • by vfinn on 4/28/20, 8:19 AM

    Sorry for my naivety, but why couldn't one prove by induction that adding 9s never closes the gap - or, let's say, that by definition the operation is such that it never closes the gap? If you can always halve the pie, then you can continue eating forever. To me it would be much easier to accept that (1/3)*3 is not 1.
  • by zests on 4/28/20, 3:39 PM

    Is 1 a prime number? No, because we define it not to be. Why do we define it not to be a prime number? That's the real question.

    Is 0.999... = 1? Yes, because we define decimal numbers to behave that way. Why do we define them to behave that way? That's the real question.

  • by fourseventy on 4/28/20, 5:27 PM

    The best way to think about it is that 1 - 0.999... = 0.000...

    The result of 1 minus 0.999... is 0.000... with zeroes that go on to infinity. And I think it's easier to reason that 0.000... with zeroes repeating forever is in fact equal to zero.
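
    Played out on truncations (an illustration I'm adding, using Python's decimal module): subtracting n nines from 1 leaves a 1 pushed n places out, and the leading zeroes only pile up as n grows. In the limit the trailing 1 never arrives.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough precision for the examples below
for n in (3, 10, 30):
    nines = Decimal("0." + "9" * n)
    print(1 - nines)  # 0.001, then 1E-10, then 1E-30
```
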

  • by j7ake on 4/28/20, 8:01 AM

    This can be solved by using base 12 rather than base 10 to do the calculation...
  • by 2OEH8eoCRo0 on 4/28/20, 3:22 PM

    Yes, it does, but it seems like a less natural way of writing it. You could represent the number 10 as 10/1 (ten over one), but why would you? Why would you represent 1 as .9 repeating?
  • by ttonkytonk on 4/28/20, 2:43 PM

    Basically, an infinite series of 9's just means they're all maxed out, so .999... = 1 makes sense to me (kind of, anyway).
  • by juanmacuevas on 4/28/20, 6:44 PM

        # on Python (3.7.4)
        1 == 0.99999999999999994448884876874217297882
        # but
        1 != 0.99999999999999994448884876874217297881
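
    What this snippet actually shows is float rounding, not 0.999... = 1 (a hedged aside I'm adding): a 64-bit float cannot store either decimal literal exactly, and any literal at or above the midpoint between 1.0 and the next double below it rounds to exactly 1.0.

```python
import math

# largest double strictly below 1.0 (math.nextafter needs Python 3.9+)
below_one = math.nextafter(1.0, 0.0)
assert below_one == 1 - 2 ** -53

# the two literals from the comment land on opposite sides of the
# rounding midpoint 1 - 2**-54, hence the == / != split
assert float("0.99999999999999994448884876874217297882") == 1.0
assert float("0.99999999999999994448884876874217297881") == below_one
```
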
  • by berkeleynerd on 4/28/20, 8:08 AM

    Perhaps the natural discomfort many feel when confronted by this challenging formulation instead indicates that a limitation of the real number system has been perceived? I would encourage those who have this reaction to study hyperreal and other alternative systems, as mentioned in the article. If this clicks for them, they may help lead us in a new direction mathematically and advance the state of the art.
  • by alpple on 4/28/20, 4:34 PM

    Why do we accept .999... as a valid notation? Why not only allow 1 to denote this concept?
  • by flerchin on 4/28/20, 5:05 PM

    I'd have figured "approaches 1 from the left" would be more accurate.
  • by heavenlyblue on 4/28/20, 4:17 PM

    So is then ‘0.(0...)1 = 0’?
  • by novacole on 4/28/20, 5:37 PM

    So 99.999..% of the speed of light is just the speed of light?
  • by clevbrown on 5/1/20, 10:20 AM

    Given infinity that doesn’t make sense.
  • by grensley on 4/28/20, 5:41 PM

    What is the largest number smaller than 1?
  • by amelius on 4/28/20, 2:20 PM

    How about the expression:

        0.9999... < 1
    
    And consider that if a < b then a != b.
  • by seyz on 4/28/20, 2:35 PM

    "In other words, "0.999..." and "1" represent the same number." - Ok, I'm done with this world.
  • by ristos on 4/28/20, 2:04 PM

    I don't think that 0.999... = 1 is actually provable. I think this and all of calculus is actually axiomatic, which has the following axiom:

    Given ε = 1/∞ then: ε = 0

    Am I wrong in thinking this way? It seems as though there's no way to actually truly prove that an infinite series converging towards zero actually hits zero (from a constructivist pov)

  • by smlckz on 4/28/20, 2:06 PM

    Reminds me of the paradoxes of Zeno [1], especially the paradox of Achilles and the tortoise.

    At least one can simply prove that 0.999... = 1 without much hard work. Maybe less controversial than the following:

        1 + 2 + 3 + ... [somehow] = -1/12 {{Riemann's zeta(-1)?}}
        1 + 2 + 4 + 8 + 16 + ... [somehow] = -1
    
    The weird prime product (the product of 1/(1-p^-2) over primes p) and the sum of x^-2 from x=1 to [sigh] being equal to (pi^2)/6 are some examples of the infinite beauty of mathematics that I remember.

    [1]: https://en.wikipedia.org/wiki/Zeno's_paradoxes

  • by upofadown on 4/28/20, 1:57 PM

    > ...infinitely many 9s...

    How about we prove that an infinite number of 9s is impossible?

    Assume that we have a finite number of 9s. Add a 9. The result is not infinite. Add another 9. The result is still not infinite. We can repeat this process for an infinite amount of time and still not have an infinite number of nines.

    Any process that can not be completed in a finite amount of time can not complete and can not have a valid result based on that completion. Any process that can not be completed in an infinite amount of time is also bogus, but is in a sense even more bogus.

    Added: Note that this is different from the case where we are asked to contemplate infinity with respect to continuous functions. Defining the number of 9s as a discrete (integer) value opens things up to a discrete argument. These pointless navel-gazing exercises always end up as a war over what everyone thinks things are defined as.