by verbify on 12/12/24, 2:30 PM with 210 comments
by boothby on 12/12/24, 4:30 PM
Well, I never was much of a number theorist. I never did come to understand the basic definitions behind the BSD conjecture. Number theory is so old, so deep, that writing a PhD on the topic is the step one takes to become a novice. Where I say that I didn't understand the definitions, I certainly knew them and understood the notation. But there's a depth of intuition that I never arrived at. So the uproar of experts, angry that I had the audacity to hope for a counterexample, left me more curious than shaken: what do they see, that they cannot yet put words to?
I am delighted by these advances in formalism. It makes the field feel infinitely more approachable, as I was a programmer long before I called myself a mathematician, and programming is still my "native tongue." To the engineers despairing at this story, take it from me: this article shows that our anxiety at the perceived lack of formalism is justified, but we must remember that anxiety is a feeling -- and the proper response to that feeling is curiosity, not avoidance.
by moomin on 12/12/24, 5:07 PM
All of which is a long way of saying the line "the old-fashioned 1990s proof" makes me feel _really_ old.
by munchler on 12/12/24, 3:08 PM
Tongue-in-cheek irritation aside, this is actually awesome. I think Lean (and other theorem provers) are going to end up being important tools in math going forward.
by seanwilson on 12/12/24, 3:26 PM
It makes me think about how UI/UX/web designers make (informal, imprecise) mockups and prototypes of their design ideas and interaction flows, then hand these off to a developer to do the coding (formalise it, explain it precisely to the machine). Along the way, the developer will inevitably find problems/holes in the designs (like an interaction scenario or code path not considered in the design, which could uncover a major design flaw), which the developer and/or the designer will have to patch somehow.
As in, design and development are different roles and require different mindsets, and most designers have a very strong resistance against working and thinking like a developer.
by lincolnq on 12/12/24, 3:49 PM
Example: https://github.com/ImperialCollegeLondon/FLT/blob/main/FLT/M...
Also check out the blueprint, which describes the overall structure of the code:
https://imperialcollegelondon.github.io/FLT/blueprint/
I'm very much an outside observer, but it is super interesting to see what Lean code looks like and how people contribute to it. The great thing is that there's no need for unit tests (or, in some sense, the final proof statement is the unit test) :P
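For anyone who hasn't seen Lean before, here's a minimal toy example of what a statement and its machine-checked proof look like (my own illustration in Lean 4 with mathlib, not taken from the FLT repository):

    import Mathlib

    -- The statement after the colon is the specification; the tactic script after
    -- `by` is what Lean's kernel checks. A finished proof plays the role a unit
    -- test does in ordinary software: if the proof compiles, the claim is verified.
    theorem add_comm_example (a b : ℕ) : a + b = b + a := by
      exact Nat.add_comm a b

Contributions to the FLT repository are of course far more involved, but they have the same shape: a precise statement followed by a proof the machine verifies.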
by vouaobrasil on 12/12/24, 4:18 PM
Past researcher in pure math here. The big problem is that mathematicians are notorious for not providing self-contained proofs of anything, because there is no incentive to do so, and authors sometimes even seem proud to "skip the details". What actually ends up happening is that if you want a rigorous proof that can, in principle, be followed through every logical step, you need an expert to fill in a bunch of gaps that simply can't easily be found in the literature. It's only when such a person writes a book explaining everything that it might be possible, and sometimes not even then.
The truth is, a lot of modern math is on shaky ground when it comes to stuff written down.
by modeless on 12/12/24, 4:58 PM
I've always wondered if this intuition was really true. Would it really be so impossible for a whole branch of mathematics to be developed based on a flawed proof and turn out to be simply false?
by Tainnor on 12/13/24, 1:01 PM
Many of my troubles probably come from the fact that I only have a BSc in maths and that I'm not very familiar with Lean/mathlib and don't have anyone guiding me (although I did ask some questions in the very helpful Zulip community). Many results in mathlib are stated in rather abstract ways and it can be hard to figure out how they relate to certain standard undergraduate theorems - or whether those are in mathlib at all. This certainly makes sense for the research maths community, but it was definitely a stumbling block for me (and probably would be one if Lean were used more in teaching - but this is something that could be sorted out given more time).
In terms of proof automation, I believe we're not there yet. There are too many things that are absolutely harder to prove than they should be (although I'm sure there are also a lot of tricks that I'm just not aware of). My biggest gripe concerns casts: in "regular" mathematics, the real numbers are a subset of the complex numbers, so things that are true for all complex numbers are automatically true for all reals[0]; but in Lean they're different types with an injective map / cast operation, and there is a lot of back-and-forth conversion to be done, which muddies the essence of the proof, especially when you have "stacks" of casts, e.g. a natural number cast to a real cast to a complex number, etc.
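To make the cast complaint concrete, here is a minimal sketch (my own example in Lean 4 with mathlib, not the commenter's code) of a statement that is trivial on paper but arrives wrapped in coercions:

    import Mathlib

    -- `n` is a natural number cast to ℝ and then to ℂ (a "stack" of casts), while
    -- the right-hand side does the addition in ℝ first and casts once. On paper
    -- the two sides are literally the same thing; in Lean, a tactic is needed to
    -- normalise the coercions before the two sides match.
    example (n : ℕ) (x : ℝ) : ((n : ℝ) : ℂ) + (x : ℂ) = ((n + x : ℝ) : ℂ) := by
      norm_cast

Tactics like norm_cast and push_cast exist precisely to manage these coercion stacks, but they still have to be invoked at every point where the casts show up.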
Of course, this is somewhat specific to the subject; I imagine that in other areas, e.g. algebra, dealing with explicit maps is much more natural.
[0] This is technically only true for sentences without existential quantifiers.
by graycat on 12/12/24, 4:30 PM
Okay, for some decades, I've read, written, taught, applied, and published, in total, quite a lot of math. Got a Ph.D. in applied math.
Yes, there are problems in writing math, that is, some math is poorly written.
But, some math is quite nicely written. (1) Of course, at least define every symbol before using it. (2) It helps to motivate some math before presenting it. (3) Sometimes intuitive statements can help.
For more, carefully reading some well-written math can help learning how to write math well:
Paul R. Halmos, Finite-Dimensional Vector Spaces, Second Edition, D. Van Nostrand Company, Inc., Princeton, New Jersey, 1958.
R. Creighton Buck, Advanced Calculus, McGraw-Hill, New York, 1956.
Tom M. Apostol, Mathematical Analysis: Second Edition, ISBN 0-201-00288-4, Addison-Wesley, Reading, Massachusetts, 1974.
H. L. Royden, Real Analysis: Second Edition, Macmillan, New York, 1971.
Walter Rudin, Real and Complex Analysis, ISBN 07-054232-5, McGraw-Hill, New York, 1966.
Leo Breiman, Probability, ISBN 0-89871-296-3, SIAM, Philadelphia, 1992.
Jacques Neveu, Mathematical Foundations of the Calculus of Probability, Holden-Day, San Francisco, 1965.
by tunesmith on 12/12/24, 7:02 PM
By the way, I found an excellent word for when a proof is found to be faulty, one esoteric enough that it has less risk of being misinterpreted to mean the conclusion is proven false: 'vitiated'. The conclusion might still be true, it just needs a new or repaired proof; the initial proof is 'vitiated'. I like how the word sounds, too.
by ur-whale on 12/12/24, 4:03 PM
One of the many reasons I love math is the feeling I've always had that with it, we're building skyscraper-sized mental structures that are built on provably indestructible foundations.
With the advent of "modern" proofs, which involve large teams of mathematicians who produce proofs that are multiple hundreds of pages long and only understandable by a very, very tiny sliver of humanity (with the extreme case of Shinichi Mochizuki [1] where N=1), I honestly felt that math had lost its way.
Examples: the four color theorem, Fermat's Last Theorem, the classification of finite simple groups ... all of them with gigantic proofs, out of reach of most casual observers.
And lo and behold, someone found shaky stuff in a long proof, what a surprise.
But it looks like some mathematicians feel the same way I do and have decided to do something about it by relying on computers. Way to go, guys!
by bee_rider on 12/12/24, 3:40 PM
Oh well, hopefully when they get the machines to solve math, they’ll still want them to run a couple percents faster every year.
by ykonstant on 12/12/24, 4:33 PM
Much worse, this nonchalant attitude is being taught to PhD students and postdocs both explicitly and implicitly: if you are worried too much, maybe you are not smart enough to understand the arguments/your mathematical sense is not good enough to perceive the essence of the work. If you explain too much, your readers will think you think they are dumb; erase this page from your paper (actual referee feedback).
Also, like Loeffler in the comments, I don't trust the "people have been using crystalline cohomology forever without trouble" argument. The basics are correct, yes, as far as I can tell (because I verified them myself, bearing in mind of course that I am very fallible).
But precisely because of that, large swathes of the theory will be correct. Errors will be rare and circumstantial, and that is part of the problem! It makes them very easy to creep into a line of work and go unnoticed for a long time, especially if the expert community of the area is tiny---as is the case in most sub-areas of research math.
by baruz on 12/12/24, 5:36 PM
by sam_goody on 12/12/24, 10:26 PM
I don't remember the details of the story (I read Surely You're Joking, Mr. Feynman! years ago), but I remember being amazed by how much time that policy must have cost him.
But now I wonder that he didn't hit dozens or hundreds of errors over the years.
by Retr0id on 12/12/24, 4:11 PM
by MrMcCall on 12/12/24, 8:41 PM
So I have wondered about Pierre de Fermat's (PdF's) possible thinking.
Now, the degenerate case of n=2 is just the Pythagorean Theorem
c^2 = a^2 + b^2
and we now know from Andrew Wiles's proof that the cubic and beyond fail.
Now, the PT is successful because the square b^2 can be "squished" over the corner of a^2, leaving a perfect square c^2. [Let's let the a^n part be the bigger of the two.]
5^2 = 4^2 + 3^2
25 = 16 + 9
25 = 16 + (4 + 4 + 1)
Each of the 4's goes on the sides, and the 1 goes on the corner, leaving a pure 5x5 square left over.
Now, for cubes, we now know that a cube cannot be "squished" onto another cube's corner in such a way that makes a bigger cube.
I'm not up for breaking out my (diminishing) algebra right now (as it's a brain off day), but that b^3 cube would need to break down into three flat sides, three linear edges, and the corner.
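For what it's worth, the algebra behind that picture (my own filling-in, not the commenter's working) is just the binomial expansion. Writing the big cube's side as c = a + k:

    c^3 = (a + k)^3 = a^3 + 3*a^2*k + 3*a*k^2 + k^3

So the added material b^3 = c^3 - a^3 would have to split into three flat slabs (the 3*a^2*k term), three edge bars (the 3*a*k^2 term), and the little corner cube (k^3): exactly the "three flat sides, three linear edges, and the corner" described above. Fermat's Last Theorem for n=3 says this can never come out to a perfect cube b^3 with everything a whole number.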
This fits my physical intuition of the problem and seems to me to be a possible way that PdF might have visualized the cubic form. Now, I have zero mathematical intuition about such things (or the necessary math skills either), but the physical nature of the problem, plus the fact that we now know that the cubic and beyond don't work, leads me to wonder if this is an interesting approach. It also makes me curious to know why it isn't, if that is indeed the case (which I assume is probable).
As to n=4 and beyond, I would just hand-wave them away and say something like, "Well, of course they are more fractally impossible", by which I mean that a real mathematician would have to say exactly why the n=3 case failing means that the n>=4 cases would also not work. (My guess here is that there just becomes less and less possibility of matching up higher-dimensional versions of a two-term addition.)
Anyway, just a thought I had some years ago. I even built a lego model of the bits of cube spreading out along the corner of the a^3 cube.
I would enjoy learning why I'm so utterly wrong about this, but I found it a fun mental exercise some years ago that's been swimming around in my brain.
Thanks in advance if there are any takers around here. :-)
[And, my favorite Horizon episode is 'Freak Wave'.]
by jebarker on 12/12/24, 10:55 PM
by charlieyu1 on 12/12/24, 11:00 PM
by naasking on 12/13/24, 2:23 PM
by le-mark on 12/14/24, 8:22 PM
by ThomasBb on 12/14/24, 8:15 PM
by pennomi on 12/12/24, 3:08 PM
by levn11 on 12/13/24, 2:15 AM
by euroderf on 12/14/24, 1:22 PM