by lemper on 10/22/24, 8:21 AM with 61 comments
by Tainnor on 10/22/24, 4:06 PM
Either I misunderstand the notation or there seems to be something missing there - the right hand side of that implication arrow is not a formula.
I would assume that what is meant is α ⊂ β → α ∪ (β − α) = β
by youoy on 10/22/24, 1:45 PM
> That is how Whitehead and Russell did it in 1910. How would we do it today? A relation between S and T is defined as a subset of S × T and is therefore a set.
> A huge amount of other machinery goes away in 2006, because of the unification of relations and sets.
Relations are a very intuitive thing that, I think, most people would agree is not the invention of one person. But the language for describing and manipulating them mathematically is an invention, and it can have a dramatic effect on how they are communicated.
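A minimal sketch of that modern definition in Python (the sets S and T here are made-up examples, not from the article):

  # A relation between S and T is just a subset of S x T, i.e. a set of pairs.
  S = {1, 2, 3}
  T = {1, 2}

  # The "less than" relation from S to T, as a plain set:
  R = {(s, t) for s in S for t in T if s < t}

  def related(s, t):
      # Membership in the set *is* the relation.
      return (s, t) in R

  print(related(1, 2))  # True
  print(related(2, 1))  # False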
by Animats on 10/22/24, 8:14 PM
In the Boyer-Moore prover, zero is

  (ZERO)

and numbers are

  (ADD1 (ZERO))
  (ADD1 (ADD1 (ZERO)))

etc. The prover really worked that way internally, as I found out when I input a theorem with numbers such as 65536 in it. I was working on proving some things about 16-bit machine arithmetic, and those big numbers pushed SRI International's DECSystem 2060 into thrashing.

Here's the prover building up basic number theory, one theorem at a time.[1] This took about 45 minutes in 1981 and takes under a second now.
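For illustration only (this is not the prover's actual code), a unary encoding like that in Python makes the problem with a literal like 65536 obvious:

  # Peano-style unary numerals: zero is a leaf, each ADD1 wraps one more layer.
  def zero():
      return ("ZERO",)

  def add1(n):
      return ("ADD1", n)

  def to_peano(k):
      n = zero()
      for _ in range(k):
          n = add1(n)
      return n

  # 65536 becomes a structure nested 65536 levels deep; every
  # arithmetic step has to walk some prefix of it.
  n = to_peano(65536)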
Constructive set theory without the usual set axioms is messy, though. The problem is equality. Informally, two sets are equal if they contain the same elements. But in a strict constructive representation, the representations have to be equal, and representations have order. So sets have to be stored sorted, which means much fiddly detail around maintaining a valid representation.
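A sketch of that sorted-representation approach in Python (assuming elements that can be ordered):

  # Canonical form: sorted tuple with no duplicates. With a canonical
  # representation, structural equality coincides with set equality.
  def make_set(elems):
      return tuple(sorted(set(elems)))

  def union(a, b):
      return make_set(a + b)

  # {2, 1} and {1, 2, 2} normalize to the same representation:
  assert make_set([2, 1]) == make_set([1, 2, 2])
  assert union(make_set([1]), make_set([2])) == make_set([1, 2])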
What we needed, but didn't have back then, was a concept of "objects". That is, two objects can be considered equal if they cannot be distinguished via their exported functions. I was groping around in that area back then, and had an ill-conceived idea of "forgetting", where, after you created an object and proved theorems about it, you "forgot" its private functions. Boyer and Moore didn't like that idea, and I didn't pursue it further.
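A toy version of that notion of equality, in Python (both class names are made up for illustration):

  # Two "set objects" with different internals count as equal if no
  # exported operation can tell them apart.
  class SortedListSet:
      def __init__(self, xs):
          self.rep = sorted(set(xs))

      def member(self, x):
          return x in self.rep

  class FrozenSet:
      def __init__(self, xs):
          self.rep = frozenset(xs)

      def member(self, x):
          return x in self.rep

  a, b = SortedListSet([1, 2]), FrozenSet([2, 1])
  # Different private representations, indistinguishable observations:
  assert all(a.member(x) == b.member(x) for x in (0, 1, 2, 3))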
Fun times.
[1] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...
by cubefox on 10/22/24, 2:58 PM
Unrelated, but why doesn't Hacker News have support for LaTeX? And Markdown, for that matter?
by ngcc_hk on 10/23/24, 10:42 AM
The issue is that 1+1 carries no guarantee it will be two. Look carefully and you can see that the first 1 is exactly the same as the second 1!!!!
Hence: take the set of all Russells who do that kind of maths and add it to another Russell who also does that maths. You still end up with one Russell.
That is why they go to all the trouble of saying there is no intersection, that the first oneness set does not overlap with the second oneness set, etc. etc.
Qed
by adrian_b on 10/22/24, 4:35 PM
While the article is nice, I believe that the tradition entrenched in mathematics of taking sets as a primitive concept and then defining ordered pairs using sets is wrong. In my opinion, the right presentation of mathematics must start with ordered pairs as the primitive concept and then derive sequences, sets and multisets from ordered pairs.
The reason I believe this is that there are many equivalent ways of organizing mathematics. They differ in which concepts are taken as primitive and which propositions are taken as axioms, with the remaining concepts defined from the primitives and the remaining propositions demonstrated as theorems. Most of these possible organizations, however, cannot correspond to an implementation in a physical device, like a computer.
Among the various concepts that can be chosen as primitive in a mathematical theory, some are in fact simpler and some are more complex. In a physical realization, the simple ones have a direct hardware correspondent and the complex ones can easily be built from the simple, while the complex ones cannot be implemented directly, only as structures assembled from simpler components. So the hardware of a physical device imposes far more severe constraints on the choice of primitives than a mathematical theory does, since the theory only describes the abstract properties of operations like set union, without worrying how such an operation can actually be executed in real life.
The ordered pair has a direct hardware implementation and it corresponds with the CONS cell of LISP. In a mathematical theory where the ordered pair is taken as primitive and sets are among the things defined using ordered pairs, many demonstrations correspond to how various LISP functions would be implemented. Unlike ordered pairs, sets do not have any direct hardware implementation. In any physical device, including in the human mind, sets are implemented as equivalence classes of sequences, while sequences are implemented based on ordered pairs.
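A sketch of that pairs-first construction in Python (function names chosen to echo LISP; this is an illustration, not a formal development):

  # The ordered pair is the primitive -- LISP's CONS cell.
  def cons(a, b):
      return (a, b)

  def car(p):
      return p[0]

  def cdr(p):
      return p[1]

  NIL = None  # the empty sequence

  # A sequence is nested pairs: cons(x1, cons(x2, ... NIL)).
  def seq(*xs):
      out = NIL
      for x in reversed(xs):
          out = cons(x, out)
      return out

  def to_list(s):
      out = []
      while s is not NIL:
          out.append(car(s))
          s = cdr(s)
      return out

  # A finite set is the equivalence class of sequences with the same
  # elements; we store its canonical representative (sorted, no duplicates).
  def make_set(s):
      return seq(*sorted(set(to_list(s))))

  # The sequences (2 1 2) and (1 2) denote the same set:
  assert make_set(seq(2, 1, 2)) == make_set(seq(1, 2))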
Non-enumerable sets are not defined as equivalence classes of sequences, and they cannot be implemented as such in a physical device; at most they can be handled as something of the kind "I recognize it when I see it", e.g. by a membership predicate.
However, infinite sets need extra axioms in any kind of theory, and a theory of finite sets defined constructively from ordered pairs can be extended to infinite sets with appropriate additional axioms.
by singleshot_ on 10/24/24, 1:20 AM
A strange thing happened to me in mathematics. When I got to the point where these symbols started showing up (ninth grade, more or less) I did not get a thorough explanation of the symbols; they just appeared and I tried to intuit what they meant. As more symbols crept into my math, I tried to ignore them where possible. Eventually this meant that I could not continue learning math, as it became mostly all such symbols.
I got as far as a minor in math. I'm not sure how any of this happened, but I wish I had had a table of these symbols in ninth grade.
by redbell on 10/22/24, 3:28 PM
For instance, I frequently use the example "1+1=10" in binary to illustrate that, while our reasoning may seem fundamentally different, it's simply because we're starting from different premises, using distinct methods, and approaching the same problem from unique angles.
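In Python, for concreteness:

  # The same quantity in two numeral systems: binary "10" is decimal 2.
  assert 1 + 1 == 2
  assert 1 + 1 == 0b10
  assert bin(1 + 1) == "0b10"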
by wildermuthn on 10/22/24, 6:31 PM
Luckily, our imaginary reality of precision is close enough to the true reality of probability that it enables us to build things like computer chips (i.e., all of modern civilization). And yet, the nature of physics requires error correction for those chips. This problem becomes more obvious when working at the quantum scale, where quantum error correction remains basically unsolved.
I’m just reframing the problem of finding a grand unified theory of physics that encompasses a seemingly deterministic macro with a seemingly probabilistic micro. I say seemingly, because it seems that macro-mysteries like dark matter will have a more elegant and predictive solution once we understand how micro-probabilities create macro-effects. I suspect the answer will be that one plus one is usually equal to two, but that under odd circumstances, it is not. That’s the kind of math that will unlock new frontiers for hacking the nature of our reality.