from Hacker News

The Man Who Tried to Redeem the World with Logic

by robgering on 2/5/15, 3:38 PM with 23 comments

  • by javajosh on 2/5/15, 10:19 PM

    Having read _Principia_, did Pitts read Kurt Goedel[1]? I would very much like to know what he thought of it!

    What an incredibly sad story - burning years of work in melancholy, all thanks to the lies of an angry woman. Wiener, though, shares a great deal of blame - a man should not pass judgement on his friends without inquiring into the truth of the matter.

    [1]https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

  • by carapace on 2/6/15, 1:08 AM

    Such a terrible tragedy, both personal and for the world. I highly recommend the book referenced: "Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics".

    R.I.P. Walter Pitts

    (One thing that troubles me slightly: this article mentions a possible cause of the break between Wiener and the others which is presented as speculation in the book, if I remember correctly, but stated here as a bald fact. In any event I wish that Wiener hadn't acted so rashly.)

  • by xmonkee on 2/6/15, 4:49 AM

    This article is fascinating. I had no idea that the Von Neumann machine was an extrapolation of a mental model. Even more fascinating is that the existence of a symbolic computation machine made a purely symbolic epistemology impossible. It's like they abstracted a level higher than they wanted to and then made a cleaner, simpler implementation of it. And now with modern AI we are doubling down on that implementation and trying to build a new kind of intelligence on top of it.
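
    For anyone curious, the "mental model" here is the McCulloch-Pitts neuron: a unit that fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch in Python (my own illustration, not from the article):

      # A McCulloch-Pitts neuron outputs 1 iff the weighted sum of its
      # binary inputs reaches the threshold. Suitable weights/thresholds
      # give AND, OR, NOT -- i.e. the units compute Boolean logic.
      def mp_neuron(inputs, weights, threshold):
          return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

      AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
      OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
      NOT = lambda a: mp_neuron([a], [-1], 0)

      assert AND(1, 1) == 1 and AND(1, 0) == 0
      assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1
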
  • by islon on 2/6/15, 3:43 PM

    "But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again." Assuming that 80% of humanity lives on less than $10 a day, and are, therefore, poor, I can just imagine the number of geniuses born poor that will not ever be able to show their geniality to the world.

    Imagine how the world would be a better place with all these people working as scientists, philosophers, mathematicians, etc.

  • by tripzilch on 2/9/15, 12:00 PM

    > Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend.

    Regarding "Nature had chosen ...", I wonder if this was actually how Pitts saw it (he seemed more clever than that), or whether it is the article's author's misconception that he considered there is in fact something in Nature that "chooses", instead of applying mechanistic rules entirely.

    It is as if the part of the story about the frogs is meant to show that Nature has a "spirit" after all, that evaded being captured in logic. I can't really fathom why Pitts, after all his history, would come to that conclusion. Just because the retina turned out to possess a certain amount of analog computing power?

  • by axilmar on 2/9/15, 5:01 PM

    The brain doesn't do logic, it does pattern matching, and selects the appropriate reaction based on the match that offers the biggest chances of survival.
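
    A toy sketch of that idea (entirely illustrative; the patterns and survival scores are made up):

      # Choose the stored pattern most similar to the stimulus, then take
      # the reaction whose match quality times survival value is largest.
      # No logic, just similarity and a payoff-weighted choice.
      patterns = {
          "looming shadow":   ("flee",   0.9),  # (reaction, survival value)
          "small moving dot": ("strike", 0.6),
          "still scene":      ("rest",   0.3),
      }

      def similarity(a, b):
          wa, wb = set(a.split()), set(b.split())
          return len(wa & wb) / len(wa | wb)

      def react(stimulus):
          best = max(patterns, key=lambda p: similarity(stimulus, p) * patterns[p][1])
          return patterns[best][0]

      print(react("shadow looming overhead"))  # -> flee
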
  • by gotrecruit on 2/6/15, 6:38 AM

    So basically Margaret Wiener was the Yoko of the scientific world?

  • by Animats on 2/6/15, 6:09 AM

    I thought that article was going to be about Leibniz, and his "let us calculate" approach to decision making.

    "What the Frog's Eye Tells the Frog's Brain" (http://neuromajor.ucr.edu/courses/WhatTheFrogsEyeTellsTheFro...) is still worth reading. It's the first paper on what is now called "early vision".

    I'm painfully familiar with that world view. I went through Stanford CS in 1983-1985, when logic-based AI was, in retrospect, having its last gasp. I took "Dr. John's Mystery Hour", Epistemological Problems in Artificial Intelligence, from John McCarthy. The logicians were making progress on solving problems once they'd been hammered into just the right predicate calculus form, but were getting nowhere in translating the real world into predicate calculus.

    For computer program verification, though, that stuff works. For a time, I was fascinated by Boyer-Moore theory and their theorem prover. They'd redone Russell and Whitehead with machine proofs. Constructive mathematics maps well to what computers can do. I got the Boyer-Moore theorem prover (powerful, could do induction, but slow) hooked up to the Oppen-Nelson theorem prover (limited, only does arithmetic up to multiplication by constants, but fast) and used the combination to build a usable proof-of-correctness system for a dialect of Pascal. It worked fine; I used to invite people to insert a bug into a working program and watch the system find it.
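
    For flavor, here's a toy version of that kind of check, written against the modern Z3 solver rather than the provers above (illustrative only, not the actual system):

      # Verification-condition check with Z3: the spec for "max of two
      # ints" says the result is >= both inputs and equal to one of them.
      from z3 import Int, Solver, If, Or, And, Not, sat

      x, y = Int("x"), Int("y")

      # Seeded bug: the branches are swapped, so this returns the smaller value.
      result = If(x > y, y, x)

      spec = And(result >= x, result >= y, Or(result == x, result == y))

      # Ask for a counterexample to the spec; "sat" means the bug is found.
      s = Solver()
      s.add(Not(spec))
      if s.check() == sat:
          print("bug found:", s.model())  # e.g. x = 1, y = 0
      else:
          print("verified")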

    But it was clear that approach wasn't going to map to the messiness of the real world. Working on proof of correctness for real programs made it painfully clear how brittle formal logic systems are. Nobody was going to get to common sense that way. The logicians were in denial about this for a long time, which resulted in the "AI winter" from 1985 to 2000 or so.

    Then came the machine learning guys, and progress resumed. Science progresses one funeral at a time.

  • by nanis on 2/5/15, 10:32 PM

    I thought that was Spinoza.
  • by javert on 2/6/15, 3:05 AM

    Ayn Rand?