by robgering on 2/5/15, 3:38 PM with 23 comments
by javajosh on 2/5/15, 10:19 PM
What an incredibly sad story - burning years of work in melancholy, all thanks to the lies of an angry woman. Wiener, though, shares a great deal of blame - a man should not pass judgement on his friends without inquiring into the truth of the matter.
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
by carapace on 2/6/15, 1:08 AM
R.I.P. Walter Pitts
(One thing that troubles me slightly: this article mentions a possible cause of the break between Wiener and the others that, if I remember correctly, is presented as speculation in the book but is stated here as bald fact. In any event, I wish that Wiener hadn't acted so rashly.)
by islon on 2/6/15, 3:43 PM
Imagine what a better place the world would be with all these people working as scientists, philosophers, mathematicians, etc.
by tripzilch on 2/9/15, 12:00 PM
Regarding "Nature had chosen ...", I wonder if this was actually how Pitts saw it (he seemed more clever than that), or whether it is the article's author's misconception that he considered there is in fact something in Nature that "chooses", instead of applying mechanistic rules entirely.
It is as if the part of the story about the frogs is meant to show that Nature has a "spirit" after all, that evaded being captured in logic. I can't really fathom why Pitts, after all his history, would come to that conclusion. Just because the retina turned out to possess a certain amount of analog computing power?
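(For context, since the article is ultimately about his model: the McCulloch-Pitts neuron is pure threshold logic, all-or-none. A minimal Python sketch, my own illustration rather than anything from the 1943 paper:

    # Toy McCulloch-Pitts unit: fires (1) iff the weighted sum of its
    # binary inputs reaches the threshold. My illustration, not the paper's.
    def mp_neuron(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    # With suitable weights and thresholds, such units compute Boolean logic:
    AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
    NOT = lambda a:    mp_neuron([a],    [-1],   0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 0) == 0 and OR(0, 1) == 1
    assert NOT(0) == 1 and NOT(1) == 0

The frog result showed the retina doing analog preprocessing that this all-or-none picture leaves out. That's an empirical correction to the model, not evidence that Nature "chooses".)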
by Animats on 2/6/15, 6:09 AM
"What the Frog's Eye Tells the Frog's Brain" (http://neuromajor.ucr.edu/courses/WhatTheFrogsEyeTellsTheFro...) is still worth reading. It's the first paper on what is now called "early vision".
I'm painfully familiar with that world view. I went through Stanford CS in 1983-1985, when logic-based AI was, in retrospect, having its last gasp. I took "Dr. John's Mystery Hour", Epistemological Problems in Artificial Intelligence, from John McCarthy. The logicians were making progress on solving problems once they'd been hammered into just the right predicate calculus form, but were getting nowhere in translating the real world into predicate calculus.
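To make "hammered into just the right predicate calculus form" concrete, here is the flavor of such an encoding - a blocks-world-style axiom of my own invention, sketched in Lean syntax rather than McCarthy's actual formalization:

    -- Hypothetical blocks-world vocabulary and one axiom (my toy sketch):
    axiom Block : Type
    axiom On : Block → Block → Prop
    axiom Clear : Block → Prop
    -- "A block is clear iff nothing is on it."
    axiom clear_iff : ∀ b : Block, Clear b ↔ ¬ ∃ c, On c b

Encodings like that work beautifully until the real world supplies an exception the axioms never anticipated.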
For computer program verification, though, that stuff works. For a time, I was fascinated by Boyer-Moore theory and their theorem prover. They'd redone Russell and Whitehead with machine proofs. Constructive mathematics maps well to what computers can do. I got the Boyer-Moore theorem prover (powerful, could do induction, but slow) hooked up to the Oppen-Nelson theorem prover (limited, only does arithmetic up to multiplication by constants, but fast) and used the combination to build a usable proof-of-correctness system for a dialect of Pascal. It worked fine; I used to invite people to put a bug in a working program and watch the system find it.
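For anyone who hasn't seen a machine proof, here's the flavor of a machine-checked induction - a toy example in modern Lean syntax, not the Boyer-Moore prover's Lisp, and nothing like the Pascal verifier itself:

    -- Toy inductive proof that 0 + n = n; the Boyer-Moore prover
    -- found inductions like this automatically.
    theorem zero_add' (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl                         -- base case: 0 + 0 = 0
      | succ k ih => rw [Nat.add_succ, ih]  -- step case uses the hypothesis for k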
But it was clear that approach wasn't going to map to the messiness of the real world. Working on proof of correctness for real programs made it painfully clear how brittle formal logic systems are. Nobody was going to get to common sense that way. The logicians were in denial about this for a long time, which resulted in the "AI winter" from 1985 to 2000 or so.
Then came the machine learning guys, and progress resumed. Science progresses one funeral at a time.