by jrlocke on 2/12/24, 10:06 PM with 279 comments
by ggm on 2/12/24, 11:01 PM
What I notice more and more is that "you're wrong" is used to buttress opinion masquerading as fact. Prefacing an assertion with "I think that…" doesn't stop the "you're wrong", but it at least moves the discussion into the realm of conjecture about things, including facts, rather than flat assertions of facts that are often not as factual as they seem.
I also notice that argument by analogy is being overused. Just because you want to compare your large single CPU to a multi-CPU system doesn't mean it actually is a bull compared to a herd of chickens. Or that cat-herding is actually much harder than it looks: you need the right kind of cream. Wait… that analogy might not work here…
by delichon on 2/12/24, 10:51 PM
I wish I could say the same about Freud, but that ladder is distressingly horizontal.
by atdt on 2/12/24, 11:08 PM
> Distributed understanding is a real phenomenon, but you have to get yourself into a community of communicators that can effectively summon the relevant expertise.
by FrustratedMonky on 2/12/24, 11:25 PM
Especially after all the debates on free will with Sapolsky.
Instead it ended up being a backhanded self-compliment, more like, "a lot of other great people agree with me, so maybe I'm wrong, but probably not".
"Descartes’s theory of everything is, even in hindsight, remarkably coherent and persuasive. It is hard to imagine a different equally coherent and equally false theory! He was wrong, and so of course I may well be wrong, but enough other thinkers I respect have come to see things my way that when I ask myself, “What if we are wrong?” I can keep this skeptical murmur safely simmering on a back burner."
by mtlmtlmtlmtl on 2/12/24, 11:40 PM
by gwd on 2/13/24, 1:53 AM
by CornCobs on 2/13/24, 12:59 AM
An important part of being able to truly ask oneself "Am I wrong?" is the humility to seriously consider an alternative. The author's treatment of ID research as a foregone conclusion, even with his acknowledgment in the next paragraph that we could be wrong, seems rather ironic. Isn't this precisely the kind of hubris he is calling out?
by ChrisMarshallNY on 2/13/24, 1:49 AM
I remember being told once, "Congratulations! It's your fault!". The thinking is that, if it's some[one|thing] else's fault, there's nothing I can do to change it, but if it's my fault, then I have the power to amend the situation.
In every conflict in my life, even when I am clearly in the right and the other party is clearly in the wrong, I always have something to address on my end. Sometimes I may even need to apologize for it, which can really suck.
In my coding, I have found that writing unit tests always finds bugs. It happened to me yesterday, in fact. Since the test ran through 35,000 records and took almost an hour, it was painful. I can't remember the last time I wrote unit tests that didn't find bugs in the CUT (code under test).
But I am now satisfied that the code I wrote is top-shelf.
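The pattern described above, sweeping the code under test across a large batch of generated records rather than a handful of hand-picked cases, can be sketched as a minimal Python `unittest`. Everything here is hypothetical (the `normalize_record` function and its fields are invented for illustration; the original commenter's code and data are not shown):

```python
import unittest

def normalize_record(record):
    """Hypothetical 'code under test' (CUT): cleans up a record's ID field."""
    return {"id": record["id"].strip().lower(), "value": record["value"]}

class TestNormalizeRecord(unittest.TestCase):
    def test_bulk_records(self):
        # Running the CUT over many generated records, in the spirit of the
        # 35,000-record run above, surfaces edge cases a single example misses.
        for n in range(1000):
            out = normalize_record({"id": f"  ID-{n}  ", "value": n})
            self.assertEqual(out["id"], f"id-{n}")
            self.assertEqual(out["value"], n)

if __name__ == "__main__":
    unittest.main()
```

Bulk tests like this trade runtime for coverage, which is exactly the pain (and the payoff) the comment describes.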
by DiscourseFan on 2/13/24, 2:15 AM
Well, would we call it a mistake if someone described what, empirically, was an ellipse, as a circle? The question itself "What if I'm wrong?" is flawed: we are always already wrong. But it is the wrongness which makes the world, for us; and to the extent our creations are false, to that same extent they are true. So why concern yourself with questions of true or false, right or wrong, Good and Evil? Go out, create your own truth, make the world anew...leave behind all this worrying over nothing.
by nonrandomstring on 2/12/24, 11:17 PM
by neilv on 2/13/24, 1:37 AM
Two professors from whom I was fortunate to learn, who did something like this in classes:
* Marvin Minsky (MIT) -- While he was researching The Emotion Machine, class sessions would often be him talking about whatever he'd been working on earlier that day, and related thoughts from his formidable knowledge, and people would ask questions and share information. For example, one day, general anesthesia came up, and a physician/surgeon who was sitting in on class that day added to that (something about, in some cases, the patient is conscious but doesn't remember after, which was a memorable idea to hear).
* Peter Wegner (Brown U.) -- He was working on theory of interactive models of computation (e.g., whether interacting objects were reducible to Turing Machines), and some days would put up drafts of a paper on a projector, for class discussion around them. IIRC, he'd first read sections of the paper, and then ask questions of the class around that. Of course, we learned more than he did, but perhaps we were also a helpful rubber duck on some ideas he was thinking through.
Also, drafts of textbooks are a thing: Leslie Kaelbling (then Brown U.) arranged to use draft copies of Norvig & Russell's intro AI book, which were two comb-bound volumes with unfinished bits, and IIRC we could feed back comments.
Which reminds me of the time I was taking classes at the community college, and the author of one of the textbooks was in the department (though not my instructor), so I wrote down some comments as I worked through the book. The author seemed kind and delighted to be getting book feedback from a student, even though I assume now that my comments weren't of any help.
by codeulike on 2/12/24, 11:16 PM
by wavemode on 2/12/24, 11:16 PM
I love this.
I go through similar experiences with software engineering. I notice some area of the field that appears overly complicated (build systems, CI/CD, version control, web frameworks, so on and so on) and start thinking to myself "Why all the complexity? Surely we could just-" and then I'm down a rabbit hole for weeks. The usual end result being I learn a lot of new things and discover for myself what all the complexity was for.
But hey, occasionally maybe I really do come up with a Next Big Thing.
by tsunamifury on 2/13/24, 12:25 AM
I struggle with this question in much the same way as the author, but in technology we are afforded less time to ponder whether we are wrong and more time to test whether we are wrong.
However, as the author points out with his students, even in testing we can be wrong in a fundamental way, such that all the branches of my iterations stem from the wrong source.
So I'm left with: who thinks I'm wrong, and why does that matter?
I'm finding that outside of Reddit, very few people will tell me I'm wrong, and this is deeply frustrating. Really only my wife, who is tired of my pondering, fully engages with what might be wrong with what I'm working on, and I'm thankful for that.
But I wish more people would help me be "constructively wrong", meaning they understand the goal but want to correct the approach.
Most people online merely want to point out irrelevant wrongness for sport.
by svat on 2/12/24, 11:29 PM
> I had found—and partly invented—a prodigious explanation-device that reliably devoured difficulties, day after day. The insights (if that is what they were) that I had struggled so hard to capture in my dissertation and my first book have matured and multiplied, generating answers to questions, solutions to problems, rebuttals to objections, and—most important—suggestions for further questions to ask, with gratifying consilience. I just turn the crank and out they pour, falling into place like the last pieces in a jigsaw puzzle. Perhaps my whole perspective is a colossal mistake—some of my critics think so—and perhaps its abundant fruits are chimeras.
by a_square_peg on 2/13/24, 2:04 AM
by enonimal on 2/13/24, 2:33 PM
XD
by EVa5I7bHFq9mnYK on 2/13/24, 2:30 AM
by thomastjeffery on 2/13/24, 12:01 AM
by JonChesterfield on 2/12/24, 11:36 PM
That's unsound. It prevents learning anything that is not widely known and simply explained.
I don't know the author; all the context I have is the article up to the point where I lost interest. But yes, if all you try to learn are the trivial things everyone agrees on, then for some circular definition of "wrong", you won't be wrong.
Bad strategy. The high-value things are the ones few people know. The highest-value things are the ones people "know" to be true that are not so.