by chromoblob on 6/21/24, 5:46 AM with 21 comments
Seriously. The 'existential' (epistemological + influential) power of AGI by definition (sorry, no definition; read it as "in my opinion") encompasses all of science, or at least some analog of science (one working somewhat differently than human science) in which there will nonetheless be analogs of all the human sciences. I want to put forth my opinion that it's foolish to search for some numb mathematical formalism, one not informed by philosophical theories, in the hope that it will generate all of science or something comparable. That's too primitive. Remember that scientists in different fields have different cognitive styles, and different sciences have different philosophies, methodologies, paradigms, different "styles of mental content". AGI by definition (or, in my opinion) will present a unified understanding of all of them from a sufficiently philosophically powerful/abstract philosophical foundation. So why not work on that theory?
by keiferski on 6/21/24, 8:12 AM
1. Sloppy, unclear thinking. I see this constantly in discussions about AGI, superintelligence, etc.: unclear definitions, bad arguments, speculation about a future religion "worshipping" AIs, treating sci-fi scenarios as somehow indicative of the field's future progress, on and on. It makes me long for the early-to-mid 20th century, when scientists and technicians were both technically skilled and philosophically educated.
2. The complete and utter lack of ethical knowledge, which in practice means AI companies adopt whatever flavor-of-the-day ideology is being touted as "ethical." Today that seems to be DEI, although it appears to have peaked; tomorrow it will be something else. The depth of "ethics of AI" or "AI safety" for most researchers seems to depend entirely on whatever society at large currently finds unpleasant.
I have been kicking around the idea of starting a blog/Substack about the philosophy of AI and technology, mostly because of this exact issue. My only hesitation is that I'm unclear on what the monetization model would be – and I already have enough work to do and bills to pay. If anyone would find this interesting, please let me know.
by aristofun on 6/21/24, 12:56 PM
In other words — here, as in many areas, there is no incentive to dig deep, while there are plenty of incentives to stay on the surface and tell scary stories about AGI doomsday to journalists who barely have writing skills, let alone any philosophical or logical foundation.
by nprateem on 6/21/24, 6:41 AM
See, for example, the Buddhist descriptions of the jhanas: progressive levels of consciousness in which meditators peel back the layers of their personality and ordinary human awareness and end up in pure awareness and beyond. It's hard to read about (and experience, albeit only the initial stage in my case) such things and not come away convinced that consciousness does not derive from thought, as philosophers like to believe (no, Descartes, you are not just because you think).
It's for this reason I don't buy the AGI hype. Maybe after fundamental breakthroughs in computation and storage allow better simulations, but not any time soon since these traditions tell us consciousness isn't emergent. Most AGI researchers are barking up the wrong tree. Still, the hype boosts valuations so perhaps it's in their best interests anyway.
Philosophers can get so wrapped up in thoughts that they say nonsense like "I can't comprehend not having an internal monologue", a state you can experience any time you watch a film, listen to music, etc. Anyone with even the smallest amount of meditation experience shouldn't fall into such thought traps.