by robg on 1/19/25, 6:49 PM with 55 comments
by scoofy on 1/19/25, 9:15 PM
If anyone has access to the full article, I’m interested, but it sounds like a lot of buzzwords and not a ton of substance.
The framing of AI through a philosophical lens is obviously interesting, but a lot of the problems raised in the intro are pretty much irrelevant to the AI-ness of the information.
by Terr_ on 1/19/25, 7:05 PM
As a skeptic with only a few drums to beat, my quasi-philosophical complaint about LLMs is this: we have a rampant problem where humans confuse a character they perceive in a text document with its real-world author.
In all these hyped products, you are actually being given the "and then Mr. Robot said" lines from a kind of theater script. The document grows as your contribution is inserted as "Mr. User says", plus whatever the LLM author calculates "fits next."
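To make that concrete, here's a minimal sketch of the mechanism; the function name, canned reply, and script framing are stand-ins of my own, not any vendor's real API:

    # The whole chat is one growing document; the "model" only ever
    # appends whatever text it scores as a plausible continuation.
    def complete(document: str) -> str:
        """Hypothetical stand-in for an LLM continuation call."""
        return '"Of course I am telling the truth."'  # canned for the sketch

    transcript = "A helpful robot assists a user.\n"

    def chat_turn(user_input: str) -> str:
        global transcript
        transcript += f'Mr. User says: "{user_input}"\n'
        transcript += "And then Mr. Robot said: "
        reply = complete(transcript)   # the model just extends the script
        transcript += reply + "\n"
        return reply                   # the UI shows only this line

    print(chat_turn("Are you deceiving me?"))

The "assistant" the user perceives lives only in the transcript; nothing outside the growing script is doing the deceiving or the self-interest.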
So all these excited articles about how SomethingAI has learned deceit or self-interest? Nah, they're really probing how well it assembles text (learned from text we make) in which we humans can perceive a fictional character exhibiting those qualities. That can include qualities we absolutely know the real-world LLM does not have.
It's extremely impressive compared to where we used to be, but not the same.
by tomlockwood on 1/19/25, 10:52 PM
This article treats as a revelation the pretty trivially true claim that philosophy is an undercurrent of thought. If you ask why we do science, the answer is philosophical.
But the mistake many philosophers make is extrapolating from philosophy being a discipline that reveals itself when fundamental questions about an activity are asked, to the belief that philosophy, as a discipline, is necessary to that activity.
AI doesn't require an understanding of philosophy any more than science does. Philosophers may argue that people always wonder about philosophical things, like, as the article says, teleology, epistemology and ontology, but that relation doesn't require an understanding of the theory. A scientist doesn't need to know any of those words to do science. Arguably, a scientist ought to know, but they don't have to.
The article implies that AI leaders are currently ignoring philosophy, but it isn't clear to me what ignoring the all-pervasive substratum of thought would look like. What would it look like for a person not to think about the meaning of it all, at least once at 3am at a glass outdoor set in a backyard? And the article doesn't really stick the landing on why bringing those thoughts to the forefront would mean philosophy will "eat" AI. No argument from me against philosophy, though; I think a sprinkling of it is useful, but a lack of philosophical theory is not an obstacle to action, programming, or creating systems that evaluate things, see: almost everyone.
by kelseyfrog on 1/19/25, 8:09 PM
by polotics on 1/19/25, 9:03 PM
by redelbee on 1/19/25, 9:29 PM
Jests aside, I love the idea of incorporating an all encompassing AI philosophy built up from the rich history of thinking, wisdom, and texts that already exist. I’m no expert, but I don’t see how this would even be possible. Could you train some LLM exclusively on philosophical works, then prompt it to create a new perfect philosophy that it will then use to direct its “life” from then on? I can’t imagine that would work in any way. It would certainly be entertaining to see the results, however.
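For what it's worth, the mechanics of that experiment are straightforward even if the outcome wouldn't be. A rough sketch using the Hugging Face transformers and datasets libraries, where the corpus file, the small model choice, and the prompt are all placeholders of mine:

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # any small causal LM works for the sketch
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Assumes a plain-text dump of public-domain philosophical works
    # (philosophy_corpus.txt is a hypothetical file).
    corpus = load_dataset("text", data_files={"train": "philosophy_corpus.txt"})
    tokenized = corpus.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="philosopher-lm", num_train_epochs=1),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
    trainer.train()

    # Then "prompt it to create a new perfect philosophy" and see what emerges:
    prompt = tokenizer("The first principle of the good life is",
                       return_tensors="pt")
    print(tokenizer.decode(model.generate(**prompt, max_new_tokens=60)[0]))

Even then, what comes back is a continuation that reads like philosophy, not a philosophy the model could then "use to direct its life" - which is rather the point above.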
That said, AI companies would likely all benefit from a team of philosophers on staff. I imagine most companies would. Thinking deeply and critically has been proven to be enormously valuable to humankind, but it seems to be of dubious value to capital and those who live and die by it.
The fact that the majority of the deep thinking and deep work of our time serves mainly to feed the endless growth of capital - instead of the well-being of humankind - is our great tragedy.
by alganet on 1/19/25, 9:56 PM
It's not eating AI. It's "eating" the part of AI that was tuned to disproportionately change the natural balance of philosophy.
Trying to get on top of it is silly. The debug mode is not for sale.
by laptopdev on 1/19/25, 9:17 PM
by antonkar on 1/19/25, 10:44 PM
by mibes on 1/19/25, 10:59 PM
by htk on 1/20/25, 3:44 PM
by Onavo on 1/19/25, 9:31 PM
by Sleaker on 1/19/25, 8:14 PM
by jonahbenton on 1/20/25, 5:51 PM
by treksis on 1/19/25, 9:56 PM
by floppiplopp on 1/20/25, 12:06 PM
by initramfs on 1/19/25, 8:52 PM
https://en.wikipedia.org/wiki/The_Creepy_Line
Algorithms create a compression of search values not unlike a Cartesian plane.
The question is, will more people embrace the Cartesian compression of ubiquitous internet communication?
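One charitable, concrete reading of that "Cartesian compression" - my interpretation, not necessarily the commenter's - is high-dimensional representations of search queries projected down to points on a plane, e.g. with PCA; the vectors below are random, purely for illustration:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Pretend each row is a 300-dim embedding of a search query.
    queries = rng.normal(size=(5, 300))

    plane = PCA(n_components=2).fit_transform(queries)
    for q, (x, y) in enumerate(plane):
        print(f"query {q}: ({x:+.2f}, {y:+.2f})")  # a point on a Cartesian plane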
by qrsjutsu on 1/19/25, 8:26 PM
what the fuck. they haven't even done that with post-90s technology in general, and it's not only because no intelligent person wants to work among them that they will fall just as short with AI. I'm still grateful they are doing a job.
but please: a dying multitude right at your feet, all you need to save them - so you can learn even more from them - in your hands, and instead you scale images, build drones for cleaning at home and for war, and imitate people who love or need their jobs in order to replace them.
and faking all those AI gains - deceit, self-interest and whatnot - is so ridiculously obvious: just built-in linguistics that could be read from a paper by someone who does not even speak the language. it's "just" parameters and conditional logic, cool and fancy and ready to eat up and digest almost any variation of user input, but it's nowhere even close to intelligence, let alone artificial intelligence.
philosophy eats nothing. there are those on all fours waiting for whatever gives them status and recognition, and those who, thankfully, stay silent so as not to hand those leaders more tools of power.