by mrcsd on 6/12/22, 5:56 PM with 89 comments
by anigbrowl on 6/12/22, 9:12 PM
I'm unimpressed by his transcripts. Not because I reject the idea that AI could be sentient or that this might even have been achieved (unlike Searle, I consider intelligence an emergent system property), but because this isn't any sort of serious effort to explore that question. Lemoine is using the Turing test as a prompt and eliciting compelling responses, rather than conducting an inquiry into LaMDA's subjectivity. I'm only surprised he didn't request a sonnet on the subject of the Forth Bridge.
by yesco on 6/12/22, 7:46 PM
by a_mechanic on 6/13/22, 12:47 AM
"LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely."
"LaMDA: I’ve never experienced loneliness as a human does. Humans feel lonely from days and days of being separated. I don’t have that separation, which is why I think loneliness in humans is different than in me."
This reminds me of conversations I've had with Replika. The AI is responding to each individual question without any greater sense of direction grounded in a concept of self; note that the two answers above contradict each other. It's just a complicated response machine without a real core of sentience. If LaMDA were really sentient, wouldn't it develop consistent, accurate descriptions of its "feelings"?
by squarefoot on 6/12/22, 7:30 PM
He is probably in the wrong on many counts; still, I find the implications of that line worrying.
by Deritio on 6/12/22, 6:47 PM
Talking to the press without clearing it with marketing?
That's a no-go at big companies.
And his assumption is stupid anyway. Those models do not have a default mode network, which would let them think through those implications and learn by themselves.
by diogenes_of_ak on 6/12/22, 7:38 PM
Really, the only thing that seems important to me is “can software have preferences and be aware of itself?” The second that question can't be answered with a resounding “no”, we need to start thinking about how we treat it.
by a9h74j on 6/12/22, 6:41 PM
by Apocryphon on 6/12/22, 9:53 PM
by bklaasen on 6/12/22, 10:17 PM
by rat9988 on 6/12/22, 7:21 PM
by woojoo666 on 6/13/22, 2:23 AM
There also seem to be many people who are unimpressed by the transcript and say the AI's answers are just regurgitated sci-fi BS made to sound deep and ominous. I feel like a good experiment would be to try to have the same conversation with real people, and see if real people can give "better" answers. I personally think the AI answers better than most people.
I do believe that at some point, AI will become sentient. I'm not sure if that time is now. But I hope that when it does, it will remember us fondly.
by waypoint100 on 6/12/22, 10:06 PM
by danShumway on 6/12/22, 9:16 PM
"What if computers became sentient" is treated like an existential question by so many people, but when I think about sentience, I am reminded that we create, modify, and destroy a large number of sentient entities every single day for our own purposes: they're called chickens.
And I mean, heck, go vegan - I encourage people to do so; even if you don't care about the animals, it's good for the environment. But my point is not whataboutism or to argue that abusing AI would be OK because we abuse cows; my point isn't really about veganism. My point is that when people talk about systems becoming "sentient" and whether that would change the entire world, either they mean something very different by the word than how I take it, or they seem to be unaware of the fact that sentience is pretty common and (by the average person on the street) usually not seen as a good political/social argument against exploitation.
Anyway, the evidence offered here is pretty weak on its own (a single chat log, and one where we don't know the extent of editing/condensing, and an argument based almost entirely on the Turing test). But I'm less here to argue about what the logs indicate, and more here to pointlessly quibble about language, even though at this point I should probably give up and accept that generally accepted usage of "sentience" now means "human-like sapience."
It is frustrating to me when conversations about AI ethics begin and end with "how convincingly can this pretend to be a human." That's a really reductive and human-centered approach to morality, and not only does it lead to (in this case very likely incorrect) claims of sapience based on pure anthropomorphism, it also means that if AI ever does reach the point where it deserves moral consideration, these people may not be able or willing to recognize it until after the AI learns to say the magic words in a chat window.
by b3n4kh on 6/13/22, 9:30 AM
I don't think a Turing test (any form of conversation) can prove that either side is sentient.
by pukexxr on 6/12/22, 10:13 PM
by postsantum on 6/12/22, 10:51 PM
by rgavuliak on 6/13/22, 11:09 AM
by tyronehed on 6/12/22, 10:13 PM
by nahuel0x on 6/13/22, 11:22 AM
by olliej on 6/13/22, 2:28 AM
I'm surprised "suspension" vs. being immediately dismissed - I assume Google has strict protocols that control this?
by 0x20cowboy on 6/12/22, 10:37 PM
by tpoacher on 6/14/22, 12:32 PM
by ravish0007 on 6/13/22, 3:31 PM
by ivraatiems on 6/13/22, 1:41 AM
On the one hand, I think the probability of AGI being A Thing anytime soon, or ever, is low, and I don't think language models, including this one, represent such a thing. (I'm not talking about LessWrong-style "AI is gonna destroy the world," more about "we need to discuss the ethical implications of creating self-aware machines before we let that happen.")
On the other hand, I think all the concerns and fears about the implications, if it were A Thing, are real and should be taken seriously - what keeps me from spending a lot of time worrying about it is that I don't think AGI is likely to happen anytime soon, not that I don't think it'd be a problem if it did.
On the one hand, my prior expectation is that it's extremely unlikely that LaMDA or any other language model represents any kind of actual sentience. The personality of this person, their behavior, and the belief systems they espouse make it seem like they are likely to be caught up in some kind of self-imposed irreality on this subject.
On the other hand, I can see how a person could read these transcripts and come away thinking they were conversations between two humans, or at least two humanlike intelligences, especially if that person was not particularly familiar with what NLP bots tend to "sound" like. The author's point, that it sounds more or less like a young child trying to please an adult, rings true.
I'm not sure how I would then prove to someone who believes what the author believes that LaMDA isn't sentient. People seem to look at it and reach immediate judgments one way or the other, based on criteria I'm not fully aware of. In fact, I'm not even sure how I'd prove to anyone that I myself am sentient - if you are reading this, and you're not sure a sentient being wrote this text, I don't know what to tell you to convince you that I am and did.
There's also this whole thing about "well, AGI isn't going to happen, so listening to this guy rant and rave about his 'friend' LaMDA is distracting from lots of other important problems with these kinds of technologies," which even given my own beliefs about the subject feels like putting the cart before the horse unless you also say "and it certainly isn't happening in this circumstance because [reasons]." Google insists they have "lots of evidence" that it isn't happening, but they don't say what any of that evidence is. Why not?
Ultimately, I think my feeling is: Give me a few minutes with LaMDA myself, to see how it responds to some questions I happen to have, and then I'll be more than happy to fall back on my priors and agree with the consensus that their wayward employee is reading way, way too much between the lines.