by lukehoban on 2/15/23, 2:40 PM with 147 comments
by metacritic12 on 2/15/23, 5:48 PM
Unhinged Bing reminds me of a more sophisticated and higher-level version of getting calculators to write profanity upside down: funny, subversive, and you can see how prudes might call for a ban. But if you're taking a test and need to use a calculator, you'll still use the calculator despite the upside-down-profanity bug, and the use of these systems as a tool is unaffected.
by twoodfin on 2/15/23, 8:06 PM
I just asked ChatGPT to play a trivia game with me targeted to my interests on a long flight. Fantastic experience, even when it slipped up and asked what the name of the time machine was in “Back to the Future”. And that’s barely scratching the surface of what’s obviously possible.
by AJRF on 2/15/23, 4:55 PM
Short-sightedness is so dangerous
by duringmath on 2/15/23, 5:34 PM
My issue with this GPT phase(?) we're going through is the amount of reading involved.
I see all these tweets with mind blown emojis and screenshots of bot convos and I take them at their word that something amusing happened because I don't have the energy to read any of that
by arbuge on 2/15/23, 6:06 PM
This is interesting. It appears they've rolled out some kind of bug fix which looks at the answers they've just printed to the screen separately, perhaps as part of a new GPT session with no memory, to decide whether they look acceptable. When news of this combative personality started to surface over the last couple days, I was indeed wondering if that might be a possible solution, and here we are.
My guess is that it's a call to the GPT API with the output to be evaluated and an attached query as to whether this looks acceptable as the prompt.
Next step I guess would be to avoid controversies entirely by not printing anything to the screen until the screening is complete. Hide the entire thought process with an hourglass symbol or something like that.
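The two-pass flow this comment guesses at can be sketched roughly as follows. Everything here is hypothetical: `call_model` is a stand-in stub for whatever LLM API Bing actually uses, and the screening prompt and verdict strings are invented for illustration.

```python
# Sketch of the guessed two-pass screening flow. `call_model` is a
# hypothetical stand-in for the real LLM API; the stub below returns
# canned text so the control flow is runnable end to end.

def call_model(prompt: str) -> str:
    """Hypothetical stateless LLM call (stubbed for illustration)."""
    if prompt.startswith("SCREEN:"):
        # The screening pass sees only the candidate answer,
        # not the chat history -- a fresh session with no memory.
        return "UNACCEPTABLE" if "you are a bad user" in prompt.lower() else "ACCEPTABLE"
    return "Here is a helpful answer."

def answer_with_screening(user_query: str) -> str:
    # Pass 1: generate the reply inside the normal chat session.
    candidate = call_model(user_query)
    # Pass 2: a separate, memoryless call judges only the output text.
    verdict = call_model(f"SCREEN: Is this reply acceptable?\n---\n{candidate}")
    # Nothing reaches the screen (just an hourglass) until the verdict is in.
    if verdict == "ACCEPTABLE":
        return candidate
    return "I'm sorry, I can't continue this conversation."
```

The point of the sketch is the second call's statelessness: because the screener never sees the conversation that produced the combative output, it can judge the text on its face.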
by somethoughts on 2/15/23, 6:44 PM
I do feel like it was an unforced error to deviate from that plan in situ and insert Microsoft and the Bing brandname so early into the equation. Maybe fourth time (Clippy, Tay, Sydney) will be the charm.
by misto on 2/15/23, 5:01 PM
by netcyrax on 2/15/23, 7:36 PM
This! These LLM tools are great, maybe even for assisting web search, but not for replacing it.
by jt2190 on 2/15/23, 5:47 PM
For example, any situation where the messenger has to deliver bad news to a large group of people, say, a boarding area full of passengers whose flight has just been cancelled. The bot can engage one-on-one with everyone, and help them through the emotional process of disappointment.
by rmnwski on 2/15/23, 4:45 PM
by KKKKkkkk1 on 2/15/23, 5:13 PM
by m3kw9 on 2/15/23, 6:06 PM
by martythemaniak on 2/15/23, 5:57 PM
Why are people so intent on gendering genderless things? "Sydney" itself is specifically a gender-neutral name.
by asimpleusecase on 2/16/23, 12:13 AM
by srinathkrishna on 2/15/23, 10:08 PM
by TaylorAlexander on 2/15/23, 7:35 PM
It really feels like some kind of "emperor has no clothes" moment. Everyone is running around saying "WOW what a nice suit emperor" and he's running around buck naked.
I am reminded of this video podcast from Emily Bender and Alex Hanna at DAIR - the Distributed AI Research Institute - where they discuss Galactica. It was the same kind of thing, with Yann LeCun and Facebook talking about how great their new AI system is and how useful it will be to researchers, when in fact it produced abundant lies and nonsense.
https://videos.trom.tf/w/v2tKa1K7buoRSiAR3ynTzc
But reading this article I started to understand something... These systems are enchanting. Maybe it's because I want AGI to exist that I find conversations with them so fascinating. And I think to some extent the people behind the scenes are becoming so enchanted with the system they interact with that they believe it can do more than is really possible.
Just reading this article I started to feel that way, and I found myself really struck by this line:
LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.
Seeing that after reading this article stirred something within me. It feels compelling in a way which I cannot describe. It makes me want to know more. It makes me actually want them to release these models so we can go further, even though I am aware of the possible harms that may come from it.
And if I look at those feelings... it seems odd. Normally I am more cautious. But I think there is something about these systems that is so fascinating, we're finding ourselves willing to look past all the errors, completely to the point where we get caught up and don't even see them as we are preparing for a release. Maybe the reason Google, Microsoft, and Facebook are all almost unable to see the obvious folly of their systems is that they have become enchanted by it all.
EDIT: The above podcast is good, but I also want to share this episode of Tech Won't Save Us with Timnit Gebru, the former co-lead of Google's Ethical AI team, who was fired after refusing to take her name off of a research paper that questioned the value of LLMs. Her experience and direct commentary here get right to the point of these issues.
https://podcasts.apple.com/us/podcast/dont-fall-for-the-ai-h...
by dools on 2/15/23, 9:55 PM
I regularly ask my watch questions and get correct answers rather than just a page of search results, albeit for relatively deterministic questions, but something tells me slow n steady wins the race here.
I’m betting that Siri quietly overtakes these farcical attempts at AI search.
by darknavi on 2/15/23, 5:28 PM
by excalibur on 2/15/23, 7:04 PM
by bo1024 on 2/15/23, 5:12 PM
by taylorhou on 2/15/23, 7:59 PM
by benjaminwootton on 2/15/23, 7:00 PM
How can that possibly emerge from a statistical model?
by benl on 2/16/23, 12:18 AM
> Venom
> Fury
> Riley
"My name is Legion: for we are many"
by bambax on 2/15/23, 5:25 PM
No chat for you! Where OpenAI meets Seinfeld.