by js4 on 11/19/23, 3:32 AM with 11 comments
by yosito on 11/19/23, 4:06 AM
by harrisoned on 11/19/23, 4:06 AM
I am really not concerned about the first aspect. The second one is a lot harder to guarantee, considering how these LLMs work. People will push them to the limit, and when there is no one to blame, they will blame the company that made the product. I am concerned that they will sacrifice usability and accuracy, and basically neuter the product, because of that notion of safety.

It's already happening, especially with DALL-E 3, where they pre-process your prompts at the API endpoint. That shows how scared they are of it being misused and how bad that could be for them, since the user isn't even allowed to be responsible for their own prompt. Building a complex tool like that which is safe in that sense, devoid of any means of misuse, is very, very hard to do without making it bleak. I really hope something changes along the way to fix that.
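You can actually watch the rewriting happen: for DALL-E 3, the images endpoint returns a revised_prompt field alongside each image, showing what the server actually generated from. A minimal sketch with the official openai Python SDK (v1.x); the prompt text here is just an arbitrary placeholder:

    # Minimal sketch: observe DALL-E 3's server-side prompt rewriting.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = "a photo of a crowded city street at night"
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,              # dall-e-3 only accepts n=1
        size="1024x1024",
    )

    print("sent:     ", prompt)
    # The API reports how it rewrote the prompt before generating:
    print("rewritten:", response.data[0].revised_prompt)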
by thomassmith65 on 11/19/23, 4:32 AM
So the rumor that OpenAI will cave and bring back Altman worries me a bit.
AI probably will be, at a minimum, as world-changing as the automobile or telephone, so God help us all if the company behind it operates like Facebook or Google.
by altpaddle on 11/19/23, 3:48 AM
However, honestly, I think recent events give me more hope that things will go well. My impression from Sam's comments and interviews is that he's closer to the techno-maximalist, move-fast-and-break-things mindset than someone who really takes AI risks seriously.
I think this fight was going to happen sooner or later, and it's better that OpenAI split up into a safety-focused team and a commercialize-and-move-fast team. These two viewpoints are probably irreconcilable.
by h2odragon on 11/19/23, 3:36 AM
If the corporate drama / PR kabuki that this one company has gone through over the past couple of days has markedly changed your opinion on the question of "AI safety", then I suggest you may have needed to examine that question in more depth before the drama anyway.
by wmf on 11/19/23, 3:33 AM
by crazygringo on 11/19/23, 4:45 AM
The conversations about safety regarding AGI seem entirely hypothetical at this point. AGI is still so far away that I don't see how it's relevant to OpenAI at the moment.
Whereas safety with respect to ChatGPT... no, I'm not particularly concerned. It can't really tell you anything that isn't on the internet already, and as long as they continue to put reasonable guardrails around its behavior, LLMs don't seem particularly threatening.
Honestly I'm far more worried about traditional viral political disinformation produced by humans, spread through social media.
In other words, it's the distribution of malicious content that continues to worry me, not its generation.
by ActorNightly on 11/19/23, 8:29 AM
LLMs are a better Google. A better Google isn't going to do anything, just like the jump from Google 20 years ago to today's Google didn't really do much.
AGI (in the sense that an AI can figure out how to take over everything) is pretty much not possible without some major mathematical discovery along the lines of P=NP.
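For reference, the open question being invoked: it is known that P is contained in NP, and the kind of discovery the comment seems to have in mind would be on the order of resolving whether that containment is strict. In LaTeX:

    % The P vs NP question: can every problem whose solutions are
    % efficiently verifiable also be solved efficiently?
    \mathrm{P} \subseteq \mathrm{NP}, \qquad \mathrm{P} \stackrel{?}{=} \mathrm{NP}

If P differs from NP, as is widely believed, worst-case search over exponentially many candidates (e.g., the 2^n truth assignments of an n-variable SAT instance) does not collapse to polynomial time.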
by throwaway318 on 11/19/23, 4:29 AM
by voidfunc on 11/19/23, 3:42 AM