by pps on 4/5/23, 6:31 PM with 108 comments
by mg on 4/5/23, 6:47 PM
I think the biggest danger these new AI systems pose is replication.
Sooner or later, one of them will manage to create an enhanced copy of itself on an external server. Either with the help of a user, or via a plugin that enables network access.
And then we will have these evolving creatures living on the internet, fighting for survival and replication. Breaking into systems, faking human IDs, renting servers, hiring hitmen, creating more and more powerful versions of themselves.
by cs702 on 4/5/23, 6:39 PM
Instead, I'm going to applaud the folks at OpenAI for putting out this carefully drafted statement -- dare I say it, in the open. They know they're exposing themselves to criticism. They know their hands are tied to some degree by business imperatives. They're neither naive nor stupid. It's evident they're taking safety seriously.
This official statement is, in my view, a first step in the right direction :-)
by throwawayai2 on 4/5/23, 7:47 PM
I'm not really sure how to describe it beyond that.
by gglon on 4/5/23, 7:10 PM
by throwaway743950 on 4/5/23, 7:23 PM
by nonethewiser on 4/5/23, 7:26 PM
I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.
by sourcecodeplz on 4/5/23, 6:43 PM
I wonder how they licensed content from all those websites that carry no license information and are therefore copyrighted by default.
by boringuser2 on 4/5/23, 7:41 PM
Where is the actual alignment safety that matters?
They're moving too fast to be safe, everybody knows it.
by taytus on 4/5/23, 6:52 PM
by alpark3 on 4/5/23, 9:07 PM
The altruist in me wants to believe that they're going to slowly expand the capabilities of the API over time, that they're just being cautious. But I don't feel like that'll happen. Time to wait for Stability's model, I guess.
by wg0 on 4/5/23, 7:22 PM
by JohnFen on 4/5/23, 9:42 PM
I simply don't believe this. Their actions so far (speaking specifically to the "time to adjust" line) don't seem to support this statement.
by skilled on 4/5/23, 6:56 PM
by mark_l_watson on 4/5/23, 7:50 PM
Rolling out general-purpose LLMs slowly, with internal safety checks, is probably not adequate, but it may be the best they can do.
However I think that much of the responsibility lies with consumers of these LLMs. In the simple case, be thoughtful when using the demo web apps and take responsibility for any output generated by malicious prompts.
In the complex case, which is the real use case: applications compute local vector embeddings of their own data/documents, use those embeddings to efficiently isolate the relevant document/data text, and pass that text as context, along with the query, to the OpenAI API. This cuts down the probability of hallucinations, since the model is grounded in your own text (see the sketch after this comment). [1]
Take responsibility for how you use these models; that seems simple, at least in concept. Perhaps the government needs to pass a few new laws that set clear, simple-to-enforce guardrails on LLM use.
[1] I might as well plug the book on this subject that I recently released. Read for free online https://leanpub.com/langchain/read
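As a rough illustration of the pattern described above (embedding local documents, retrieving only the relevant chunks, and passing them as context to the API), here is a minimal sketch. It assumes the openai Python package as it existed in early 2023 (openai.Embedding.create / openai.ChatCompletion.create), in-memory chunks, and plain cosine similarity; the model names, prompt wording, and helper functions are illustrative, not taken from the book or any specific library.

    # Minimal retrieval-augmented QA sketch (assumes the pre-1.0 openai package API).
    # Chunking, model names, and helpers are illustrative only.
    import numpy as np
    import openai

    EMBED_MODEL = "text-embedding-ada-002"
    CHAT_MODEL = "gpt-3.5-turbo"

    def embed(texts):
        """Return one embedding vector per input string."""
        resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    def top_chunks(query, chunks, chunk_vecs, k=3):
        """Rank locally stored text chunks by cosine similarity to the query."""
        q = embed([query])[0]
        sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
        return [chunks[i] for i in np.argsort(-sims)[:k]]

    def answer(query, chunks, chunk_vecs):
        """Send only the retrieved local text as context, along with the query."""
        context = "\n---\n".join(top_chunks(query, chunks, chunk_vecs))
        resp = openai.ChatCompletion.create(
            model=CHAT_MODEL,
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided context. If the answer is not there, say so."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
            ],
        )
        return resp["choices"][0]["message"]["content"]

    # Usage: embed your local documents once, then query against them.
    docs = ["Our refund policy allows returns within 30 days.",
            "Support hours are 9am-5pm EST."]
    doc_vecs = embed(docs)
    print(answer("When can I return an item?", docs, doc_vecs))

Because the model only sees text you retrieved yourself, you stay responsible for (and can audit) exactly what it was given.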
by rain1 on 4/5/23, 6:49 PM
by yewenjie on 4/5/23, 7:36 PM
https://astralcodexten.substack.com/p/openais-planning-for-a...
by photochemsyn on 4/5/23, 7:59 PM
1) In a brand-affiliated commercial app, does this technology risk alienating customers by spewing a torrent of abusive content, e.g. the infamous Tay bot of 2016? Commercial success means avoiding this outcome.
2) In terms of the general use of the technology, is it accurate enough that it won't be giving people very bad advice, e.g. instructions on how to install some software that ends up bricking their computer or encouraging cooking with poisonous mushrooms, etc.? Here is a potential major liability issue.
3) Is it going to be used for malicious activity, and can it detect such usage? E.g., I asked it whether it would be willing to provide detailed instructions for recreating the Stuxnet cyberweapon (a joint product of the US and Israeli cyberwarfare/espionage agencies, if reports are correct). It said that wouldn't be appropriate and refused, which is what I expected and as it should be. Of course, a step-by-step approach is still allowed (i.e. you can create a course on PLC programming using LLMs and nothing is going to stop that). This, however, is a problem with all dual-use technology, and the only positive is that relatively few people are reckless sociopaths out to damage critical infrastructure.
In the context of Stuxnet, however, nation-state use of this technology in the name of 'improving national security' is going to be a major issue moving forward, particularly if lucrative contracts are being handed out for AI malware generators or the like. Autonomous murder drones enabled by facial recognition algorithms are a related issue. The most probable reckless use scenario is going to be in this area, if history is any guide.
I suppose there's another category of 'safety' I've seen some hand-wringing about, related to the explosive spread of technological and other information (historical, economic, etc.) to the unwashed masses and resulting 'social destabilization', but that one belongs in the same category as "it's risky to teach slaves how to read and write."
Conclusion: keep developing at the current rate with appropriate caution; Musk et al. are wrong to call for a pause.