by Neuro_Gear on 3/25/23, 4:21 PM with 10 comments
by Manjuuu on 3/25/23, 4:45 PM
Btw, only the last 4 digits of the card number were accessible. Official announcement: https://openai.com/blog/march-20-chatgpt-outage
by mdmglr on 3/25/23, 5:18 PM
So OpenAI should take cybersecurity seriously. Credit card details are nothing compared to the chat logs; those will be of high value.
Also, I’ve seen the idea floating around, especially with typed languages like TypeScript, that developers write just the signature of a function and have GPT/Copilot implement it. If developers trust the output and don’t review it, what are the chances someone can trick GPT into producing unsafe code? There are attack vectors via the chat interface, the training data, and physical attacks via employees, e.g. phishing an OpenAI employee to gain covert access to the infra/model.
If I were an intelligence agency, gaining covert access to OpenAI's backend would be a primary objective.
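To make that concrete, here is a hypothetical TypeScript sketch (my own example, not from the announcement): a signature-only stub plus the kind of plausible-but-unsafe completion that could slip past a developer who never reads the body.

    // Hypothetical: the developer writes only the signature and lets the model fill in the body.
    function renderComment(userInput: string): string {
      // Plausible but unsafe completion: user input is interpolated straight into HTML,
      // so a value like '<img src=x onerror="alert(1)">' becomes stored XSS.
      return `<li class="comment">${userInput}</li>`;
    }

    // What a reviewed version would do instead: escape the input first.
    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
    }

    const payload = '<img src=x onerror="alert(1)">';
    console.log(renderComment(payload));                            // unsafe output
    console.log(`<li class="comment">${escapeHtml(payload)}</li>`); // safe output

Both functions type-check and "work" on benign input, which is exactly why an unreviewed completion is easy to trust.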
by Alifatisk on 3/25/23, 6:13 PM
Update: Nvm, it was redis-py. https://openai.com/blog/march-20-chatgpt-outage
by Neuro_Gear on 3/25/23, 4:43 PM
by 4ndrewl on 3/25/23, 4:42 PM