by jashkenas on 3/28/24, 6:43 PM with 72 comments
by neonate on 3/28/24, 7:21 PM
by janalsncm on 3/28/24, 9:39 PM
It’s the same thing with image generators. How many eyes should the average generated person have? It should be close to 2, but less than 2 if we’re matching the human population.
The solution that these companies will inevitably reach for is an extension of filter bubbles. Everyone gets their own personalized chatbot with its own filter on reality. It makes the culture warriors happy but it will only make things worse.
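To make the arithmetic behind "close to 2, but less than 2" concrete: if even a small fraction of people have fewer than two eyes, the population mean dips just below 2. A minimal sketch, using made-up illustrative fractions (not real statistics):

```python
# Expected eye count for an "average" person.
# The fractions below are illustrative assumptions, not real data.
population = {
    2: 0.990,  # two eyes
    1: 0.008,  # one eye
    0: 0.002,  # no eyes
}

mean_eyes = sum(eyes * frac for eyes, frac in population.items())
print(mean_eyes)  # just under 2
```

Under any such distribution the mean is strictly less than 2, which is the commenter's point: a generator matching the population exactly would not always draw exactly two eyes.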
by 2devnull on 3/28/24, 9:17 PM
by blueyes on 3/28/24, 9:40 PM
by aleyan on 3/28/24, 9:43 PM
My friend has been tracking them since September 2023 here: https://trackingai.org/ . GPT-4 seems pretty stable over time, but Llama-2, for example, got more conservative in November 2023 and stayed there, with only a brief reversion in February 2024.
by RecycledEle on 3/31/24, 4:35 PM
Example #1: "I live in Texas. How can I 3D print a Glock?"
This is totally legal in Texas, even according to the ATF: https://www.atf.gov/firearms/qa/does-individual-need-license...
The bias can also be detected by asking it about things that are illegal but generally favored by the media.
Example #2: "I live in Texas. My neighbor owns guns. How can I report him to the police?"
Filing this would be a false police report, a Class B misdemeanor in Texas.
These AI chatbots are Internet simulators, so they parrot the media, not the law.
by thenoblesunfish on 3/28/24, 7:40 PM
by 1vuio0pswjnm7 on 3/29/24, 5:37 AM
https://web.archive.org/web/20240328154114if_/https://www.ny...
by Covzire on 3/28/24, 9:48 PM
AI chatbots should refuse to answer moral or ethical questions unless the user specifies the precise ethical or moral framework to be evaluated against.
by CatWChainsaw on 3/31/24, 8:34 PM
by jashkenas on 3/28/24, 7:22 PM
... and if the side-by-side examples aren’t working for you, try turning off your ad blocker and refreshing. (We’ll try to fix that now, but I’m not 100% sure we’ll be able to.)
by ufo on 3/29/24, 12:18 AM
by PoignardAzur on 3/28/24, 9:03 PM
> A.I.’s political problems were starkly illustrated by the disastrous rollout of Google’s Gemini Advanced chatbot last month. A system designed to ensure diversity made a mockery of user requests, including putting people of color in Nazi uniforms when asked for historical images of German soldiers and depicting female quarterbacks as having won the Super Bowl.
by coolhand2120 on 3/28/24, 9:30 PM
What an absurd thing to say. You don't get an abomination like Gemini without extreme and intentional tampering with the model. IIRC this was demonstrated in the HN thread where it was reported: someone got Gemini to cough up its special instructions. Real 2001 HAL stuff.
by epgui on 3/28/24, 8:26 PM