by quietthrow on 11/19/23, 6:02 PM with 6 comments
Can’t help but connect the dots of recent events at OpenAI, where the board fired Altman for what seems like a lack of alignment between profit motives and nonprofit (safety-first) goals. There appears to be a tension between folks who want to advance AI fast (for profit, by capturing more market share) and folks who want to slow the rate of progress for safety reasons (let the world get ready, regulation- and ethics-wise, for something so powerful).
Ironically, it seems the real threat to humans at this time is not AI but humans’ greed and/or lust for power.
The core of the question: is there such a thing as progress that is too fast for one’s own good? Are there examples in history where fast progress has backfired? Would like to see some comparisons from folks well versed in history.
by siva7 on 11/19/23, 6:15 PM
by brucethemoose2 on 11/19/23, 6:24 PM
It appears safety tools are better developed as wrappers on a case-by-case basis.
Also, all the noise about emerging AGI is just fearmongering. Near-future multimodal LLMs are extremely dangerous, but they are not AGI.
by andrewfromx on 11/19/23, 6:05 PM