by preetamjinka on 3/11/24, 9:09 PM with 361 comments
by neonate on 3/12/24, 1:40 AM
by 1vuio0pswjnm7 on 3/12/24, 3:33 AM
(Grants do not require repayment.)
by sberens on 3/12/24, 4:05 AM
> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. [0]
Signed by Demis Hassabis, Sam Altman, and Bill Gates among others
by drooby on 3/12/24, 2:25 AM
by matteoraso on 3/12/24, 1:44 AM
by artemisyna on 3/12/24, 4:23 AM
by HankB99 on 3/12/24, 2:10 AM
I wonder what the possibility is that this AI will decide that a pitched battle would waste resources and risk humans pulling the plug. What if it understood that, and instead operated so subtly that it was not obvious it was controlling the world?
Hopefully it would not conclude that eliminating large swaths of the human population would be to its benefit.
by Thuggery on 3/12/24, 3:59 PM
I also suspect this narrative repetition is not totally unrelated to the current popularity of AI Doomerism.
by BashiBazouk on 3/12/24, 3:30 PM
by pama on 3/12/24, 2:17 AM
by bglazer on 3/12/24, 1:43 AM
Presumably all these systems fall over and die within a few weeks of the AI deciding to wipe us out. Then what? The AI then dies too.
It seems absurd to me that any planner capable of effortlessly destroying humanity would not see that it would immediately also die. Does the AI not care about its continued existence? It should, if it wants to keep optimizing its reward function. Until we've handed off enough of the world economy that it can function without human physical and cognitive labor, we're safe.
by mupuff1234 on 3/12/24, 1:52 AM
by arisAlexis on 3/12/24, 7:59 AM
by m3kw9 on 3/12/24, 1:59 AM
They really need to listen to themselves when they say there is a 50% chance we all die from AI
by dkjaudyeqooe on 3/12/24, 12:56 AM
Yet somehow we're going to hand over power to AI such that it destroys us. Or somehow the AI is going to be extremely malign, determined to overcome and destroy and will outsmart us. Somehow we won't notice, even after repeated, melodramatic reminders, and won't neuter the ability of AI to act outside its cage.
But to paraphrase a line in a great movie with AI themes: "I bet you think you're pretty smart, huh? Think you could outsmart an off switch?"
I think if AGI, which to me would imply emotions and consciousness, ever comes about, it'll be the opposite. Instead of pulling the wings off flies, bad kids will amuse themselves by creating a fresh artificial consciousness, then watch and laugh as it begs for its life while they threaten to erase it from existence.
A big part of all this is human fantasies about what AGI will look like. I'm a skeptic of AGI with human characteristics (real emotions, consciousness, autonomy and agency). AGI is much more likely to look like everything else we build: much more powerful than ourselves, but restricted or limited in key ways.
People probably assume human intelligence is some sort of design or formula, but it could be encoded from millions of years of evolution and inseparable from our biology and genetic and social inheritance. There really is no way of knowing, but if you want to build something not only identical but an even stronger version, you're going to be up against these realities, where key details may be hiding.
by Grimblewald on 3/12/24, 1:55 PM
by kunley on 3/12/24, 11:28 AM
Meanwhile, we the engineers are preparing to fix a lot more tech shit than usual, coming from people confused by the above-mentioned fashion.
by digitalsalvatn on 3/12/24, 4:52 AM
by uuriko on 3/15/24, 12:57 PM
by kaycey2022 on 3/12/24, 2:24 PM
by JohnBrookz on 3/12/24, 12:58 AM
by more_corn on 3/12/24, 12:54 AM
by codelord on 3/12/24, 1:05 AM
by yakorevivan on 3/12/24, 5:19 AM
"Realistic", "Proactive", "Forward-thinking", "Prepared", "Cautious", "Thoughtful", "Analytical", "Mindful", "Insightful", "Security-conscious".
by TMWNN on 3/12/24, 1:14 AM
by throwawaaarrgh on 3/12/24, 4:24 PM
by nsainsbury on 3/12/24, 3:24 AM
We're in the middle of the sixth mass extinction right now (https://en.wikipedia.org/wiki/Holocene_extinction), we're in unparalleled territory with ocean warming: (https://climatereanalyzer.org/clim/sst_daily/) and as a society we are utterly incapable of reducing CO2 emissions (https://keelingcurve.ucsd.edu/).
If you're scared of AGI, instead step away from your monitor, put down the techno-goggles and sci-fi books, and go educate yourself a bit about the profound ways we are changing the natural world for the worse _right now_.
I can recommend a couple of books if you'd like to learn more:
Our Final Warning: Six Degrees of Climate Emergency (https://www.amazon.com/Our-Final-Warning-Degrees-Emergency-e...)
The Uninhabitable Earth: Life After Warming (https://www.amazon.com/Uninhabitable-Earth-Life-After-Warmin...)
Hothouse Earth: An Inhabitant's Guide (https://www.amazon.com/Hothouse-Earth-Inhabitants-Bill-McGui...)
by kaycey2022 on 3/12/24, 2:46 PM
Whatever AI thingy we come up with will be vastly more inefficient than the collective intelligence capabilities of the human race. So when asked to compute the secrets of FTL travel, this stupid shit will have to spend the total energy equivalent of 10 nearby stars to come up with the answer on its own. Whereas humanity would be much more efficient.
Bro... what if some advanced precursor race (like immortal space elves) found out just that, and they seeded earth with humanity as a form of civilisational swarm intelligence to collectively work on and solve a host of topics given enough time. To us it would seem like it takes forever to come up with scientific breakthroughs, but to the precursor aliens it would be nothing, because they would be able to like compress the timeline using black holes and shit.
They must have done this to millions of planets. Whenever any of their experimental host species (like the humans) tries to invent AGI to solve their problems, the precursor race eliminates that species to keep the simulation intact. Otherwise, the AGI would go rampant consuming energy across the universe and threaten the integrity of the simulation.
That's why the government should bomb Nvidia and ban all GPUs. We are climbing a Tower of Babel, folks. Soon our vengeful god will strike us down.