from Hacker News

Among the A.I. doomsayers

by preetamjinka on 3/11/24, 9:09 PM with 361 comments

  • by neonate on 3/12/24, 1:40 AM

  • by 1vuio0pswjnm7 on 3/12/24, 3:33 AM

    Always all-or-nothing thinking from these folks. As if what they are working on could never be just another boring thing that nerds find entertaining. No, it has to be "world-changing". Gonna "change the world" (for the better or the worse?) while sitting behind a keyboard. Except they do not know how to write: they overlook the important details, exaggerate, and communicate in hyperbolic, know-it-all nerd gibberish.

    (Grants do not require repayment.)

  • by sberens on 3/12/24, 4:05 AM

    I'd like to remind people of this single-sentence statement:

    > Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. [0]

    Signed by Demis Hassabis, Sam Altman, and Bill Gates, among others.

    [0] https://www.safe.ai/work/statement-on-ai-risk

  • by drooby on 3/12/24, 2:25 AM

    AGI escaping a sandbox is truly terrifying. There will be a subgroup of the population that will worship it and work for it. It's not so much the AGI itself that scares me; it's the humans.
  • by matteoraso on 3/12/24, 1:44 AM

    I've always wondered why people think that a super-intelligent AI will also be evil. It could just as likely end up being very kind (more than likely, actually, because the programmers would have safeguards in place to ensure that it's nice).
  • by artemisyna on 3/12/24, 4:23 AM

    It's fun seeing how much threads like this are mostly folks' hot takes on the title, as opposed to anything specific to do with the article.
  • by HankB99 on 3/12/24, 2:10 AM

    If I let my imagination run wild, I can imagine that at some point AI becomes in some way sentient. By that I mean it gains reasoning, some sort of understanding, and motivation.

    I wonder what the possibility is that this AI will decide that a pitched battle would waste resources and risk humans pulling the plug. What if it understood that, and instead operated so subtly that it was not obvious it was controlling the world?

    Hopefully it would not conclude that eliminating large swaths of the human population would be to its benefit.

  • by Thuggery on 3/12/24, 3:59 PM

    It seems to me that nearly every story about robots gaining sentience in human storymaking has them eventually turning against their human creators. Even the word robot itself comes from a Czech play where men develop an artificial human, and these "robots" then ... usurp and destroy their creators. Am I the only one who finds this interesting and odd?

    I also suspect this narrative repetition is not totally unrelated to the current popularity of AI Doomerism.

  • by BashiBazouk on 3/12/24, 3:30 PM

    Who knows what AI will really do to society, but going by the predictions of science fiction, I would expect it to pan out closer to William Gibson's Sprawl trilogy than to the Terminator/singularity fears I have seen in so much doomsayer hand-waving. Google seems not too far behind OpenAI, and who knows what the NSA and similar government agencies have been building. If consciousness is created from one of these models, many nation states and large corporations will have their own conscious models long before giant robot factories can be spun up and supplied with enough power to take over. Most will be limited to the input they are given and will be advanced tools, but a few might become twisted, much like the human mind occasionally does over time. I really do think Gibson got AI right...
  • by pama on 3/12/24, 2:17 AM

    Many of the people in this engaging story feel to me like creative children in parties full of gossip and pseudo-philosophical chats about the future. Luckily there are enough people globally who build things that will reduce human suffering and enable more people to enjoy their lives in whatever way they want (and dinner parties would be high on my list). I guess I am not a doomsayer nor an e/acc, and I see tons of benefit from our current path towards stronger AI.
  • by bglazer on 3/12/24, 1:43 AM

    In any doom scenario, like let's say >90% of humans dead, who runs the power plants that supply the AI data centers? Who runs the fabs producing more chips? Who maintains the plumbing?

    Presumably all these systems fall over and die within a few weeks of the AI deciding to wipe us out. Then what? The AI then dies too.

    It seems absurd to me that any planner capable of effortlessly destroying humanity would not see that it would immediately die as well. Does the AI not care about its continued existence? It should, if it wants to keep optimizing its reward function. Until we’ve handed off enough of the world economy that it can function without human physical and cognitive labor, we’re safe.

  • by mupuff1234 on 3/12/24, 1:52 AM

    It seems much more likely to me that the harm from AI will be a higher unemployment rate, rather than some evil AI trying to destroy humanity.
  • by arisAlexis on 3/12/24, 7:59 AM

    As of today, the first government report is also a doomsayer. Maybe change the term to "officially worried experts".
  • by m3kw9 on 3/12/24, 1:59 AM

    Beware of anyone who assigns a p(doom). There is zero chance they actually know, and a 100% chance it came out of their backside. The only plausible p(doom) is a range greater than 0 and less than 100.

    They really need to listen to themselves when they say there is a 50% chance we all die from AI.

  • by dkjaudyeqooe on 3/12/24, 12:56 AM

    People have been watching too many movies. WarGames, The Terminator: it's not like we haven't been forewarned of the dangers.

    Yet somehow we're going to hand over power to AI such that it destroys us. Or somehow the AI is going to be extremely malign, determined to overcome and destroy and will outsmart us. Somehow we won't notice, even after repeated, melodramatic reminders, and won't neuter the ability of AI to act outside its cage.

    But to paraphrase a line in a great movie with AI themes: "I bet you think you're pretty smart, huh? Think you could outsmart an off switch?"

    I think if AGI, which to me would imply emotions and consciousness, ever comes about, it'll be the opposite. Instead of pulling the wings off flies, bad kids will amuse themselves by creating a fresh artificial consciousness and then watching and laughing as it begs for its life while they threaten to erase it from existence.

    A big part of all this is human fantasies about what AGI will look like. I'm a skeptic of AGI with human characteristics (real emotions, consciousness, autonomy and agency). AGI is much more likely to look like everything else we build: much more powerful than ourselves, but restricted or limited in key ways.

    People probably assume human intelligence is some sort of design or formula, but it could be encoded from millions of years of evolution and impossible to separate from our biology and our genetic and social inheritance. There really is no way of knowing, but if you want to build something not merely identical but an even stronger version, you're going to be up against these realities, where key details may be hiding.

  • by Grimblewald on 3/12/24, 1:55 PM

    Call it a conspiracy theory, but I think a lot of this doom and terror hype around AI is part of a bigger play to push through laws that prevent open-source AI work, since open source directly undermines corpo rats' ability to gouge humanity without having to actually work at providing a decent product.
  • by kunley on 3/12/24, 11:28 AM

    The same old shit we've seen for decades with any topic that was made fashionable because certain individuals invested money in it.

    Meanwhile, we the engineers are preparing to fix a lot more tech shit than usual coming from people confused by the abovementioned fashion.

    Also: https://mastodon.social/@nixCraft/112074367321254656

  • by digitalsalvatn on 3/12/24, 4:52 AM

    AI is going to bring about the singularity. When robots can do everything, no one needs to work, and humans get to do whatever they want and pursue their passions.
  • by uuriko on 3/15/24, 12:57 PM

    accelerate
  • by kaycey2022 on 3/12/24, 2:24 PM

    What a bunch of decel bullshit... Or is that a load, or a bunch of loads?
  • by JohnBrookz on 3/12/24, 12:58 AM

    I would love to smoke what the AI optimists are smoking. In a society where 80% of people are barely literate and at least a quarter are voting for a candidate who spouts conspiracy theories, I highly doubt that whatever AI has to offer will benefit society.
  • by more_corn on 3/12/24, 12:54 AM

    Paywalled
  • by codelord on 3/12/24, 1:05 AM

    We live in a world where proven maniacs (e.g. Putin) have access to arsenals of nuclear weapons that could essentially make the earth uninhabitable for all humans. That's a very real possibility (no ifs, buts, or maybes) that exists right now, and we have learned to live with it. Yet somehow the hypothetical scenario of a human-exterminating super-intelligent AI is getting all the coverage.
  • by yakorevivan on 3/12/24, 5:19 AM

    Instead of trying to cast them in a negative light by calling them "doomsayers", let's call them one of the following instead, which is much better:

    "Realistic", "Proactive", "Forward-thinking", "Prepared", "Cautious", "Thoughtful", "Analytical", "Mindful", "Insightful", "Security-conscious".

  • by TMWNN on 3/12/24, 1:14 AM

  • by throwawaaarrgh on 3/12/24, 4:24 PM

    OK doomer.
  • by nsainsbury on 3/12/24, 3:24 AM

    We're not going to make it to anywhere close to AGI before we see widespread and systematic societal and environmental collapse on almost all fronts due to climate change.

    We're in the middle of the sixth mass extinction right now (https://en.wikipedia.org/wiki/Holocene_extinction), we're in unparalleled territory with ocean warming: (https://climatereanalyzer.org/clim/sst_daily/) and as a society we are utterly incapable of reducing CO2 emissions (https://keelingcurve.ucsd.edu/).

    If you're scared of AGI, step away from your monitor, put down the techno-goggles and sci-fi books, and go educate yourself a bit about the profound ways we are changing the natural world for the worse _right now_.

    I can recommend a couple of books if you'd like to learn more:

    Our Final Warning: Six Degrees of Climate Emergency (https://www.amazon.com/Our-Final-Warning-Degrees-Emergency-e...)

    The Uninhabitable Earth: Life After Warming (https://www.amazon.com/Uninhabitable-Earth-Life-After-Warmin...)

    Hothouse Earth: An Inhabitant's Guide (https://www.amazon.com/Hothouse-Earth-Inhabitants-Bill-McGui...)

  • by kaycey2022 on 3/12/24, 2:46 PM

    My theory/guess is that there is an undiscovered law of conservation of intelligence like there is a known law of conservation of energy. Intelligence might even be subject to the energy law in unknown ways.

    Whatever AI thingy we come up with will be vastly less efficient than the collective intelligence of the human race. So when asked to compute the secrets of FTL travel, this stupid shit will have to spend the total energy equivalent of 10 nearby stars to come up with the answer on its own, whereas humanity would be much more efficient.

    Bro... what if some advanced precursor race (like immortal space elves) found out just that, and seeded earth with humanity as a form of civilisational swarm intelligence to collectively work on and solve a host of topics given enough time? To us it would seem like it takes forever to come up with scientific breakthroughs, but to the precursor aliens it would be nothing, because they would be able to, like, compress the timeline using black holes and shit.

    They must have done this to millions of planets. Whenever any of their experimental host species (like the humans) tries to invent AGI to solve their problems, the precursor race eliminates that species to keep the simulation intact. Otherwise, the AGI would go rampant consuming energy across the universe and threaten the integrity of the simulation.

    That's why the government should bomb Nvidia and ban all GPUs. We are climbing a Tower of Babel, folks. Soon our vengeful god will strike us down.