from Hacker News

AI Safety and the Age of Dislightenment

by wskinner on 7/11/23, 2:28 PM with 214 comments

  • by waihtis on 7/11/23, 3:27 PM

    Whether or not the existential risk around AI is real, governments, corporations, NGOs, and others will twist it to their own purposes to build censorship, entry barriers, and other unpleasantries. We should be mindful of that.
  • by waffletower on 7/11/23, 4:22 PM

    "Existential risk" truly is a misnomer at this time. Please use the better fitting: "existential fear". AI "risk" could instead be applied to global information infrastructure, a collapse of the internet by rogue AI bots etc. There are too many technological challenges for any AI to overcome in robotics, mobile energy, material and bio printing/fabrication, and supply chain dominance for an imminent existential risk to humanity to be a credible near-term threat. LLM performance is a gift of data availability of the internet. There aren't equivalent datasets available for the challenges articulated above. How likely can they materialize, particularly by machines that do not yet have a tactile interface and understanding of the physical universe? Perhaps a malevolent machine sentience is possible at some time in the future-- but there are many undeveloped technologies required to pose a threat to human existence.
  • by FrustratedMonky on 7/11/23, 4:04 PM

    Netflix documentaries aren't always the most 'academic' or 'robust'.

    But the one that was just released, about AI for autonomous drones, is pretty shocking.

    The military side is farther along than I thought. And whatever appears in a Netflix documentary is already old; it isn't the latest work, which is still secret.

    In any case, here is how it relates to this article:

    We are in a race with other nation states, so all of this discussion about 'curtailing' or limiting AI is just grandstanding, a smoke screen.

    Nothing will slow down development, because other countries are also developing AI and everyone firmly believes they have to be first or they will be wiped out. Hence no one will slow down.

  • by valine on 7/11/23, 3:28 PM

    The author seems to be entirely unaware of the current crop of open source / public language models like LLaMA and Falcon. People are already engaging in the behavior that, according to the article, might present an existential threat to humanity.

    The existential threat argument is a ridiculous notion to begin with. Pretending open source LLMs don't already exist makes it even sillier.

  • by 300bps on 7/11/23, 3:20 PM

    The next big trend in AI is its death by a thousand cuts in the form of copyright infringement lawsuits. Example:

    https://www.theverge.com/2023/7/9/23788741/sarah-silverman-o...

    Imagine companies thinking they can scrape all the world's information for free and then package it up and sell it.

  • by phillipcarter on 7/11/23, 4:27 PM

    IMO this paragraph in the summary is the most important one:

    > There are interventions we can make now, including the regulation of “high-risk applications” proposed in the EU AI Act. By regulating applications we focus on real harms and can make those most responsible directly liable. Another useful approach in the AI Act is to regulate disclosure, to ensure that those using models have the information they need to use them appropriately.

  • by guy98238710 on 7/11/23, 4:18 PM

    > Artificial Intelligence is moving fast

    We wish. AI is hardware-limited, and hardware is not moving fast. We are very, very far from matching the raw compute power of the human brain, and robots are even more limited compared to the human body.

  • by waffletower on 7/11/23, 4:25 PM

    "Historically, large power differentials have led to violence and subservience of whole societies." I agree, the calls for regulation by large corporate entities who quite transparently wish to control AI development for themselves, is a much greater near-term "existential threat" than AI itself.
  • by guy98238710 on 7/11/23, 5:14 PM

    I don't get this scare about AI. Imagine you are a genius with an IQ of 200. You have studied hard your entire life and are now an expert in several fields. One day you decide to conquer the world. Will you succeed or not?
  • by jononomo on 7/11/23, 3:24 PM

    It is wild how seriously people are taking the development of these LLMs. There is talk of them leading to the extinction of life, which neglects to consider that we can't even build self-driving cars, let alone a robot that clears the table after dinner.

    Also, many people seem to be mistaking "artificial intelligence" for "actual intelligence".

    You know that these LLMs are not actually intelligent, right? Right???

  • by arisAlexis on 7/11/23, 3:43 PM

    "The reason this distinction is critical is because these models are, in fact, nothing but mathematical functions. They take as input a bunch of numbers, and calculate and return a different bunch of numbers. They don’t do anything themselves — they can only calculate numbers."

    This is entirely false; it sounds like Andersen, who seems incapable of understanding that agency and goals will be a thing.
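
    A minimal sketch of the disagreement (hypothetical code, not from the article or the comment): the model call itself really is a pure function from numbers to numbers, but a thin loop that executes whatever those numbers decode to is no longer "only calculating":

      from typing import Callable

      # The article's "pure function" view: an LLM maps token ids to
      # next-token scores. This stand-in has no side effects at all.
      def model(token_ids: list[int]) -> list[float]:
          vocab_size = 4
          return [float((sum(token_ids) + i) % 7) for i in range(vocab_size)]

      # The commenter's rebuttal: agency lives in the scaffolding. A loop
      # that executes whatever the model's output decodes to has effects.
      # These actions are placeholders for illustration only.
      ACTIONS: dict[int, Callable[[], None]] = {
          0: lambda: print("no-op"),
          1: lambda: print("draft an email"),    # hypothetical side effect
          2: lambda: print("call an API"),       # hypothetical side effect
          3: lambda: print("spawn a sub-task"),  # hypothetical side effect
      }

      def agent_step(history: list[int]) -> list[int]:
          scores = model(history)                     # pure calculation
          action = max(range(len(scores)), key=scores.__getitem__)
          ACTIONS[action]()                           # side effect: the "agent" acts
          return history + [action]

      history = [1, 2, 3]
      for _ in range(3):
          history = agent_step(history)

    The open question in the thread is whether such scaffolding amounts to "agency and goals", not whether the math inside the model is pure.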

  • by cubefox on 7/11/23, 3:59 PM

    Here is a different article arguing for the same thesis, i.e. that AI regulation increases AI risk, but IMHO with better arguments:

    https://www.lesswrong.com/posts/6untaSPpsocmkS7Z3/ways-i-exp...

  • by tyre on 7/11/23, 3:38 PM

    I'm empathetic to the desire for innovative freedom and democratization of technology, but the benefit has to be balanced with the scale of impact and the possibility of mitigating/reparative response.

    The capabilities of current-generation AI (LLMs plus audio/image/video generative models) have already advanced far enough that wide-scale distribution is extremely dangerous.

    Social media allowed for broadcasting from 1:many and 1:1 with troll armies, but LLMs are on a whole other planet of misinformation. A scalable natural language interface can target people at the one-to-one conversation level and generate misinformation ad hoc across text, voice, images, and video.

    The trouble with this approach is that harm will rack up far faster than liability can catch up, with irreversible and sometimes immeasurable long-term effects. We have yet to contend with the downstream effects of social media and engagement hacking, for example, and suing Facebook out of existence wouldn't put a dent in them.

    Current-generation AI has enough capability to swing the 2024 US presidential election. It can be used today for ransom pleas trained on your children's Instagram posts. It doesn't seem far-fetched that AI could start a war. Not because it's SkyNet, but because we've put together the tools to influence already-powerful-enough human forces into civilization-level catastrophe.

  • by Tenoke on 7/11/23, 5:31 PM

    > As we'll see, if AI turns out to be powerful enough to be a catastrophic threat, the proposal may not actually help. In fact it could make things much worse, by creating a power imbalance...

    Power imbalance is not 'much worse' than extinction.

    It's very clear how people start with the conclusion - 'openness and progress have been good and will keep being the best choice' - and then just contort their thinking and arguments to match that conclusion without really critically examining the counterarguments.

  • by earthboundkid on 7/11/23, 3:56 PM

    "We made a new kind of hammer. We're worried that bad people might use the hammer to build a house, but we're even more worried that the hammer might build a hammer factory and bash all our brains in."

    Okay, GLHF.

  • by zeryx on 7/11/23, 3:17 PM

    Well written, but it sidesteps the X-risk and even the employment/capitalism failures looming on the horizon.

    Two years ago I thought we were decades away from general-purpose AI, and this is coming from a guy who implemented transformer models on day 5. My time estimates have been proven very wrong.

    I'm equally worried about the value of white-collar labour dropping to near zero in my lifetime, and about the ultimate centralization of power. The movie Elysium seems less and less like science fiction every day.

    I am happy politicians and think tanks are taking this seriously; you should be too.

  • by stOneskull on 7/11/23, 4:19 PM

    Elon Musk said he gave 50 million dollars to get OpenAI started, wanting it to be non-profit and open-source, and they kinda betrayed him. I wonder if he would consider doing it again but making sure the organization stayed open-source and non-profit.
  • by arisAlexis on 7/11/23, 3:45 PM

    "What does it look like to embrace the belief in progress and the rationality of all humans when we respond to the threat of AI mis-use?"

    This is even more utopian than communism.

    "There will still be Bad Guys looking to use them to hurt others or unjustly enrich themselves. But most people are not Bad Guys"

    This fails to understand that even one Bad Guy with a sufficiently advanced AGI could kill everyone.