by jo_beef on 11/19/23, 3:18 AM with 388 comments
by RcouF1uZ4gsC on 11/19/23, 4:23 AM
Keeping AI models closed under the guise of “ethics” is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies instead of being allowed to define “alignment” for themselves.
by seanhunter on 11/19/23, 7:25 AM
At some point AI becomes important enough to a company (and mature enough as a field) that a specific part of legal/compliance in big companies deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a single team.
For me this is exactly like how big Megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No - if you're an innovative company then you foster innovation everywhere. If you have an "innovation team" that's where innovation goes to die.
[1] In my experience they make a "really cool" floor with couches and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.
by happytiger on 11/19/23, 10:14 AM
In the early stages of a new technology, the core ethical decisions lie in the hands of very small teams, or often individuals.
If those handling the core direction decide to unleash it irresponsibly, it’s done. Significant harm can be done by one person dealing with weapons of mass destruction, chemical weapons, digital intelligence, etc.
It’s not wrong to have these teams, but the truth is that anyone working with the technology needs to be treated like they are on an ethics team, rather than building an “ethical group” that’s supposed to proxy the responsibility for doing it the “right way.”
Self-directed or self-aware AI also complicates this situation immeasurably, as having an ethics team presents a perfect target for a rogue AI or bad actor. You’re creating a “trusted group” with special authority for something/someone to corrupt. Not wise to create privileged attack surfaces when working with digital intelligences.
by MicolashKyoka on 11/19/23, 5:38 PM
by luigi23 on 11/19/23, 4:33 AM
by Geisterde on 11/19/23, 9:31 AM
by irusensei on 11/19/23, 3:25 PM
by ralusek on 11/19/23, 9:29 AM
1.) there is an equilibrium that can be reached
2.) the journey to and stabilizing at said equilibrium is compatible with human life
I have a feeling that the swings of AI stabilizing among adversarial agents are going to happen at a scale of destruction that is very taxing on our civilization.
Think of it this way: every time there's a murder-suicide or a mass shooting, I basically write it off as "this individual is doing as much damage as they possibly could, with whatever they could reasonably get their hands on." Once some of these agents are unlocked and accessible to such people, you're eventually going to have people with no regard for the consequences asking their agents to do things like knock out transformer stations and parts of the power grid. And the amount of mission-critical infrastructure sitting on unsecured networks, using outdated cryptography, etc., all basically waiting, is staggering.
For a human to even probe this space, they have to be pretty competent, which probably also makes them less nihilistic, detached, and destructive than your typical shooter type. Put a capable agent in the hands of a shooter type, though, and any midwit looking to wreak havoc on their way out can do it.
So I suspect we'll have a few of these incidents, and then the white-hat adversarial AIs will come online in earnest: they'll begin probing on their own, alerting us to major vulnerabilities, and maybe even fixing them. As I said, this behavior will eventually stabilize, but that doesn't mean the blows dealt in this adversarial relationship won't cost thousands of human lives.
And this is all within the subset of cases that amount to "AI with nefarious motivations as directed by user(s)." It isn't even touching on scenarios in which an AI might be self-motivated against our interests.
by speedylight on 11/19/23, 4:36 AM
by seydor on 11/19/23, 9:54 AM
They are literally leaking more and more users to the open-source models because of it. So, in retrospect, maybe it would have been better if they hadn't disbanded it.
by unicornmama on 11/19/23, 8:13 AM
These internal committees are Kabuki theater.
by corethree on 11/19/23, 4:24 AM
The reason why there's so much emphasis on this is liability. That's it. Otherwise there's really no point.
It's the psychological aspect of blame that influences the liability. If I wanted to make a dirty bomb, it's harder to blame Google for it if I found the results through Google search, and easier to blame the AI if I got them from an LLM, mainly because with an LLM the data is transferred from the company's servers directly to me. But the logical route to getting that information is essentially the same.
So because of this, companies like Meta (who really don't give a shit) spend so much time emphasizing this safety bs. Now, I'm not denigrating Meta for not giving a shit, because I don't give a shit either.
Kitchen knives can kill people, folks. Nothing can stop it. And I don't give a shit about people designing safety into kitchen knives any more than I give a shit about people designing safety into AI. Pointless.
by stainablesteel on 11/19/23, 2:27 PM
Anyone who has a problem with this should have quantitatively MORE of a problem with the WHO removing "do no harm" from their guidelines. I would accept nothing less.
by xkcd1963 on 11/19/23, 10:00 AM
by g96alqdm0x on 11/19/23, 5:26 AM
by jbirer on 11/19/23, 8:11 AM
by pelorat on 11/19/23, 9:53 AM
by karmasimida on 11/19/23, 7:59 AM
by martin82 on 11/20/23, 2:51 AM
So yeah... the whole idea of "responsible AI" is just wishful thinking at best and deceptive hypocrisy at worst.
by baby on 11/19/23, 6:50 AM
by tayo42 on 11/19/23, 8:18 AM
by readyplayernull on 11/19/23, 1:17 PM
by hypertele-Xii on 11/19/23, 12:29 PM
by camdenlock on 11/19/23, 8:25 AM
by say_it_as_it_is on 11/19/23, 11:02 AM
by arisAlexis on 11/19/23, 4:47 PM
by ITB on 11/19/23, 3:48 PM
by neverrroot on 11/19/23, 11:51 AM
by dudeinjapan on 11/19/23, 11:45 AM
by doubloon on 11/19/23, 3:28 PM
by amai on 11/19/23, 5:14 PM
by Simon_ORourke on 11/19/23, 7:41 AM
Many of these AI ethics foundations (e.g., DAIR) just seem to engage in rent-seeking behavior, carving out a role for themselves off the backs of others who do the actual technical (and indeed ethical) work. I'm sure the Meta Responsible AI team was staffed with similar semi-literate blowhards, all stance and no actual work.
by spangry on 11/19/23, 5:59 AM
by asylteltine on 11/19/23, 4:18 AM
by ryanjshaw on 11/19/23, 4:22 AM
by 121789 on 11/19/23, 5:30 AM