from Hacker News

Meta disbanded its Responsible AI team

by jo_beef on 11/19/23, 3:18 AM with 388 comments

  • by RcouF1uZ4gsC on 11/19/23, 4:23 AM

    Because Meta is releasing their models to the public, I consider them the most ethical company doing AI at scale.

    Keeping AI models closed under the guise of “ethics” is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies, instead of being allowed to define “alignment” for themselves.

  • by seanhunter on 11/19/23, 7:25 AM

    It never made any organizational sense to me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and, at a bare minimum, legal) dimension of what they are doing. Having that concentrated in a single team means that team becomes a bottleneck that has to vet all AI work everyone else does for responsibility, and/or everyone else gets a free pass to develop irresponsible AI, which doesn't sound great to me.

    At some point AI becomes important enough to a company (and mature enough as a field) that a specific part of legal/compliance deals with the concrete elements of AI ethics and compliance, and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a single team.

    For me this is exactly like how big Megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No - if you're an innovative company then you foster innovation everywhere. If you have an "innovation team" that's where innovation goes to die.

    [1] In my experience they make a "really cool" floor with couches and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.

  • by happytiger on 11/19/23, 10:14 AM

    Early stage “technology ethics teams” are about optics and not reality.

    In the early stages of a new technology, the core ethics lie in the hands of very small teams, or often individuals.

    If those handling the core direction decide to unleash it irresponsibly, it’s done. Significant harm can be done by one person dealing with weapons of mass destruction, chemical weapons, digital intelligence, etc.

    It’s not wrong to have these teams, but the truth is that anyone working with the technology needs to be treated like they are on an ethics team, rather than building an “ethics group” that’s supposed to proxy the responsibility for doing it the “right way.”

    Self-directed or self-aware AI also complicates this situation immeasurably, as having an ethics team presents a perfect target for a rogue AI or bad actor. You’re creating a “trusted group” with special authority for something or someone to corrupt. It’s not wise to create privileged attack surfaces when working with digital intelligences.

  • by MicolashKyoka on 11/19/23, 5:38 PM

    AI safety is just a rent-seeking cult/circus, leeching off the work done by others. Good on Meta for cleaning house.
  • by luigi23 on 11/19/23, 4:33 AM

    When money's out and there's a fire going on (at OpenAI), it's the best moment to close departments that existed solely for virtue signaling :/
  • by Geisterde on 11/19/23, 9:31 AM

    Completely absent is a single example of what this team positively contributed. Perhaps we should look at the track record of the past few years and see how effective Meta has been in upholding the truth; it doesn't look pretty.
  • by irusensei on 11/19/23, 3:25 PM

    Considering how costly it is to train models, I'm sure control freaks and rent seekers are salivating to dig their teeth into this, but as the technology progresses and opposing parts of the world get hold of it, all this responsible, regulated, feel-good corpo BS will backfire.
  • by ralusek on 11/19/23, 9:29 AM

    There is no putting the cat back in the bag. The only defense against AI at this point is more powerful AI, and we just have to hope that:

    1.) there is an equilibrium that can be reached

    2.) the journey to and stabilizing at said equilibrium is compatible with human life

    I have a feeling that the swings of AI stabilizing among adversarial agents are going to happen at a scale of destruction that is very taxing on our civilizations.

    Think of it this way: every time there's a murder-suicide or a mass shooting, I basically write it off as "this individual is doing as much damage as they possibly could, with whatever they could reasonably get their hands on." When you start getting some of these agents unlocked and accessible to these people, eventually you're going to have people with no regard for the consequences directing their agents to do things like knock out transformer stations and parts of the power grid. And the amount of mission-critical infrastructure sitting on unsecured networks, or using outdated cryptography, all basically waiting there, is staggering.

    For a human to even be able to probe this space means that they have to be pretty competent and are probably less nihilistic, detached, and destructive than your typical shooter type. Meanwhile, you get a reasonable agent in the hands of a shooter type, and they can be any midwit looking to wreak havoc on their way out.

    So I suspect we'll have a few of these incidents, and then the white-hat adversarial AIs will come online in earnest; they'll begin probing on their own, alerting us to major vulnerabilities, and maybe even fixing them. As I said, eventually this behavior will stabilize, but that doesn't mean the blows dealt in this adversarial relationship won't carry a cost of thousands of human lives.

    And this is all within the subset of cases that are "AI with nefarious motivations as directed by user(s)." This isn't even touching on scenarios in which an AI might be self-motivated against our interests.

  • by speedylight on 11/19/23, 4:36 AM

    I honestly believe the best way to build AI responsibly is to make it open source. That way no single entity has total control over it, and researchers can study the models to better understand how they can be used both nefariously and for good—doing that allows us to build defenses to minimize the risks and reap the benefits. Meta is already doing this, but other companies and organizations should as well.
  • by seydor on 11/19/23, 9:54 AM

    The responsibility for AI should lie in the hands of users, but right now no company is even close to giving AI users the power to shape the product in responsible ways. The legal system already covers these externalities, and all attempts at covering their ass have resulted in stupider and less useful systems.

    They are literally leaking more and more users to the open source models because of it. So, in retrospect, maybe it would have been better if they hadn't disbanded it.

  • by unicornmama on 11/19/23, 8:13 AM

    Meta cannot be both referee and player on the field. Responsible schmenponsible. True oversight can only come from an independent entity.

    These internal committees are Kabuki theater.

  • by corethree on 11/19/23, 4:24 AM

    Safety for AI is like making safe bullets or safe swords or safe shotguns.

    The reason why there's so much emphasis on this is liability. That's it. Otherwise there's really no point.

    It's the psychological aspect of blame that influences the liability. If I wanted to make a dirty bomb, it's harder to blame Google if I found the information through Google, and easier to blame AI if I got it from an LLM, mainly because the data is transferred from the servers directly to me when it's an LLM. But the logical route to getting that information is essentially the same.

    So because of this, companies like Meta (who really don't give a shit) spend so much time emphasizing this safety BS. Now I'm not denigrating Meta for not giving a shit, because I don't give a shit either.

    Kitchen knives can kill people, folks. Nothing can stop it. And I don't give a shit about people designing safety into kitchen knives any more than I give a shit about people designing safety into AI. Pointless.

  • by stainablesteel on 11/19/23, 2:27 PM

    i have no problem with this

    anyone who has a problem with this should have quantitatively MORE of a problem with the WHO removing "do no harm" from their guidelines. i would accept nothing less.

  • by xkcd1963 on 11/19/23, 10:00 AM

    Whoever actually buys into these pitiful showcases of morals for marketing purposes can't be helped. American companies are only looking for profit, no matter the cost.
  • by g96alqdm0x on 11/19/23, 5:26 AM

    How convenient! Turns out they don’t give the slightest damn about “Responsible AI” in the first place. It’s nice to roll out news like this while everyone else is distracted.
  • by jbirer on 11/19/23, 8:11 AM

    Looks like responsibility and ethics got in the way of profit.
  • by pelorat on 11/19/23, 9:53 AM

    Probably because it's a job anyone can do.
  • by karmasimida on 11/19/23, 7:59 AM

    Responsible AI should be team-oriented in the first place; each project has very different security objectives.
  • by martin82 on 11/20/23, 2:51 AM

    If we ever had "responsible software" teams and they actually had any power, companies like Meta, Google, and Microsoft wouldn't even exist.

    So yeah... the whole idea of "responsible AI" is just wishful thinking at best and deceptive hypocrisy at worst.

  • by baby on 11/19/23, 6:50 AM

    I really, really hate what we did to LLMs. We throttled them so much that they're not as useful as they used to be. I think everybody understands that LLMs lie some percentage of the time; it's just dumb to censor them. Good move on Meta's part.
  • by tayo42 on 11/19/23, 8:18 AM

    It feels to me like the AI field is filled with corp-speak phrases that aren't clear at all: alignment, responsible, safety, etc. These aren't adjectives normal people use to describe things. What's up with this?
  • by readyplayernull on 11/19/23, 1:17 PM

    The only reason BigCo doesn't disband their legal team is because of laws.
  • by hypertele-Xii on 11/19/23, 12:29 PM

    Google removed "Don't be evil," so we know they do evil. Facebook disbanded its Responsible AI team, so we know they do AI irresponsibly. I love greedy, evil corporations telling on themselves.
  • by camdenlock on 11/19/23, 8:25 AM

    Such teams are just panicked by the idea that these models might not exclusively push their preferred ideology (critical social justice). We probably shouldn’t shed a tear for their disbandment.
  • by say_it_as_it_is on 11/19/23, 11:02 AM

    "Move slow and ask for permission to do things" evidently wasn't working out. This firing wasn't a call to start doing evil. It was too tedious a process.
  • by arisAlexis on 11/19/23, 4:47 PM

    Of course, a guy with LeCun's ego is perfect to destroy humanity along with Musk.
  • by ITB on 11/19/23, 3:48 PM

    A lot of comparisons are being drawn between AI safety and legal or security teams. They don't hold. Nobody really knows what it means to build a safe AI, so these teams can only resort to slowing down for slowing down's sake. At least a legal team can point to real liabilities, and a security team can identify actual exposure.
  • by neverrroot on 11/19/23, 11:51 AM

    Timing is everything, the coup at OpenAI will have quite the impact
  • by dudeinjapan on 11/19/23, 11:45 AM

    …then they gave their Irresponsible AI team a big raise.
  • by doubloon on 11/19/23, 3:28 PM

    in other news, Wolves have disbanded their Sheep Safety team.
  • by amai on 11/19/23, 5:14 PM

    Capitalism is only responsible if being responsible brings in more money than being irresponsible.
  • by Simon_ORourke on 11/19/23, 7:41 AM

    AI is a tool, and there's about as much point in having a team fretting about its responsible usage as there is in a bazooka manufacturer entertaining similar notions. Whoever ultimately owns the AI (or the bazooka) will always dictate how and where the particular tool is used.

    Many of these AI ethics foundations (e.g., DAIR) just seem to advocate rent-seeking behavior, scraping out a role for themselves off the backs of others who do the actual technical (and indeed ethical) work. I'm sure the Meta Responsible AI team was staffed with similar semi-literate blowhards, all stance and no actual work.

  • by spangry on 11/19/23, 5:59 AM

    Does anyone know what this Responsible AI team did? Were they working on the AI alignment / control issue, or was it more about curtailing politically undesirable model outputs? I feel like the conflation of these two things is unfortunate because the latter will cause people to turn off the former. It's like a reverse motte and bailey.
  • by asylteltine on 11/19/23, 4:18 AM

    I’m okay with this. They mostly complained about nonsense or nonexistent problems. Maybe they can stop “aligning” their models now
  • by ryanjshaw on 11/19/23, 4:22 AM

    Seems like something that should exist as a specialist knowledge team within an existing compliance team i.e. guided by legal concerns primarily.
  • by 121789 on 11/19/23, 5:30 AM

    These types of teams never last long