from Hacker News

OpenAI cofounder: “open-sourcing AI is just not wise”

by luigi23 on 3/16/23, 11:13 PM with 29 comments

  • by davesque on 3/17/23, 2:18 AM

    How does someone as smart as Sutskever come to believe we can possibly keep a lid on this knowledge? I can't imagine GPT-4 is doing anything much different than what has already been described in publicly available research. Also, does he think that all the engineers that built GPT-4 will work for OpenAI forever? AI algorithms are not like nuclear bombs that take a lot of effort to manufacture.

    My only explanation for this kind of talk is that Sutskever has allowed his own financial interests to either directly or indirectly affect his thinking.

    I have similar views on the risks of AGI but I can't see how on earth we'll prevent its further development. There are plenty of smart AI researchers who don't work for OpenAI. Oh and remind me again just exactly where OpenAI would be without "Attention is All You Need."

  • by AnEro on 3/17/23, 12:55 AM

    "The only competition that should be allowed is big players, small businesses should always be dependent on at most one or three of the largest tech companies. Also unrelated we could be doing copyright infringement and let's not enable people to check."
  • by armchairhacker on 3/17/23, 1:27 AM

    He's not wrong. Just like AAA game developers aren't releasing free games and $100b studios aren't releasing free movies. Developer tools all being open-source software is the exception not the rule.

    And the reason developer tools are all open-source is because for many of them, if you don't open-source them, some other talented group will just develop and release their own version. There are many individuals - individuals - who in <1 year can make a half-decent IDE or version control or programming language or network utility etc. And yeah it won't be as good as IntelliJ or Git, but give it a few years and hundreds of contributors and it will be - look at Emacs, Git, Linux, Rust, SerenityOS, etc.

    But an individual cannot train an AI that costs $1b+ and requires huge amounts of training data unless they're a billionaire. And until that changes, it's only natural that those who can will keep at least some of the resulting model to themselves. My hope is that:

    - A very large group of developers unite in LAION or possibly a different org. Unfortunately idk how effective this will be as $1b is more than many people's lifetime income.

    - A nation's people vote for their government to develop and release fully open AI. That would be amazing, and I'm sure a government has the resources, but I see an entire population cooperating on this as less likely than a very large but not necessarily majority group getting enough resources themselves.

    - Something like Stable Diffusion, which makes individuals developing powerful AI possible. That's possibly the most likely scenario, but given how much data and compute these AIs need to be "powerful", idk how feasible this is. Even SD needed a smaller corporation to release the weights.

  • by georgehill on 3/16/23, 11:57 PM

    I wonder what would have happened to OpenAI if Google had not released the transformers paper.
  • by Awelton on 3/19/23, 3:14 AM

    They don't give a shit about the dangers of AGI, they just have a lead and don't want to lose it. I can see why they would want that, but if you are going to do it then own it. They didn't mind being an open source nonprofit when it meant they were getting millions of dollars in donations and blowing through money on research, but now that they have a shippable product built on all that money they just happen to decide that they are the only ones worthy of being responsible for all that power? Give me a break.
  • by WheelsAtLarge on 3/16/23, 11:46 PM

    I agree. AI tools should not be open source, but the organization behind them should definitely be a non-profit that's created for the public good and run as such. It can't be a for-profit entity that's focused on making a buck.
  • by roberttod on 3/17/23, 1:51 AM

    Given a few conditions I completely agree with this sentiment.

    If this does indeed become world-changing, leading to AGI, then the best hope we have is that those with all the power are ethical. Giving it to everyone could be chaos. Giving an advantage to other superpowers would be terrifying.

    Not ideal, but in their position I can't think of a better alternative. I am not saying I want anyone to have AGI btw, just that this is going to happen eventually (perhaps not via OpenAI/GPT) and getting there first may be game over for all other parties.

  • by pk-protect-ai on 3/17/23, 9:20 AM

    This position poses more danger to humanity than AI or AGI can pose in the foreseeable future. We are at a paradigm change, and OpenAI is trying to accumulate as much power as possible by doing an unethical thing that goes against their previous motto. At the same time, Microsoft lays off its AI ethics team ... Bing with GPT-4 is already able to look into your SharePoint data all over. I see a very dangerous tendency here.
  • by sacnoradhq on 3/17/23, 3:40 AM

    It's going to be more like a pervasive utility. Once upon a time, there were cassette tapes and Dolby NR. Lots of tiny licensing fees for all that goes into the secret sauce.

    https://en.wikipedia.org/wiki/Dolby_noise-reduction_system

  • by splatzone on 3/17/23, 2:37 AM

    Being completely honest, and with no disrespect meant, I'm not sure I trust American capitalists to handle this technology responsibly any more than I'd trust any random person.
  • by insomagent on 3/17/23, 5:39 AM

    Then let's change your company's name FFS.
  • by redeeman on 3/17/23, 10:46 AM

    yeah, it'd be a real shame if someone were to run an LLM that wasn't gimped by their own special blend of wokeism :)
  • by dack on 3/17/23, 7:37 PM

    I think AGI is going to be incredibly dangerous whether or not they open source their research.