by nickrubin on 11/18/23, 12:10 AM with 662 comments
by johnwheeler on 11/18/23, 12:31 AM
https://twitter.com/karaswisher/status/1725682088639119857
nothing to do with dishonesty. That’s just the official reason.
———-
I haven’t heard anyone commenting on this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.
Also interesting that Sutskever tweeted a month and a half ago
https://twitter.com/ilyasut/status/1707752576077176907
The press release about candid talk with the board is probably just cover for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.
Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement, Sutskever reached his tipping point, and he decided to use that leverage.
I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.
PS most engineers, like myself, are replaceable. Ilya is probably not.
by momofuku on 11/18/23, 4:47 AM
Update from Greg
by jumploops on 11/18/23, 1:09 AM
Sam claims LLMs aren't sufficient for AGI (rightfully so).
Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.
Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.
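For reference, the core operation of the transformer blocks being discussed is scaled dot-product self-attention. A minimal single-head sketch (illustrative only; the dimensions and random weights here are assumptions for the example, not any real model):

```python
# Minimal single-head self-attention, the building block of the
# transformer architectures discussed above (illustrative sketch only).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot-product
    return softmax(scores) @ V               # (seq_len, d_head)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The "modification for efficiency" Ilya alludes to would change details of this computation (e.g. the cost of the seq_len × seq_len score matrix), not the overall shape of the architecture.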
In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.
by robswc on 11/18/23, 1:03 AM
So, if there are six board members and they're looking to "take down" two... those two can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?
Do the four members have to organize and communicate "in secret"? Is there any reason three members can't hold a vote to oust one, making it 3/5 to reach a majority, and then from there just start voting _everyone_ out? Probably stupid questions, but I'm curious enough to ask, lol.
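The arithmetic behind this question can be sketched directly: if the member being voted on recuses, the voter pool shrinks, and the majority threshold shrinks with it. A minimal sketch, assuming a hypothetical six-member board and a four-member bloc (all names made up; real board procedures depend on the bylaws):

```python
# Sketch of the sequential-ouster arithmetic the question asks about.
# Assumes a member being voted on recuses and does not vote.

def majority(n: int) -> int:
    """Smallest number of votes that is a strict majority of n voters."""
    return n // 2 + 1

board = ["A", "B", "C", "D", "E", "F"]  # hypothetical 6-member board
bloc = {"A", "B", "C", "D"}             # 4 members acting together

ousted = []
for target in ["E", "F"]:
    voters = [m for m in board if m != target]  # target recuses
    votes_for = sum(1 for m in voters if m in bloc)
    if votes_for >= majority(len(voters)):
        board.remove(target)
        ousted.append(target)

print(board)   # ['A', 'B', 'C', 'D']
print(ousted)  # ['E', 'F']
```

With the target recused, the first vote needs 3 of 5 and the second needs 3 of 4, so a committed bloc of four clears both. Whether recusal is actually required is a bylaws question, not pure arithmetic.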
by DavidSJ on 11/18/23, 12:15 AM
by stolsvik on 11/18/23, 1:12 AM
by convexstrictly on 11/18/23, 3:23 AM
https://time.com/collection/time100-ai/6309033/greg-brockman...
by runjake on 11/18/23, 12:40 AM
Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868 . The only other thing not considered is that Microsoft really enjoys having its brand on things.
by ldargin on 11/18/23, 12:30 AM
by Mentlo on 11/18/23, 5:17 AM
If indeed a similar disagreement happened at OpenAI, but this time the Hinton figure (Ilya) came out on top, it’s a reason to celebrate.
by satvikpendem on 11/18/23, 7:07 AM
by singluere on 11/18/23, 6:02 AM
by maxdoop on 11/18/23, 12:20 AM
by Geee on 11/18/23, 12:51 AM
They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.
They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.
Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.
by lazzlazzlazz on 11/18/23, 4:28 AM
by bangalore on 11/18/23, 12:22 AM
by mrangle on 11/18/23, 2:59 PM
But right now, the board undoubtedly feels the most pressure in the realm of safety. This is where the political and big-money financial (Microsoft) support will be.
If all true, Altman's departure was likely inevitable as well as fortunate for his future.
by speedylight on 11/18/23, 1:42 AM
by babl-yc on 11/18/23, 12:16 AM
by crop_rotation on 11/18/23, 12:20 AM
by _ink_ on 11/18/23, 1:40 AM
by georgehill on 11/18/23, 12:18 AM
by modernpink on 11/18/23, 12:20 AM
by suggala on 11/19/23, 4:17 PM
And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors whose funding made it all possible, who donated with humanity's goal in mind?
I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.
by andrewstuart on 11/18/23, 12:40 AM
Or Altman will start a competitor.
by kats on 11/18/23, 12:42 AM
by rrsp on 11/18/23, 12:48 AM
by heurist on 11/18/23, 12:16 AM
by fredgrott on 11/18/23, 1:35 PM
From my brief dealings with SA at Loopt in 2005, SA just does not have a dishonest bone in his body. (I got a brief look at the Loopt pitch deck due to interviewing for a mobile dev position at Loopt just after Sprint invested.)
If you want an angel-invest play, find out about the new VC fund Sam is setting up for hard research.
by lexandstuff on 11/18/23, 12:29 AM
by ispyi on 11/18/23, 4:51 PM
by Zaheer on 11/18/23, 12:24 AM
by acyou on 11/18/23, 2:52 AM
by Kye on 11/18/23, 4:07 AM
by jumploops on 11/18/23, 12:16 AM
The next AI winter may have just begun...
by thekoma on 11/18/23, 10:32 AM
by duxup on 11/18/23, 1:01 AM
by SXX on 11/18/23, 1:58 AM
by mcenedella on 11/18/23, 12:48 AM
by RivieraKid on 11/18/23, 12:25 AM
by blibble on 11/18/23, 12:15 AM
by m101 on 11/18/23, 12:54 AM
https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
by bubmiw on 11/18/23, 1:01 AM
by smlacy on 11/18/23, 1:36 AM
by robswc on 11/18/23, 12:13 AM
by tmsh on 11/18/23, 12:22 AM
by wly_cdgr on 11/18/23, 2:50 AM
by grpt on 11/18/23, 12:19 AM
by otikik on 11/18/23, 12:50 AM
Which news is that?
by thornewolf on 11/18/23, 12:16 AM
by detolly on 11/18/23, 12:15 AM
by friendlynokill on 11/18/23, 12:22 AM
by minimaxir on 11/18/23, 12:30 AM
by next_xibalba on 11/18/23, 1:44 AM
I am genuinely flabbergasted as to how she ended up on the board. How does this happen?
I can't even find anything about fellow board member Tasha McCauley...
by og_kalu on 11/18/23, 12:20 AM
Really wonder what this is all about.
Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct leaks he's made about OpenAI over the past few months. Suffice to say, he's somehow in the know.
by layer8 on 11/18/23, 12:33 AM
by mexicanandre on 11/18/23, 1:04 AM
Both seem horribly rushed, and there's no autocomplete?
by intellectronica on 11/18/23, 1:36 AM
by zzzeek on 11/18/23, 12:31 AM
by Waterluvian on 11/18/23, 12:19 AM
Is it some cute attempt at saying “an AI didn’t write this”?
by gumballindie on 11/18/23, 12:18 AM
The game is over, boys. The only question is how to make these types of companies pay for the crimes committed.
by yieldcrv on 11/18/23, 12:17 AM
I’m never going back to Noe Valley for less than $500,000/yr and a netjets membership
by markus_zhang on 11/18/23, 12:55 AM
Like, who is Mira Murati? We only know that she came from Albania (one of the poorest countries) and somehow got into some pretty good private schools, and then into pretty good companies. Who are her parents? What kind of connections does she have?
by woeirua on 11/18/23, 12:54 AM
by HissingMachine on 11/18/23, 12:29 AM