by arvindh-manian on 12/13/24, 7:36 PM with 640 comments
by lsy on 12/13/24, 8:14 PM
The board of OpenAI is supposedly going to "determine the fate of the world", robotics was to be "completely solved" by 2020, and the goal of OpenAI is to "avoid an AGI dictatorship".
Is nobody in these very rich guys' spheres pushing back on their thought process? So far we are multiple years in with much investment and little return, and no obvious large-scale product-market fit, much less a superintelligence.
As a bonus, they lay out the OpenAI business model:
> Our fundraising conversations show that:
> * Ilya and I are able to convince reputable people that AGI can really happen in the next ≤10 years
> * There’s appetite for donations from those people
> * There’s very large appetite for investments from those people
by factorialboy on 12/13/24, 8:01 PM
by AlanYx on 12/13/24, 8:05 PM
"On one call, Elon told us he didn’t care about equity personally but just needed to accumulate $80B for a city on Mars."
by tonygiorgio on 12/13/24, 9:54 PM
by milleramp on 12/13/24, 8:17 PM
by exprofmaddy on 12/13/24, 9:34 PM
by roca on 12/14/24, 1:29 AM
by freedomben on 12/13/24, 8:01 PM
Isn't that exactly what he's doing with x.ai? Grok and all that? IIRC Elon has the biggest GPU compute cluster in the world right now, and is currently training the next major version of his "competing in the marketplace" product. It will be interesting to see how this blog post ages.
I'm not dismissing the rest of the post (and indeed I think they make a good case on Elon's hypocrisy!), but the above seems at best like a pretty massive blind spot which (if I were invested in OpenAI) would cause me some concern.
by meagher on 12/13/24, 8:26 PM
by Alifatisk on 12/14/24, 11:26 AM
by linotype on 12/13/24, 10:02 PM
by agnosticmantis on 12/13/24, 10:59 PM
In 2017 he wrote:
"Within the next three years, robotics should be completely solved, AI should solve a long-standing unproven theorem, programming competitions should be won consistently by AIs, and there should be convincing chatbots (though no one should pass the Turing test)."
"We will completely solve the problem of adversarial examples by the end of August."
Very clever to take a page from Musk's own playbook of confidently promising self-driving by next year for a decade.
by dogboat on 12/13/24, 9:03 PM
by 23B1 on 12/13/24, 9:26 PM
by thih9 on 12/14/24, 6:42 AM
Why the rejection in 2017 when in 2024 the company moved towards a similar goal?
by swyx on 12/13/24, 11:35 PM
> The HER algorithm (https://www.youtube.com/watch?v=Dz_HuzgMzxo) can learn to solve many low-dimensional robotics tasks that were previously unsolvable very rapidly. It is non-obvious, simple, and effective.
> In 6 months, we will accomplish at least one of: single-handed Rubik’s cube, pen spinning (https://www.youtube.com/watch?v=dDavyRnEPrI), Chinese balls spinning (https://www.youtube.com/watch?v=M9N1duIl4Fc) using the HER algorithm
taken down now. anyone have it?
> Lock down an overwhelming hardware advantage. The 4-chip card that <redacted> says he can build in 2 years is effectively TPU 3.0 and (given enough quantity) would allow us to be on an almost equal footing with Google on compute.
who is this? it isn't cerebras. sambanova?
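For context on the HER algorithm quoted above: HER (Hindsight Experience Replay) relabels stored transitions with goals the agent actually reached, so sparse-reward, goal-conditioned tasks still produce learning signal. Below is a minimal sketch of that relabeling idea; the transition layout and `reward_fn` are illustrative assumptions, not anything from OpenAI's code.

```python
# Minimal sketch of the goal-relabeling trick behind HER (Hindsight Experience
# Replay), referenced in the quote above. The transition layout and `reward_fn`
# are illustrative assumptions, not OpenAI's actual implementation.
import random

def her_relabel(episode, reward_fn, k=4):
    """episode: list of (state, action, goal, next_state) tuples.
    Returns the original transitions plus k relabeled copies each, where the
    goal is replaced by a state actually reached later in the episode, so even
    a "failed" episode yields useful reward signal."""
    out = []
    for t, (state, action, goal, next_state) in enumerate(episode):
        # Keep the original transition, scored against its original goal.
        out.append((state, action, goal, reward_fn(next_state, goal), next_state))
        # "Future" strategy: sample achieved states from the rest of the episode
        # and pretend they were the goal all along.
        future = episode[t:]
        for _ in range(k):
            _, _, _, achieved = random.choice(future)
            out.append((state, action, achieved,
                        reward_fn(next_state, achieved), next_state))
    return out
```

With a sparse reward such as `reward_fn = lambda s, g: 0.0 if s == g else -1.0`, the relabeled copies are what make low-dimensional goal-reaching tasks learnable at all, which is roughly the claim being made in the email.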
by ronsor on 12/13/24, 7:49 PM
by vwkd on 12/13/24, 8:57 PM
Otherwise, why would they engage in a publicity battle to sway public sentiment precisely now, if their legal case wasn't weak?
by tuyguntn on 12/13/24, 7:58 PM
by hermannj314 on 12/13/24, 8:03 PM
At some point, OpenAI became who they said they weren't. You can't even get ChatGPT to do anything fun anymore, as the lawyers have hobbled its creative expression.
And now they want to fight with Elon over what they supposedly used to believe about money back in 2017.
Who actually deserves our support going forward?
by InfiniteVortex on 12/13/24, 8:01 PM
by WhatsName on 12/13/24, 7:59 PM
For me this rhymes with recent history...
by OrangeMusic on 12/20/24, 5:20 PM
by ks2048 on 12/14/24, 12:59 AM
I think it's somehow related to AI companies viewing the web as valuable data to be stolen if you don't have it and as protected property if you do have it.
by kelseyfrog on 12/13/24, 8:44 PM
by Uptrenda on 12/13/24, 11:55 PM
by agnosticmantis on 12/13/24, 9:48 PM
"Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn’t sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach."
Genuine question I've always had is, are these charlatans conscious of how full of shit they are, or are they really high on their own stuff?
Also, it grinds my gears when they pull probabilities out of their asses:
"The probability of DeepMind creating a deep mind increases every year. Maybe it doesn’t get past 50% in 2 to 3 years, but it likely moves past 10%. That doesn’t sound crazy to me, given their resources."
by rendang on 12/14/24, 5:58 AM
The biggest miss from Elon seems to be that he overestimated Google's lead. Did Google really drop the ball on staying in 1st place, and if so, why?
by ouraf on 12/15/24, 1:24 PM
by Biologist123 on 12/13/24, 9:50 PM
by leonmusk on 12/13/24, 7:50 PM
by VectorLock on 12/13/24, 10:46 PM
by HeavyStorm on 12/15/24, 3:06 AM
by Havoc on 12/13/24, 8:00 PM
Musk appears to be objecting to a structure that puts profit-driven players ("YC stock along with a salary") directly in the nonprofit... and is suggesting moving it to a parallel structure.
That seems like a perfectly valid and, frankly, ethically and governance-wise sound point to raise. The fact that he mentions incentives specifically suggests to me he was going down the line of reasoning outlined above.
Framing that as "Elon Musk wanted an OpenAI for-profit"... idk... maybe I'm missing something here, but "dishonest framing" is definitely the phrase that comes to mind.
by bloomingkales on 12/14/24, 5:58 AM
by WhereIsTheTruth on 12/14/24, 6:39 AM
by dgrin91 on 12/13/24, 9:27 PM
> I have considered the ICO approach and will not support it.
...
> I respect your decision on the ICO idea
Pretty sure they aren't talking about Initial Coin Offerings. Any clue what they mean?
by agnosticmantis on 12/14/24, 12:54 AM
"Our biggest tool is the moral high ground. To retain this, we must: Try our best to remain a non-profit. AI is going to shake up the fabric of society, and our fiduciary duty should be to humanity."
Well, reading this in 2024 with (so-called) "Open"AI going for-profit, it aged like milk.
Also a few lines later, he writes:
"We don’t encourage paper writing, and so paper acceptance isn’t a measure we optimize."
So much for openness and moral high ground!
This whole thread is a master class in dishonesty, hypocrisy, and narcissistic power plays for any wannabe villain.
It's amusing to see they keep their masks on even in internal communications, though. I'd have thought the messiah complex and benevolence parade was only for the public, but I was wrong.
by iamnotsure on 12/14/24, 2:21 AM
by nothrowaways on 12/13/24, 11:48 PM
by m463 on 12/13/24, 7:52 PM
by WiSaGaN on 12/14/24, 12:02 AM
by tuyguntn on 12/13/24, 8:10 PM
> Put increasing effort into the safety/control problem
... and we are working to get defense contracts, which are used to kill human beings in other countries, or to fund organizations that kill humans
by dtquad on 12/13/24, 9:26 PM
This is why OpenAI and Sam Altman are understandably concerned.
by moralestapia on 12/13/24, 7:53 PM
The part that raises eyebrows is how, from a legal standpoint, a non-profit suddenly becomes a for-profit.
by taosx on 12/14/24, 6:10 AM
I'm surprised by OpenAI's resilience through all this drama. It's impressive to see how far they've come from 2017 to 2024. This journey has given me a whole new appreciation for startups and the individuals behind them. Now I better understand why my own past mediocre attempts didn't succeed.
Thank you for sharing this information!
by pikseladam on 12/16/24, 5:42 AM
by jcrash on 12/13/24, 8:10 PM
one of the few youtube links on this page that is still up
by unglaublich on 12/13/24, 7:56 PM
by akira2501 on 12/13/24, 7:56 PM
by MaxGripe on 12/13/24, 11:24 PM
by catigula on 12/13/24, 8:15 PM
by mediumsmart on 12/13/24, 7:51 PM
by andrewinardeer on 12/13/24, 10:33 PM
by boringg on 12/13/24, 9:29 PM
Altman being the regulatory-capture man muscling out competitors by pushing the White House and Washington to move on safety, the whole board debacle, and the conversion from non-profit to for-profit.
I don't think anyone sees Musk's efforts as altruistic.
by averageRoyalty on 12/13/24, 9:34 PM
I don't know when your autumn ("fall") or summer is in relation to September. Don't mix keys here: use either months or quarters, not a mix of terms, some of which are relative to a specific hemisphere.