by redwoolf on 10/5/24, 12:17 PM with 576 comments
by lolinder on 10/5/24, 2:06 PM
* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.
* They dissolved the safety team.
* They switched to for profit and are poised to give Altman equity.
* All while hyping AGI more than ever.
All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.
None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.
by simpaticoder on 10/5/24, 2:01 PM
How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.
It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the components of a solution is quite different from composing a solution. LLMs exist in the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.
by austinkhale on 10/5/24, 1:51 PM
The arguments are essentially:
1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.
2. Sam _only_ has a record as a deal maker, not a physicist.
3. AI can sometimes do bad things & utilizes a lot of energy.
I normally really enjoy the Atlantic since their writers at least try to include context & nuance. This piece does neither.
by deepsquirrelnet on 10/5/24, 1:38 PM
I think the job of a CEO is not to tell you the truth, and more often than not the truth is the opposite of what they say.
What if GPT-5 is vaporware, and there's no leap equivalent to the 3-to-4 jump to be realized with current deep learning architectures? What is OpenAI worth then?
by goles on 10/5/24, 1:19 PM
by thruway516 on 10/5/24, 2:58 PM
Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too well at the moment. A talented 'tech whisperer', maybe?
by rubyfan on 10/5/24, 2:14 PM
The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.
by thelastgallon on 10/5/24, 1:48 PM
Reality: AI needs unheard-of amounts of energy. This will make climate change significantly worse.
by hdivider on 10/5/24, 5:31 PM
Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.
Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.
Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and ends-justify-the-means groupthink with regard to those who worship the technoking.
[1]: https://www.llnl.gov/article/49911/high-performance-computin...
by vasilipupkin on 10/5/24, 2:07 PM
now you can use AI to easily write the type of articles he produces and he's pissed.
by bambax on 10/5/24, 2:51 PM
It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.
by nabla9 on 10/5/24, 3:32 PM
Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.
OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.
by jacknews on 10/5/24, 2:38 PM
The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.
If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.
Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.
by razodactyl on 10/5/24, 7:10 PM
I still "feel the AGI". I think Ben Goertzel's recent talk on ML Street Talk was quite grounded; too much hype clouds judgement.
In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?
Covid was a great eye-opener: we dream big, but in reality people jump over each other for... toilet paper... gotta love that Gaussian curve of IQ, right?
by mppm on 10/5/24, 2:58 PM
So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?
by KaoruAoiShiho on 10/5/24, 2:15 PM
by throwintothesea on 10/5/24, 3:23 PM
by theptip on 10/5/24, 3:30 PM
Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.
If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.
The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.
by melenaboija on 10/5/24, 1:58 PM
Not saying it is a bubble but something seems imbalanced here.
by _hyn3 on 10/5/24, 8:18 PM
Hilarious.
by make3 on 10/6/24, 11:52 PM
The progress that we've seen in the past two years has been completely insane compared to literally any other field. LLMs complete absolutely wild reasoning tasks, including math proofs at the level of a "mediocre grad student" (which is super impressive). For better or worse, image generation and now video generation are indistinguishable from the real thing a lot of the time.
I think that crazy business types and the media overhyped the fuck out of AI, so high that even with such strong progress, it's still not enough.
by AndrewKemendo on 10/5/24, 3:06 PM
AGI cannot exist in a box that you can control. We figured that out 20 years ago.
Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.
by photochemsyn on 10/5/24, 2:01 PM
This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.
The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.
by breck on 10/5/24, 5:19 PM
During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)
This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.
Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.
by angarg12 on 10/5/24, 7:07 PM
But that's not how the market works.
by twodave on 10/5/24, 4:32 PM
I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).
by 1vuio0pswjnm7 on 10/5/24, 9:03 PM
Old tactic.
The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such language.
Gates proceeded to espouse "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.
Will the tactic ever stop working? Who knows.
Focus on the future that no one can predict, not the present that anyone can describe.
by ein0p on 10/5/24, 6:52 PM
by bhouston on 10/5/24, 3:05 PM
The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.
But luckily this doesn’t affect most of us, rather it will only possibly harm his investors if it doesn’t work out.
If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.
by tim333 on 10/5/24, 10:49 PM
Is kind of a boring way of looking at things. I mean we have fairly good chatbots and image generators now but it's where the future is going that's the interesting bit.
Lumping AI in with dot coms and crypto seems a bit silly. It's a different category of thing.
(By the way Sam being shifty or not techy or not seems kind of incidental to it all.)
by rpgbr on 10/5/24, 5:07 PM
by luxuryballs on 10/5/24, 5:41 PM
They would at least be more believable if they blasted claims that a certain video must be fake, especially with how absurd and shocking it is.
by tightbookkeeper on 10/5/24, 3:27 PM
by whamlastxmas on 10/5/24, 4:35 PM
by kthejoker2 on 10/6/24, 3:07 AM
It's funny we coach people not to ascribe human characteristics to LLMs...
But we seem equally capable of denying the very human characteristics of our would-be overlords.
Which warlord will we canonize next?
by wicndhjfdn on 10/5/24, 1:53 PM
by hnadhdthrow123 on 10/5/24, 1:54 PM
by rsynnott on 10/5/24, 2:50 PM
by mark_l_watson on 10/5/24, 4:32 PM
But, but, but… their drama, or Altman’s drama is now too much for me, personally.
With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).
I hope I don’t sound petty, but I just wanted to reduce their paid subscriber numbers by -1.
by flenserboy on 10/5/24, 1:57 PM
by ortusdux on 10/5/24, 5:57 PM
by yumraj on 10/5/24, 4:45 PM
So close, yet so far. And, both help the respective CEOs in hyping the respective companies.
by latexr on 10/5/24, 2:12 PM
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
by wnevets on 10/5/24, 5:52 PM
by est on 10/5/24, 2:04 PM
by cowmix on 10/5/24, 4:08 PM
First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?
Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.
And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.
by msephton on 10/6/24, 10:57 AM
by xyst on 10/5/24, 3:32 PM
by wg0 on 10/5/24, 3:55 PM
From robotics, neurology, transport to everything in between - not a word should be taken as is.
by m3kw9 on 10/5/24, 2:59 PM
by thwg on 10/5/24, 1:50 PM
by _davide_ on 10/5/24, 8:13 PM
by euphetar on 10/5/24, 7:30 PM
Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?
by dmitrygr on 10/5/24, 4:09 PM
by swiftcoder on 10/5/24, 6:40 PM
But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.
by richrichie on 10/5/24, 2:33 PM
by klabb3 on 10/5/24, 2:14 PM
Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)
> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
I mean, everything good in life uses energy; that's not AI's fault per se. However, we should absolutely evaluate tech anchored in the present, not the future, especially with something we understand as poorly as the emergent properties of AI. Even when there's an expectation of rapid change, the present is a much better proxy than yet another sociopath with a god complex whose job is to be a hype man. Everyone's predictions are garbage. At least the present is real.
by 7e on 10/5/24, 6:10 PM
by fnordpiglet on 10/5/24, 1:53 PM
The most laughable part of the article is where they point at the fact that in the past TWO YEARS we haven't gone from "OMG we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe, and everything" as some sort of huge failure. That framing is patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking - the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, though that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.
When did we go from enthralled by the rate of progress to bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that's still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water: our backs hurt, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot-com boom, or nanotech, etc., as they call out in the article.
But the dot-com boom did succeed. When I started at early Netscape, no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. The thing about amazing tech that sounds so dazzling when it's new is that -it blends into the background- if it truly is that amazingly useful. That's not a problem with the vision of the future. It's the fact that the present will never stop being the present and will never feel like the illusory, gauzy vision you thought it might be. But you still use dot-coms (this journalism major's assessment of tech was published on a dot-com, and we are responding on a dot-com), you still live in a world powered by nanotechnology, and what AI has delivered in TWO YEARS is still mind-boggling to anyone thinking clearly about what the goalposts for NLP and AI were five years ago.
by nottorp on 10/5/24, 2:18 PM
by throwaway918299 on 10/5/24, 4:12 PM
by cyanydeez on 10/5/24, 1:23 PM
See same with Elon Musk.
Money turns geniuses into smooth-brained, egomaniacal idiots. See same with Steve Jobs.
by slenk on 10/5/24, 4:40 PM
by nomilk on 10/5/24, 2:09 PM
The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.
> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.
What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?
What LLMs can do blows my mind daily. There might be some insufferable hype atm, but geez, the math and engineering behind LLMs is incredible, and it's not done yet - they're still improving from more compute alone, not even factoring in architectural discoveries and innovations!
by krick on 10/5/24, 4:38 PM
As a matter of fact, I suspect the author of the article actually belongs to gullible minority who ever took Altman at his word, and now is telling everyone what they already knew. But so what? What are we even discussing? Nobody calls to remove their OpenAI (or, in fact, Anthropic, or whatever) account, as long as we find it useful for something, I suppose. It just makes no difference at all if that writer or his readers take Altman at his word, their opinions have no real effect on the situation, it seems. They are merely observers.
by EchoReflection on 10/5/24, 6:55 PM
https://www.betterworldbooks.com/product/detail/the-sociopat...
by DemocracyFTW2 on 10/5/24, 1:08 PM
/s
by kopirgan on 10/5/24, 3:05 PM
by twelve40 on 10/5/24, 1:48 PM
This is a good reminder:
> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution
In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no Bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and the DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days", I guess!
by neuroelectron on 10/5/24, 8:34 PM
Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.
by whoiscroberts on 10/5/24, 2:58 PM
by mrangle on 10/5/24, 3:18 PM
Contrary to the Atlantic's almost always intentionally misleading framing, the "dot-com boom" did in fact go on to print trillions later, and it is still printing them - after what was an ultimately marginal, if account-clearing, dip for many.
I say that as someone who would be deemed to be an Ai pessimist, by many.
But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just as it was, and is, wild to declare the dot-com boom over.
by bediger4000 on 10/5/24, 1:17 PM
I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.
by m3kw9 on 10/5/24, 2:57 PM