by firloop on 6/10/25, 9:17 PM with 527 comments
by grafmax on 6/11/25, 1:17 PM
> There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before
Real wages haven’t risen since 1980. Wealth inequality has. Most people have much less political power than they used to as wealth - and thus power - have become concentrated. Today we have smartphones, but also algorithm-driven polarization and a worldwide rise in authoritarian leaders. Depression and anxiety affect roughly 30% of our population.
The rise of wealth inequality and the stagnation of wages corresponds to the collapse of the labor movement under globalization. Without a counterbalancing force from workers, wealth accrues to the business class. Technological advances have improved our lives in some ways but not on balance.
So if we look at people’s well-being, society as a whole hasn’t progressed since the 1980s; in many ways it’s gotten worse. Thus the trajectory of progress described in the blog post is make-believe. The utopia Altman describes won’t appear. Mass layoffs, if they happen, will further concentrate wealth. AI technology will be used more and more for mass surveillance, algorithmic decision making (that would make Kafka blush), and cost cutting.
What we can realistically expect is lowering of quality of life, an increased shift to precarious work, further concentration of wealth and power, and increasing rates of suffering.
What we need instead of science fiction is to rebuild the labor movement. Otherwise “value creation” and technology’s benefits will continue to accrue to a dwindling fraction of society. And more and more it will be at everyone else’s expense.
by flessner on 6/11/25, 5:10 AM
It was probably around 7 years ago when I first got interested in machine learning. Back then I followed a crude YouTube tutorial which consisted of downloading a Reddit comment dump and training an ML model on it to predict the next character for a given input. It was magical.
I always see LLMs as an evolution of that. Instead of the next character, it's now the next token. Instead of GBs of Reddit comments, it's now TBs of "everything". Instead of millions of parameters, it's now billions of parameters.
Over the years, the magic was never lost on me. However, I can never see LLMs as more than a "token prediction machine". Maybe throwing more compute and data at it will at some point make it so great that it's worthy to be called "AGI" anyway? I don't know.
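The "token prediction machine" described above can be sketched in miniature. This is a toy bigram model that predicts the next character from frequency counts, a stand-in for the neural net in the tutorial (the corpus and function names are illustrative, not from the original tutorial):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it and how often."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequent next character, or None if ch was never seen."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

corpus = "the cat sat on the mat. then the cat ran."
model = train_bigram(corpus)
print(predict_next(model, "t"))  # 'h' — 't' is followed by 'h' most often here
```

Swap the frequency table for a neural network, characters for tokens, and one small corpus for terabytes of text, and you have the evolution the comment describes.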
Well anyway, thanks for the nostalgia trip on my birthday! I don't entirely share the same optimism - but I guess optimism is a necessary trait for a CEO, isn't it?
by daxfohl on 6/11/25, 3:58 PM
If the "mistake" is that of concentrating too much power in too few hands, there's no recovery. Those with the willingness to adapt will not have the power to do so, and those with the power to adapt will not have the willingness. And it feels like we're halfway there. How do we establish a system of checks and balances to avoid this?
by rcarmo on 6/11/25, 6:30 AM
by wolecki on 6/10/25, 10:21 PM
>Intelligence too cheap to meter is well within grasp
And also:
>cost of intelligence should eventually converge to near the cost of electricity.
Which is a meter-worthy resource. So intelligence's effect on people's lives would be on the order of one second of toaster use per day, in present value. This raises the question: what could you do with a toaster-second, say, 5 years from today?
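The toaster comparison checks out as a back-of-envelope calculation, assuming a typical 1200 W toaster (an assumption here, not a figure from the post) and the 0.34 Wh/query number quoted elsewhere in the thread:

```python
# Back-of-envelope check of the "toaster-second" claim.
QUERY_WH = 0.34    # watt-hours per ChatGPT query, as quoted in the post
TOASTER_W = 1200   # watts — a common toaster rating (illustrative assumption)

query_joules = QUERY_WH * 3600           # 1 Wh = 3600 J
toaster_seconds = query_joules / TOASTER_W
print(round(toaster_seconds, 2))         # 1.02 — about one second of toasting
```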
by TheAceOfHearts on 6/11/25, 12:16 AM
More broadly, I wonder how many key insights he thinks are actually missing for AGI or ASI. This article suggests that we've already cleared the major hurdles, but I think there are still some major keys missing. Overall his predictions seem like fairly safe bets, but they don't necessarily suggest superintelligence as I expect most people would define the term.
by Amekedl on 6/12/25, 10:27 AM
It'll govern our choices, shape our realities, and enforce its creators' priorities under the guise of objective, superior intelligence. This 'superintelligence' won't be a benevolent oracle, but a sophisticated puppet – its strings hidden behind layers of complexity and marketing hype. Decisions impacting lives, resources, and freedoms will be made by algorithms fundamentally skewed by corporate agendas, dressed up as inevitable, logical conclusions.
The danger isn't just any bias; it's the institutionalization of bias on a massive scale, presented as progress.
We'll be told the system 'optimized' for efficiency or profit, mistaking corporate self-interest for genuine intelligence, while dissent gets labeled as irrationality against the machine's 'perfect' logic. The conceit lies in believing their engineered tool is truly autonomous wisdom, when it's merely power automated and legitimized by a buzzword. AI LETS GOOOOOOOOOOOOO
by physix on 6/10/25, 11:20 PM
by daxfohl on 6/10/25, 10:20 PM
Famous last words.
by BjoernKW on 6/10/25, 10:36 PM
by kristjank on 6/10/25, 10:09 PM
Between this and Ed Zitron at the other end of the spectrum, Ed's a lot more believable, to be honest.
by k2xl on 6/10/25, 9:28 PM
> 2026 will likely see the arrival of systems that can figure out novel insights
The level of confidence is interesting compared to recent comments by Sundar [1]. Satya [2] is also a bit more reserved in his optimism.
[1] https://www.windowscentral.com/software-apps/google-ceo-agi-...
[2] https://www.tomshardware.com/tech-industry/artificial-intell...
by minimaxir on 6/10/25, 10:01 PM
This isn't correct: people want good software and good art, and the current trajectory of how LLMs are used on average in the real world unfortunately runs counter to that. This post doesn't offer any forecasts on the hallucination issues of LLMs.
> As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours
This is the first time a number has been given for ChatGPT's cost-per-query, and it is obviously much lower than the 3 watt-hours still cited by detractors, but there are a lot of asterisks in how that number might be calculated (is watt-hours even the right unit of measurement here?).
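On the unit question: watt-hours measure energy (power integrated over time), while watts alone measure power, so per-query energy in Wh is the right kind of quantity. Scaling it to a fleet shows why the distinction matters; the query volume below is a purely illustrative assumption, not a disclosed figure:

```python
# Scale the per-query energy figure to a hypothetical daily query volume.
WH_PER_QUERY = 0.34                # from the post
QUERIES_PER_DAY = 1_000_000_000    # illustrative assumption only

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh
print(daily_mwh)  # 340.0 MWh/day under these assumptions
```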
by Caelus9 on 6/11/25, 4:27 AM
by muglug on 6/10/25, 10:17 PM
I see a similar thing at work — the number of projects a developer can get through isn’t bounded by the lines of code they can churn out in a day. Instead it’s bounded by their appetite for getting shouted at when something goes wrong.
Until you can shout at an LLM, I don’t think they’re going to replace humans in the workplace.
by stared on 6/10/25, 10:13 PM
Gary has invested heavily in an anti-AI persona, continually forecasting AI Winters despite wave after wave of breakthroughs.
Sam, on the other hand, is not just an AI enthusiast; he speaks in a manner designed to build the brand, influence policy, and continuously boost OpenAI's valuation and consolidate its power. It's akin to asking the Pope whether Catholicism is true.
Of course, there might indeed be significant roadblocks ahead. It’s also possible that OpenAI might outpace its competitors—although, as of now, Gemini 2.5 Pro holds the lead. Nevertheless, whenever we listen to highly biased figures, we should always take their claims with a grain of salt.
by patchorang on 6/11/25, 12:26 PM
It left off ingredients. The very gentle singularity…
by colesantiago on 6/10/25, 9:23 PM
So when AGI comes, I am curious what the new jobs are?
I see that prompt engineer is one of the jobs created, because it's the way to ask an LLM to do certain tasks, but now AI can do this too.
I'm thinking that any new jobs AI would make, AI would just take them anyway.
Are there new jobs coming from this abundance that is on the horizon?
by afavour on 6/10/25, 10:01 PM
I heard similar things in my college dorm, amid all the hazy smoke.
It’s very difficult to take this stuff seriously. It’s like the initial hype around self driving cars wound up by 1000x. Because we got from 1 to 100 of course we’ll get from 100 to 200 in the same amount of time. Or less! Why would you even question it?
by abalaji on 6/10/25, 9:31 PM
Does anyone know if there are well-established scaling laws for reasoning models, similar to Chinchilla scaling? (I.e., is the above claim valid?)
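For reference, the Chinchilla parametric fit (Hoffmann et al., 2022) predicts pretraining loss from parameter count and token count. Note it was fit on pretraining loss, not reasoning or RL post-training, so whether anything analogous holds for reasoning models is exactly the open question. The constants below are the published fitted values:

```python
def chinchilla_loss(n_params, n_tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Chinchilla parametric loss fit: L(N, D) = E + A/N^alpha + B/D^beta.

    E is the irreducible loss; the other terms shrink as model size (N)
    and training tokens (D) grow. Constants are the published fitted values.
    """
    return E + A / n_params**alpha + B / n_tokens**beta

# Chinchilla itself: ~70B parameters trained on ~1.4T tokens.
print(round(chinchilla_loss(70e9, 1.4e12), 3))
```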
by justlikereddit on 6/11/25, 9:52 AM
Like a storefront advertising "live your wildest dreams" in pink neon. A slightly obese Mediterranean man with questionable taste tries to get you into his fine establishment. And if you do enter the first thing that meets you is an interior with a little bit too many stains and a smell of cum.
That's the vibe I get whenever Sam goes on his bi-quarterly AGI hype spree.
by hazbot on 6/11/25, 12:30 PM
by cs702 on 6/11/25, 12:13 PM
On the other hand, we may need more practical/theoretical breakthroughs to be able to build AI models that are reliable and precise, so they stop making up stuff "whenever they feel like it." Unfortunately, the timing of breakthroughs is not predictable. Maybe it will take months. Maybe it will take a decade. No one knows for sure.
by ldsjfldsjf on 6/11/25, 5:23 AM
Things like when to create an ugly hack because the perfect solution may result in your existing customers moving over to your competitor. When to remove some tech debt and when to add to it.
When to do a soft delete vs. when to do a purge. These things are learnt when a customer shouts at you and you realize that you may be the most intelligent kid on the block, but it won't really help the customer tonight, because the code is already deployed and your production deployment requires a maintenance window.
by ben_w on 6/11/25, 11:52 AM
My blog posts didn't age all that well, and I've learned to be a little more sceptical about the speed of technological change, just as the political events over the intervening years have made me more aware of how fast political realities can change: https://benwheatley.github.io/blog/2016/04/12-00.31.55.html and https://benwheatley.github.io/blog/2022/09/20-18.35.10.html
* at least until the rate of change of curvature gets so high you're spaghetti, you're (approximately) co-moving with the light from your own body. This means that when you cross the event horizon, you still see your own legs, even though the space the light is in is moving towards the singularity faster than the light itself moves through that space: https://youtu.be/4rTv9wvvat8?feature=shared&t=516
by Teever on 6/11/25, 5:35 AM
> If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different.
It's really cool to hear a public figure seriously talk about self-replicating machines. To me this is the key to unlocking human potential and ending material scarcity.
If you owned a pair of robots that, with sufficient spare parts, could repair each other and do other useful work, you could effectively do anything by using them to do all the things necessary to build copies of themselves.
Once we as a species have exponential growth on that we can do anything. Clean energy, Carbon sequestration, Von Neumann probes, asteroid mining, O'Neill Cylinders, it's all possible.
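The arithmetic of the scenario above is just repeated doubling. Starting from the "first million" robots in the quote, a toy model shows how few replication cycles it takes to reach planetary scale (the one-doubling-per-cycle rate and the 8-billion target are illustrative assumptions, not claims from the post):

```python
# Toy model: start with the "first million" robots; each replication
# cycle doubles the fleet. Cycle length and doubling rate are assumptions.
fleet = 1_000_000
cycles = 0
while fleet < 8_000_000_000:   # until roughly one robot per human
    fleet *= 2
    cycles += 1
print(cycles)  # 13 — since 1M * 2**13 = 8.192B
```

The point is not the exact numbers but the shape of the curve: once replication is self-sustaining, the bottleneck shifts from manufacturing to raw inputs like minerals and energy.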
by candlemas on 6/10/25, 10:52 PM
by michaelbarton on 6/11/25, 12:35 AM
I enjoy technology, but less and less so each year, because it increasingly feels like there's some kind of disconnect with the real world that's hard to put my finger on.
by woopsn on 6/11/25, 12:19 AM
by jfernandez on 6/10/25, 10:14 PM
by lubujackson on 6/11/25, 12:19 AM
I get the gist of what he is saying, but I really think that most of the "idea guys" who never got farther than an idea will stay that way. Sure, they might spit out a demo or something, but from what I've seen the "idea guys" tend to be the "I want to play business" guys who have read all the top books and refine their PowerPoints but never actually seem to get around to, you know, running a business. I think there is an underlying difference there.
I do see AI as a great accelerator. Just as scripting languages suddenly unlocked some designers who could make great websites but couldn't hang with pointers and malloc, I think AI will unlock great idea guys who can make great apps or businesses. But it will be fewer people than you think, because "building the app" is rarely the biggest challenge - focused intent is much harder to come by.
I do think the age of decently robust apps getting shat out like Flash games is going to be fun and chaotic, and I am 100% here for it.
by pacificmaelstrm on 6/11/25, 1:15 PM
Oh right, Sam Altman.
by Rastonbury on 6/12/25, 6:40 PM
by botverse on 6/11/25, 7:02 PM
by thenthenthen on 6/11/25, 3:56 PM
by tdstein on 6/11/25, 1:42 AM
How rude.
by seydor on 6/10/25, 11:03 PM
"Rich" is a relative term, the existence of the rich requires the existence of the poor, and according to the article the rich will get richer much faster than the poor. There's nothing gentle about this singularity
by spacecadet on 6/10/25, 11:02 PM
by philipallstar on 6/11/25, 1:54 PM
by unchocked on 6/10/25, 10:07 PM
by GolfPopper on 6/11/25, 2:37 AM
by ChrisMarshallNY on 6/10/25, 10:13 PM
That’s heresy, with this crowd. Everyone wants to be The Gatekeeper, and get filthy rich.
by martinohansen on 6/12/25, 2:11 PM
by rsynnott on 6/11/25, 10:30 AM
I mean, I suppose Sam loves ChatGPT like his own child, but I would struggle to describe any of its output as 'beautiful'! 'Grating' would be the word that springs to mind, generally.
by w-hn on 6/12/25, 4:23 AM
> AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present
And then nothing substantial after this proclamatory hot-take. So let’s just choose to believe le ai propheté.
It reads like quick instructions to a PR team, written up as a post (and then inflated by an LLM) from the comfort of a warm and cozy Japanese seat.
Definitely not written by Gemini, at least; it usually does a better job than this. Well, at least, like Zuck, he eats his own food that he killed.
Will be looking forward to titles like “The Vehement Duality” &c in the near future.
by jes5199 on 6/11/25, 3:38 AM
by chasing on 6/10/25, 10:18 PM
"Hey, it'd be a shame if somethin', uh, happened to that nice bit of expertise ya got there, y'know. A darn shame."
by lucas_membrane on 6/10/25, 10:05 PM
by zafka on 6/10/25, 10:22 PM
by egypturnash on 6/10/25, 11:08 PM
by rblion on 6/11/25, 11:23 PM
by d4rkn0d3z on 6/11/25, 9:21 AM
Sam you need to touch grass.
by akomtu on 6/11/25, 11:43 PM
I bet he wants to be the first to "plug in" and become the first AI enhanced human.
by tim333 on 6/12/25, 2:53 PM
by kvetching on 6/10/25, 9:56 PM
by gadders on 6/11/25, 12:31 PM
by Briannaj on 6/11/25, 2:18 AM
by Mikhail_K on 6/11/25, 9:39 AM
by satisfice on 6/10/25, 10:07 PM
Other people know a novel cannot be written by a machine— because a novel is human by definition, and AI is not human, by definition.
It’s like wondering if a machine can express heartfelt condolences. Certainly a machine can string words together that humans associate with other humans who express heartfelt condolences, but when AI does this they are NOT condolences. Plagiarized emotional phrases do not indicate the presence of feeling, just the presence of bullshit.
by decimalenough on 6/11/25, 2:27 AM
Yeah dawg imma need a citation for that. Or maybe by "the world" he means "Silicon Valley oligarchs", who certainly have been "entertaining" all sorts of "new policy ideas" over the past half year.
by sreekanth850 on 6/11/25, 11:17 AM
by NoGravitas on 6/11/25, 5:07 PM
by nektro on 6/11/25, 9:38 PM
by Mobius01 on 6/10/25, 10:11 PM
"probably", "if they embrace the new tools". Hard to read anything but contempt for the role of humans in creative endeavors here, he advocates for quantity as a measure of success.
by ryandv on 6/10/25, 10:49 PM
Not sure if wishful thinking trying to LARP-manifest this future into being, or just more unfalsifiable thinking where we can always be said to be past the event horizon and near to the singularity, given sufficiently underwhelming definitions of "event horizon" and "nearness."
by paxys on 6/10/25, 10:54 PM
by xg15 on 6/10/25, 10:07 PM
How about affordable housing?
by insane_dreamer on 6/11/25, 3:27 PM
No "the world" won't be getting richer. A small subset of individuals will be getting richer.
The "new policy ideas" (presumably to help those who are being f*d by all this) have been there all along. It's just that those with the wealth don't want to consider them. Those people having even more wealth does _not_ make them more likely to consider those ideas.
Honestly this drivel makes me want to puke.
by ninetyninenine on 6/11/25, 3:48 AM
I don't know if gentle is the right word. Maybe gentle face-hugger and chest-burster is more apt. It's slowly infecting us and eating us from the inside, but so gently that we don't even care. We just complain on HN while doing nothing about it.