by lwo32k on 5/30/25, 1:38 PM with 1240 comments
by simonsarris on 5/31/25, 3:20 AM
just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude the effect of this is just enormous and still being felt, and postings never recovered to pre-2020 levels. They may never. (Pre-pandemic job postings are indexed to 100; software is at 61.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, where pre-pandemic was indexed to 100. Banking jobs still hover around 100.)
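The indexing arithmetic behind those FRED charts is simple to sketch. The raw posting counts below are hypothetical, purely to show how the 61/120/116 figures relate to a base of 100, not actual FRED or Indeed data:

```python
# Rebase a series so the base-period value equals 100, as FRED's
# indexed job-postings charts do. Raw counts here are hypothetical.
def index_series(raw, base):
    return [round(100 * v / base, 1) for v in raw]

software_postings = [1000, 1400, 900, 610]  # made-up counts, pre-2020 onward
print(index_series(software_postings, base=1000))  # [100.0, 140.0, 90.0, 61.0]
```

On this scale, construction at ~120 simply means a fifth more postings than the pre-pandemic base period, and software at 61 means nearly 40% fewer.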
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
by idkwhattocallme on 5/30/25, 2:43 PM
by tdeck on 5/31/25, 2:17 AM
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
by sevensor on 5/30/25, 2:40 PM
by CKMo on 5/30/25, 8:56 PM
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.
by snowwrestler on 5/31/25, 1:34 AM
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It’s a form of marketing, as the CNN piece says.
by michaeldoron on 5/30/25, 3:07 PM
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI companies CEOs are creating a false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could even have written about the giant biotech industry and its struggle to incorporate novel, exciting drug discovery tools into slow-moving FDA approvals. None of this was done here.
by darth_avocado on 5/30/25, 2:37 PM
by CSMastermind on 5/30/25, 11:55 PM
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
by qgin on 5/31/25, 1:46 AM
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% human-work part will be consolidated and become the full time job of 20% of the original headcount while the remaining 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today ) to still result in large scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
As an employee, any efficiency gains you get from AI belong to the company, not you.
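The consolidation arithmetic above can be sketched directly. This assumes work is perfectly divisible and transferable between people, which is the comment's simplification, not a claim about any real team:

```python
import math

# If AI absorbs `ai_share` of each job, the residual human work can in
# principle be consolidated onto proportionally fewer people.
def remaining_headcount(workers, ai_share):
    return math.ceil(workers * (1 - ai_share))

print(remaining_headcount(100, 0.80))  # 20 of the original 100 keep a job
```

The same formula also illustrates the comment's point that AI needn't do 100% of a job: even at a 50% share, headcount halves.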
by deadbabe on 5/30/25, 9:51 PM
If you don’t snatch up the smartest engineers before your competition does: you lose.
Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.
Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.
AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.
by spcebar on 5/30/25, 3:06 PM
by bachmeier on 5/30/25, 3:14 PM
"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.
Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:
"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."
by monero-xmr on 5/30/25, 2:35 PM
It wasn’t just Elon. The hype train on self driving cars was extreme only a few years ago, pre-LLM. Self driving cars exist sort of, in a few cities. Quibble all you want but it appears to me that “uber driver” is still a popular widespread job, let alone truck driver, bus driver, and “car owner” itself.
I really wish the AI ceos would actually make my life useful. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem
by golol on 5/30/25, 2:40 PM
I truly believe these types of papers don't deserve to be valued so highly.
by hansmayer on 5/31/25, 11:39 AM
by fny on 5/30/25, 2:52 PM
This is not a matter of whether AI will replace humans wholesale. There are three predominant effects:
1. You’ll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.
2. The supply of capable humans increases dramatically.
3. Expertise is no longer a perfect moat.
I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.
And for all of you who poo poo that as unsustainable. I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot. Yes APL would be harder, but it’s definitely doable. This is an example of 3.
Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.
For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?
by chris_armstrong on 5/30/25, 11:01 PM
Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.
by elktown on 5/30/25, 6:11 PM
by infinitebit on 5/31/25, 6:00 AM
(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)
by Lu2025 on 5/31/25, 7:08 PM
by keybored on 5/30/25, 2:39 PM
Exactly. These people are growth-seekers first, domain experts second.
Yet I saw progressive[1] outlets reacting to this as a neutral reporting. So it apparently takes a “legacy media” outlet to wake people out of their AI stupor.
[1] American news outlets that lean social-democratic
by K0balt on 5/31/25, 12:28 PM
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
by econ on 5/31/25, 4:01 AM
by jona777than on 5/31/25, 1:22 PM
The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.
by josefritzishere on 5/30/25, 3:05 PM
by joshdavham on 5/31/25, 12:24 AM
Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out them in the process? Or is he moreso trying to sell to the C-suite?
by AnimalMuppet on 5/30/25, 3:01 PM
Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)
by veunes on 5/31/25, 12:53 PM
by HenryBemis on 5/31/25, 8:50 PM
This reminds me of the "Walter White" meme "I am the documentation". When the CEO of a company that makes LLMs says something like that, "I perk up and listen" (to quote the article).
When a doctor says "water in my village is bad quality, it gives diarrhea to 30% of the villagers", I don't need a fancy study from some university. The doctor "is the documentation". So if the Anthropic/ChatGPT/LLaMa/etc. (mixing companies and products, it's ok though) say that "so-and-so", they see the integrations, enhancements, compliments, companies ordering _more_ subscriptions, etc.
In my current company (high volume, low profit margin) they told us "go all in on AI". They see that (e.g. with Notion-like-tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks. So, paying $20-$30-$40 per person, per month, and that thing improving the productivity/output of an FTE by 20%-30% is a massive win.
So yes, we keep the ones we got (because mass firings, ministry of 'labour', unions, bad marketing, etc.). Headcount will organically be reduced (retirements, getting a new job, etc.) combined with minimizing new hires, and boom! savings!!
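The cost-benefit arithmetic implied above is easy to make explicit. The fully-loaded FTE cost below is a hypothetical figure, not from the comment:

```python
# Per-seat ROI sketch for the $20-$40/month AI tooling described above.
monthly_tool_cost = 30      # $ per person, midpoint of the quoted range
monthly_fte_cost = 6000     # hypothetical fully-loaded monthly cost of one FTE
productivity_gain = 0.25    # midpoint of the claimed 20%-30% improvement

value_of_gain = monthly_fte_cost * productivity_gain
print(f"seat cost ${monthly_tool_cost}/mo vs gain worth ${value_of_gain:.0f}/mo")
# Even at a tenth of the claimed gain, the seat pays for itself many times over.
```

On these assumptions the break-even productivity gain is 30/6000, i.e. half a percent, which is why the "go all in" directive is an easy call for management.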
by WaltPurvis on 5/30/25, 10:44 PM
I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
by Animats on 5/30/25, 9:03 PM
"Move fast and break things" - Zuckerberg
"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton
by 1vuio0pswjnm7 on 5/30/25, 5:01 PM
Yet when tech CEOs do the same thing, people tend to perk up."
Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.
For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.
If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.
No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.
by bayareapsycho on 6/1/25, 5:51 AM
The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018-2021
I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite
by cadamsdotcom on 5/30/25, 6:09 PM
I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”
by ghm2180 on 5/31/25, 5:04 AM
Demand for these products probably wasn't where it was expected at the time. Perhaps the answer to their biggest effect lies in how they will free up human potential and time.
If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.
by trhway on 5/30/25, 10:54 PM
by phendrenad2 on 5/30/25, 3:22 PM
I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, it happens to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power to people. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.
I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-Covid 19 healthcare bubble, fintech, NFTs, can they take another blow when the music stops?
by johnwheeler on 5/30/25, 2:40 PM
by ck2 on 5/30/25, 9:16 PM
Think of it as an IQ test of how new technology is used
Let me give you an easier example of such a test
Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year
Do you think the world will become more peaceful or much more war?
If you think peaceful, you fail, of course more war, it's all about oppression
It's always about the few controlling the many
The "freedom" you think you feel on a daily basis is an illusion quickly faded
by ArtTimeInvestor on 5/30/25, 3:17 PM
It flickers for a moment, then it either says
"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"
or
"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"
How would you use that information to invest in the stock market?
by topherPedersen on 5/31/25, 4:48 AM
by globalnode on 5/31/25, 12:59 AM
by randomname4325 on 5/31/25, 3:47 AM
by ggm on 5/30/25, 10:01 PM
Money is just rationing. If you devalue the economy implicitly you accept that, and the consequences for society at large.
Lenin's dictum comes to mind: "A capitalist will sell you the rope you hang him with."
by dottjt on 6/1/25, 2:52 AM
by indigoabstract on 6/1/25, 11:27 AM
1. cure cancer
2. fix the economy
3. keep everybody happily employed.
And he's saying we can only pick two, or pick one. Except for the last one, that's not really an option.
by leeroihe on 5/30/25, 8:37 PM
by throwaway48476 on 5/31/25, 10:57 PM
by DrillShopper on 5/30/25, 2:34 PM
by rjurney on 5/30/25, 3:29 PM
by smeeger on 5/30/25, 11:26 PM
by stephc_int13 on 5/30/25, 11:30 PM
I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.
Anyone paying attention should understand this fact by now.
There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.
by arthurcolle on 5/31/25, 3:17 AM
by givemeethekeys on 5/31/25, 4:14 PM
by bawana on 5/30/25, 8:44 PM
by givemeethekeys on 5/31/25, 4:11 PM
Even older people prefer to hire younger people.
by notyouraibot on 5/31/25, 7:58 AM
by osigurdson on 5/31/25, 1:16 AM
by nova22033 on 5/31/25, 8:40 PM
by whynotminot on 5/30/25, 2:40 PM
But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.
It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”
People like this have banked their futures on AI not working out.
by bawana on 5/30/25, 8:44 PM
by franczesko on 5/31/25, 6:06 PM
by infinitebit on 5/31/25, 5:46 AM
(ftr i’m not even taking a side re: will AI take all the jobs. even if they do, the reporting on this subject by MSM has been abysmal)
by gcanyon on 5/30/25, 11:50 PM
by atleastoptimal on 5/31/25, 9:23 AM
however there seems to be a big disconnect on this site and others
If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.
However many people don’t believe AGI is possible, thus will never consider those possibilities
I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.
by brokegrammer on 5/31/25, 6:12 AM
by Warh00l on 6/1/25, 2:24 PM
by paulluuk on 5/30/25, 2:48 PM
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But they are incredibly useful tools that can enhance efficiency of almost any job that involves sitting behind a computer. Even just something like copilot autocomplete or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" that you can get from a model that can actually run on your laptop is also getting much better very quickly.
The way I see it, either the AI hype will end up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype will live up to its promises, but likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model for government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
by theawakened on 5/31/25, 10:04 AM
by jatora on 5/31/25, 8:56 AM
It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.
I work as a marketing engineer at a ~1B company and the amount of gains I have been able to provide as an individual are absolutely multiplied by genAI.
One theory I have is that maybe it is a failing of prompt ability that is causing the doubt. Prompting, fundamentally, is querying vector space for a result, and there is a skill to it. There is a gross lack of tooling to assist in this, which I attribute to a lack of awareness of this fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of beyond a set of usual habits that work well for them.
Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced that as a leap then you are using LLMs incorrectly.
You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.
AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
I consistently make 200k+ token queries of code and context and receive highly accurate results.
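A minimal sketch of the workflow described above: keep running files of known model mistakes and current library syntax, and prepend them to every prompt. The file names and helper here are hypothetical illustrations, not any particular tool's API:

```python
from pathlib import Path

def build_prompt(task, context_files):
    """Prepend maintained context files (missing ones are skipped) to a task."""
    parts = []
    for name in context_files:
        p = Path(name)
        if p.exists():
            parts.append(f"## {p.name}\n{p.read_text()}")
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)

# Usage: the files are maintained by hand as mistakes and syntax notes
# accumulate, so every new query carries the accumulated corrections.
prompt = build_prompt(
    "Refactor the billing module to use the new client API.",
    ["known_mistakes.md", "current_syntax.md"],
)
```

Even a crude harness like this turns "inform it of its mistakes" from an ad-hoc habit into something repeatable across every query.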
I build 10-20k loc tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.
The empowerment of the single developer who is good at manipulating AI AND an experienced dev/engineer is absolutely incredible.
Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing - its right under your nose - constantly.
The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.
I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.
by rule2025 on 5/31/25, 7:44 AM
History is always strikingly similar: the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with it as soon as possible.