from Hacker News

Generative AI hype peaking?

by bwestergard on 3/10/25, 5:02 PM with 136 comments

  • by o_nate on 3/10/25, 5:34 PM

    There's an old game in the investing world of trying to time the top of a stock bubble by picking out the most breathless headlines and magazine covers, looking for statements such as the famous 1929 quote from two weeks before the market crash: "Stock prices have reached what looks like a permanently high plateau." By that metric, we may be getting close to the top of the AI hype bubble, with headlines such as the one I saw recently in the NY Times for an Ezra Klein column: "The Government Knows A.G.I. Is Coming".
  • by breckenedge on 3/10/25, 5:29 PM

    This article is way too light on the details. Does it conflate Nvidia’s stock price with interest in generative AI? New use cases for it are arriving every month. 9 months ago I was amazed to use Cursor, and I was leading the effort to get my team to switch to it. 3 months ago it was Cursor adding agents, and I was again trying to demonstrate the benefits to my colleagues. Now I’m using Cline + Claude 3.7 and I'm more productive than I’ve ever been — and I haven’t even touched MCPs yet.

    Definitely not peaked yet, IMO. That said, yeah, I don’t see it fully replacing developers in the next 1-2 years — it still gets caught in loops way too often and makes silly mistakes.

  • by daedrdev on 3/10/25, 5:29 PM

    Stocks are down because the president of the US has entered a costly trade war, actually.
  • by hnthrow90348765 on 3/10/25, 5:49 PM

    >We may look back in a decade and lament how self-serving and short-sighted employers stopped hiring less experienced workers, denied them the opportunity to learn by doing, and thereby limited the future supply of experienced developers.

    I think bootcamps will bloom again and companies will hire people from them. The bootcamp pipeline is way faster than a 4-year degree and easy to spin up if the industry decides the dev pipeline needs more juniors. Most businesses don't need CompSci degrees for the implementation work because it's mostly CRUD apps, so the degree often functions more as a signal of intellect than a requirement.

    This model has a few advantages for employers (provided the bootcamps aren't being predatory), like ISAs (income share agreements) and referrals. Bootcamp reputations probably need some work, though.

    What I think will go away is the bootstraps idea that you can self-teach, do projects by yourself, cold-apply to junior positions, and expect an interview on merit alone. You'll need to network to get an 'in' at a company, but that can be slow. Or do visible open source work, which is also slow.

  • by rvz on 3/10/25, 5:59 PM

    This is the year 1999 again. You have companies that are valued at tens of billions with no product AND no revenue.

    There is also a race to zero where the best AI models are getting cheaper, and big tech is there attempting to kill your startup (again) by lowering prices until the product is free, for as long as they want.

    More and more of the YC startups being accepted are so-called AI startups, which are just vehicles for OpenAI to copy the best ones while the other 90% of them die.

    This is an obvious bubble waiting to burst, with Big Tech coming out stronger, the AI frontier companies becoming a new elite group ("Big AI"), and the rest of the so-called startups getting wiped out.

  • by jsight on 3/10/25, 5:24 PM

    If the average person has still not ridden in a self-driving car, assembled by Figure 02-style robots, through a drive-thru with AI ordering, then we aren't even close to seeing the real peak here.

    >100x growth ahead for sure.

  • by qoez on 3/10/25, 5:43 PM

    One thing I'd love to short is the idea that we're going to have a second AI winter. Lots of people predict it, but I believe this time is actually a real step-function innovation (and last time the winter happened because AI was a very distant research project and the money dried up in the face of competition from the much more lucrative internet, which was growing at the same time).
  • by cenobyte on 3/10/25, 5:34 PM

    Anyone who thinks the hype has peaked is obviously too young to remember the dotcom bubble.

    It will get so much worse before it starts to fade.

    Infecting every commercial, movie plot, and article that you read.

    I can still hear the Yahoo yodel in my head from the radio and TV commercials.

  • by zzzeek on 3/10/25, 5:30 PM

    Sorry, did you not notice the advertisement for "AI Startup School" at the bottom of Hacker News? Ixnay on the egativity-nay, my friend!
  • by siliconc0w on 3/10/25, 5:59 PM

    IMO Grok and 4.5 show that we've reached the end of reasonable pre-training scaling. We'll see how far we can get with RL in post-training, but I suspect we're pretty close to maxed out there and will start seeing diminishing returns. The rest is just inference efficiency, porting the gains to smaller models, and building the right app-layer infrastructure to take advantage of the technology.

    I do think we're overbuilding on Nvidia: the CUDA moat isn't as big as people think, inference workloads will dominate, and purpose-built inference accelerators will be preferred in the next hardware cycle.

  • by ypeterholmes on 3/10/25, 5:49 PM

    So Deep Research and the latest reasoning models don't deserve mention here? I wish there were accountability on the internet, so that people posting stuff like this could be held accountable a year from now.
  • by skepticATX on 3/10/25, 5:43 PM

    The industry has only itself to blame. When you promise literal utopia and inevitably don’t deliver, you can’t be surprised by what happens next.
  • by _cs2017_ on 3/10/25, 5:59 PM

    Skeptical as I am about generative AI, the quality of this particular article (in terms of evidence provided, logic, insights, etc.) is substantially lower than what ChatGPT / Gemini DeepResearch can generate. If I were grading, I'd rate an average (unedited) AI DeepResearch report at 3/10, and the headline article at 1/10.
  • by zekenie on 3/10/25, 6:54 PM

    Idk, I used Claude Code recently and revised all my estimates. Even if the models stop getting better today, I think every product has years of runway before it incorporates these things effectively.
  • by gdubs on 3/10/25, 5:45 PM

    Something I've been saying for two years now is that AI is the most over-hyped and the most under-hyped technology, simultaneously.

    On the one hand, it has been two years of "x is cooked because this week y came out..."; on the other hand, there are people who seem to have formed their opinions based on ChatGPT 3.5 and have never checked in again on the state-of-the-art LLMs.

    In the same time period, social media has done its thing of splitting people into camps on the matter. So people – broadly speaking, no, not you, wise HN reader – are either in the "AI is theft and slop" camp or the "AI will bring about infinite prosperity" camp.

    Reality is way more nuanced, as usual. There are incredible things you can do today with AI that would have seemed impossible twenty years ago. I can quickly make a Python script that solves a real-world problem for me by giving fuzzy instructions to a computer. I can bounce ideas off of an LLM and, even if it's not always 'correct', it's still a valuable rubber ducky.

    If you look at the pace of development – compare MidJourney images from a few years ago to the relatively stable generative video clips being created today – it's really hard to say with a straight face that things aren't progressing at a dizzying rate.

    I can kind of stand in between these two extreme points of view, and paradigm-shift myself into them for a moment. It's not surprising that creative people who have been promised a wonderful world by technology are skeptical – lots of broken promises and regressions from big tech over the past couple of decades. It's also unclear why society would suddenly become redistributive once nobody has to work anymore, when the trend has been a concentration of wealth in the hands of the people who own the algorithms.

    On the other hand, there is a lot of drudgery in modern society. There's a lot of evolution in our brains that's biased toward roaming around picking berries, playing music, and dancing with our little bands. Sitting in traffic to go sit in a small phone booth and review spreadsheets is something a lot of people would happily outsource to an AI.

    The bottom line – if there is one – is that uncertainty and risk are also huge opportunities. But, it's really hard for anyone to say where all of this is actually headed.

    I come back to the simultaneity of over-hyped/under-hyped.

  • by OldGreenYodaGPT on 3/10/25, 5:48 PM

    Peaked? Nah, it's barely started. Wait till we get decent SWE agents reliably writing good code, probably later this year or next. Once AI moves beyond simple boilerplate, the productivity boost will be huge. Too soon to call hype when we've barely scratched the surface.
  • by ninetyninenine on 3/10/25, 5:48 PM

    I still say it’s too early to tell.

    It took a decade to reach LLMs. It will likely be another decade for AGI. There is still clear trendline progress, and we have a clear real-world target: actual human-level intelligence exists, so we know it can be done.

  • by th0ma5 on 3/10/25, 5:23 PM

    Is saying that you're critical of AI the new approach to being uncritical of it?
  • by codingwagie on 3/10/25, 5:34 PM

    People are just click farming with these posts. The technology is ~4 years old. We are in the infancy of this, with hundreds of billions in capital behind making these systems work. It's one of the biggest innovations of the last 100 years.

    I propose an internet ban for anyone calling the generative AI top, and a public tar and feathering.

  • by adpirz on 3/10/25, 5:41 PM

    Having used the latest models regularly, it does feel like we're at diminishing returns in terms of raw performance from GenAI / LLMs.

    ...but now it'll be exciting to let them bake. We need some time to really explore what we can do with them. We're still mostly operating in back-and-forth chats; I think there's going to be lots of experimentation with different modalities of interaction here.

    It's like we've just gotten past the `Pets.com` era of GenAI and are getting ready to transition to the app era.

  • by edanm on 3/11/25, 7:28 AM

    This article is only an opinion piece with no real evidence to back it up. I disagree with most of it. I'd argue against specifics, but there are no real specifics in the article, so I'm not sure I can do any better than say "no, I think you're wrong".

    I also think it does the common-but-wrong thing of conflating investment in big AI companies with how useful GenAI is and will be. It's completely possible for the investments in OpenAI to end up worthless, and for it to collapse completely, while GenAI still ends up as big as most people claim.

    Lastly, I think this article severely downplays how useful LLMs are now.

    > In my occupation of software development, querying ChatGPT and DeepSeek has largely replaced searching sites like StackOverflow. These chatbots generally save time with prompts like "write a TypeScript type declaration for objects that look like this", "convert this function from Python to Javascript", "provide me with a list of the scientific names of bird species mentioned directly or indirectly in this essay".
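
    To make the first of those quoted prompts concrete, here's roughly the kind of declaration such a chatbot tends to hand back (the sample object and the `Sighting` name are made up for illustration, not taken from the article):

        // Hypothetical object pasted into the prompt:
        //   { "id": 42, "species": "sandpiper", "tags": ["shorebird"], "lastSeen": "2025-03-01" }

        // A plausible chatbot answer: a TypeScript type inferred from that shape.
        interface Sighting {
          id: number;
          species: string;
          tags: string[];
          lastSeen: string; // ISO date string; could be narrowed to Date after parsing
        }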

    I mean, yes, they do that... but there are tools today that are starting to be able to look at a real codebase, get a prompt like "fix this bug" or "implement this feature", and actually do it. None of them are perfect yet, and they're all still limited... but I think you have to have zero imagination to think that they are going to stop exactly here.

    I think even with no fundamental advances in the underlying tech, it is entirely possible we will be replacing most programming with prompting. I don't think that will make software devs obsolete – it might be the opposite – but "LLMs are a slightly better StackOverflow" is a huge understatement.

  • by mordae on 3/10/25, 6:50 PM

    > has slackened modestly compared to late-2019 due to higher interest rates, the job market for less experienced developers seems positively dire.

    Maybe in the US.

  • by sunami-ai on 3/10/25, 5:34 PM

    This is all BS, tbh. People don’t know how to use the current generation of AI to do very useful reasoning.

    I keep posting our work as an example, and NO ONE here (Old HN is dead) has managed to point out any reasoning issues (most recently we redacted the in-between thinking, i.e. the thinking traces that people were treating as the final answer).

    I dare you to tell me this is not useful when we are signing up customers for trials daily:

    https://labs.sunami.ai/feed

  • by mirawelner on 3/10/25, 5:55 PM

    THANK

    GOD

  • by dwedge on 3/10/25, 5:42 PM

    The thing that puts me off AI the most is that I feel it's only free/cheap while we train it, and in 3 or 4 years it will be a few thousand a month and only available to corporate customers.