by cocoflunchy on 3/18/25, 9:56 AM with 81 comments
by Dorialexander on 3/18/25, 12:34 PM
Important background is the imminent rise of actual LLM agents, which I discuss in the next post: https://vintagedata.org/blog/posts/designing-llm-agents
So, answering a few comments:
* The shift is coming relatively soon thanks to the latest RL breakthroughs (I really encourage giving Will Brown's talk a look). Anthropic and OpenAI are close to nailing long multi-task sequences on specialized tasks.
* There are stronger incentives to specialize models and gate them; they are especially transformative on the industry side. Right now most of the actual "AI" market is still largely rule-based/ML. Generative AI was not robust enough, but now these systems can get disrupted, not to mention the many verticals with a big focus on complex yet formal tasks. I know large network engineering companies are scaling up their own RL capacities right now.
* Open-source AI is lagging behind so far due to a lack of data/frameworks for large-scale RL and task-related data. Though we might see a democratization of verifiers, it will take time.
Several people from big labs have reached out since then and confirmed that, despite the obvious uncertainties, this is relatively on point.
by smjburton on 3/18/25, 1:16 PM
This is a great observation on the current situation. Over the past few years, there's been a proliferation of AI wrappers in the SaaS space; however, because they use proprietary models, they are entirely dependent on the model providers to continue offering their solution, there's little to no barrier to entry for a competing product, and they're providing free training data to the model providers. Instead, as the article suggests, SaaS builders should look into open-source models (from places like GitHub, HuggingFace, or paperswithcode.com), or consider researching and training their own custom models if they want to offer long-term services to their users.
by dimitri-vs on 3/18/25, 12:47 PM
It also doesn't help that they let you select a model, ranging from 4o-mini to o1-pro, for the Deep Research task. But this confirms my suspicion that model selection is irrelevant for Deep Research tasks and for answering follow-up questions.
> Weirdly enough, while Claude 3.7 works perfectly in Claude Code, Cursor struggles with it and I've already seen several high end users cancelling their subscriptions as a result.
It's because Claude Code burns through tokens like there's no tomorrow, while Cursor attempts to carefully manage token usage and limit what's in context in order to remain profitable. It's gotten so bad that for any moderately complex task I switch to o1-pro or sonnet-3.7 in the Anthropic Console and max out the thinking tokens. They just released a "MAX" option, but I can still tell it's nerfed because it thinks for a few seconds, whereas I can get up to 2 minutes of thinking via the Anthropic Console.
It's abundantly clear that all these model providers are trying to pivot hard into productizing, which is ironic considering that the UX of all these model-as-a-product companies is so universally terrible. Deep Research is a major win, but OpenAI has plenty of fails: Plugins, Custom GPTs, Sora, Search (obsolete now?), and Operator are maybe just okay for casual users - not at all a "product".
by mjburgess on 3/18/25, 12:13 PM
This is plausible insofar as one can find a reason to suppose that compute costs for this specialisation will remain very high, and that the hard work of producing relevant data will be done best by those same companies.
I think it's equally plausible that compute will come down enough, and innovations in "post-training re-training" will occur, that you'll be able to bring this in-house within the enterprise/org. I.e., that "ML/AI Engineer" teams will arise like SEng teams.
Or that there's a limit to statistical modelling over historical cases, meaning specialisation is so exponentially demanding on historical case data production that it cannot practically occur in the places that would most benefit from it.
I think the latter is what will prevent the mega players in AI atm from making "the model the product" -- at the level they can specialise (i.e., given the amount of data needed), so can everyone else.
Perhaps these companies will transition into something SaaS-like, AI-Model-Specialisation-As-A-Service (ASS ASS) -- where they create bespoke models for orgs which can afford it.
by jjmarr on 3/18/25, 12:55 PM
I've been using Cline so I can understand the pricing of these models and it's insane how much goes into input context + output. My most recent query on openrouter.ai was 26,098 input tokens -> 147 output tokens. I'm easily burning multiple dollars an hour. Without a doubt there is still demand for cheaper inference.
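As a rough back-of-the-envelope, here is what that math looks like in Python; the per-token rates and the calls-per-hour figure are assumptions for illustration, not actual OpenRouter pricing:

    # Rough cost sketch for an agentic coding session.
    # All rates below are assumptions, not any provider's actual pricing.
    INPUT_PRICE_PER_M = 3.00    # assumed $ per 1M input tokens
    OUTPUT_PRICE_PER_M = 15.00  # assumed $ per 1M output tokens

    def query_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

    cost = query_cost(26_098, 147)   # the query above: roughly $0.08
    per_hour = cost * 30             # assuming ~30 such calls per hour in an agent loop
    print(f"${cost:.3f} per call, ~${per_hour:.2f} per hour")

The input side dominates: almost all of the spend is re-sending context, not generating output.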
by lukev on 3/18/25, 2:48 PM
This article is talking about models that have been trained specifically for workflow orchestration and tool use. And that is an important development.
But the fundamental architectural pattern isn't different: you run the model in some kind of harness that recognizes tool-use invocations, calls out to the external tool/RAG/codegen/whatever, then feeds the results back into the context window for additional processing.
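A minimal sketch of that harness loop, assuming a generic call_model function and a hand-rolled tool registry (none of these names are a real provider API):

    # Minimal agent-harness sketch. The loop lives outside the model:
    # the model only emits text or tool-call requests; the harness runs
    # the tools and feeds results back into the context window.
    import json

    TOOLS = {
        "read_webpage": lambda url: f"<stubbed contents of {url}>",  # stand-in for a real fetcher
    }

    def run_agent(user_request, call_model):
        messages = [{"role": "user", "content": user_request}]
        while True:
            reply = call_model(messages)                    # assumed: returns a dict
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            tool_call = reply.get("tool_call")
            if tool_call:
                result = TOOLS[tool_call["name"]](**tool_call["args"])
                messages.append({"role": "tool", "content": result})
                continue                                    # more context, another model call
            return reply["content"]                         # no tool call means final answer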
Architecturally speaking, the harness is a separate thing from the language model. A model can be trained to use Anthropic's MCP, for example, but the capabilities of MCP are not "part" of the model.
A concrete example: A model can't read a webpage without a tool, just like a human can't read a webpage without a computer and web browser.
I just feel like it's important to make a logical distinction between a model and the agentic system using that model. Innovation in both areas is going to proceed along related but different paths.
by huijzer on 3/18/25, 12:28 PM
Even before DeepSeek, prices were declining by about 90% per year at constant performance. The way to think about the economics is different, I think. Think of it as any other industry on a learning curve: chips, batteries, solar panels, you name it. The price in these industries keeps falling each year. The winners are the companies that can keep scaling up their production. Think TSMC, for example. Nobody can produce high-quality chips for a lower price than TSMC, due to economies of scale. For instance, one PhD at the company can spend 4 years optimizing a tiny part of the process, and it's worth it, because if it makes the process 0.001% cheaper to run, then the PhD has paid for itself at TSMC's scale.
So the economics of selling tokens does work. The question is who can keep scaling up long enough that the rest have to give up.
by bambax on 3/18/25, 1:10 PM
To me a "model" is a static file containing numbers. In front of that file is an inference engine that receives input from a user, runs it through the "model" and outputs the result. That inference engine is a program (not a static file) that can be generic (can run any number of models of the same format, like llama.cpp) or specific/proprietary. This program usually offers an API. "Wrappers" talk to those APIs and therefore, don't do much (they're neither an inference engine, nor a model) -- their specialty is UI.
But in this post it seems the term "model" covers a kind of full package that goes from the LLM to the UI, including a specific, dedicated inference engine?
If so, the point of the article would be that, because inference is in the process of being commoditized, the industry is moving to vertical integration so as to protect itself and create unique value propositions.
Is this interpretation correct?
by roenxi on 3/18/25, 11:48 AM
I don't see where that is coming from. Capacities aren't really measurable in that way. Computers either can do something like PhD-level mathematics research more or less under their own power or they cannot, with a small period of ambiguity as subhuman becomes superhuman. This process seems to me to have been mostly binary, with relatively clear tipping points separating models that can't do something from models that can. That isn't easily mapped back onto any sort of growth curve.
Regardless, we're in the stage of the boom where people are patenting clicking a button to purchase goods and services, thinking that might be a tricky idea. It isn't clear yet which parts of the product are easy and standard and which parts are difficult differentiators. People who talk in vague terms will turn out to be correct, and specific predictions will be right or wrong at random. It is hard to overstate how young all these practical models are. Stable Diffusion was released in 2022, and ChatGPT is younger than that - almost yesterday years old; this stuff is early-days magic.
Models could easily turn out to be a commodity.
by pkdpic on 3/18/25, 3:00 PM
I like the idea of a model being able to create and maintain a full codebase representing the app layer for model-based tools, but in practical terms, at work and on personal projects, I still just don't see it. To get a model to write even a small-scale, frontend-only app, I still have to make functions so atomic and test them so thoroughly that it feels close to the time it would take to write the app manually. And if I ask a model to write larger functions, or don't test and edit them through 3-5 rounds of re-prompting, I just end up with code debt that makes the project unrealistic to build out beyond a pretty limited MVP stage without going back line by line and basically rewriting the whole thing.
Anyway I'm no power user, curious what other people's experience is. Maybe I'm just using the wrong models.
by TrackerFF on 3/18/25, 3:55 PM
I'm starting to think that if you can control your data, you'll have somewhat of an edge, which I think could lead to people being more protective of their data. I guess we'll move more and more in the direction of premium paid data streams, while making scraping as hard as possible.
At least in the more niche fields, that work with data that isn't very commonplace and out there for everyone to download.
Kind of sucks for the open source crowds, non-profits, etc. that rely on such data streams.
by piokoch on 3/18/25, 12:12 PM
I hope the author is wrong and that there will still be someone who wants to make money "selling tokens" rather than end-to-end closed solutions. But indeed, the market will surely seek added value.
by rbren on 3/18/25, 1:25 PM
To be a bit hyperbolic, this is like saying all SaaS companies are just "compute wrappers", and are dead because AWS and GCP can see all their data and do all the same things.
I like to say LLMs are like engines, and we're tasked with building a car. So much goes into crafting a safe, comfortable, efficient end-user experience, and all that sits outside the core competence of companies that are great at training LLMs.
And there are 1000s of different personas, use cases, and workflows to optimize for. This is not a winner-take-all space.
Furthermore, the models themselves are commoditizing quickly. They can be easily swapped out for one another, so apps built on top of LLMs aren't ever beholden to a single model provider.
I'm super excited to have an ecosystem with thousands of LLM-powered apps. We're already starting to see it materialize, and I'm psyched to be part of it.
by bjornsing on 3/18/25, 12:02 PM
> Generalist scaling is stalling. This was the whole message behind the release of GPT-4.5: capacities are growing linearly while compute costs are on a geometric curve. Even with all the efficiency gains in training and infrastructure of the past two years, OpenAI can't deploy this giant model at remotely affordable pricing.
> Inference costs are in free fall. The recent optimizations from DeepSeek mean that all the available GPUs could cover a demand of 10k tokens per day from a frontier model for… the entire earth population. There is nowhere near this level of demand. The economics of selling tokens does not work anymore for model providers: they have to move higher up in the value chain.
Wouldn't the market find a balance then, where the marginal utility of additional computation is aligned with customer value? That fixed point could potentially be much higher than where we are now in terms of compute.
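For what it's worth, the quoted inference claim is easy to sanity-check with a back-of-the-envelope calculation; every figure below is an assumed round number, not a measurement:

    # Back-of-the-envelope for "10k tokens/day for the entire earth population".
    # All figures are assumptions for illustration.
    people = 8e9
    tokens_per_person_per_day = 10_000
    daily_demand = people * tokens_per_person_per_day         # 8e13 tokens/day

    tokens_per_gpu_per_second = 1_000   # assumed batched throughput for an optimized frontier model
    tokens_per_gpu_per_day = tokens_per_gpu_per_second * 86_400

    gpus_needed = daily_demand / tokens_per_gpu_per_day
    print(f"~{gpus_needed / 1e6:.1f} million GPUs")           # roughly a million, on the order of accelerators already deployed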
by mromanuk on 3/18/25, 2:13 PM
I'm not convinced. We tend to think in terms of problems and products (solutions): editing an image => Photoshop, writing a document => Word. I doubt we are going to move to "any problem => model". That's what ChatGPT is experimenting with via the calendaring/notification features. It breaks the concept that one brand solves one problem. The App Store is a good example: there are millions of apps. I find it really hard to believe that the "apps" can get inside the "model" and that we can expect the model to "generate an app tailored" to that problem at that moment; many new problems will emerge.
by lubujackson on 3/18/25, 4:42 PM
So AI will be more directed and opinionated, but also much easier to use for common tasks. And the "renting a server" option doesn't go away, just becomes less relevant for anyone in the middle of the bell curve.
by bob1029 on 3/18/25, 2:03 PM
In the 2nd figure, I think we have a viable pattern if you consider "Human" to be part of "Environment". Hypothetically, if one of the available functions to the LLM is something like AskUserQuestion(), you can flip the conversation mode around and have the human serve as a helpful agent during the middle of their own request.
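A sketch of what that might look like; the tool schema shape and the handler below are hypothetical, not any particular provider's format:

    # Hypothetical AskUserQuestion tool: the human becomes just another
    # function the agent can call in the middle of their own request.
    ASK_USER_QUESTION = {
        "name": "AskUserQuestion",
        "description": "Ask the human a clarifying question and wait for the reply.",
        "parameters": {
            "type": "object",
            "properties": {"question": {"type": "string"}},
            "required": ["question"],
        },
    }

    def ask_user_question(question: str) -> str:
        # The "environment" here is simply the person who made the original request.
        return input(f"Agent asks: {question}\n> ")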
by dartos on 3/18/25, 3:24 PM
Will specialized models also hit a usefulness wall like general models do? (I believe so)
And
Will the model's blind spots hurt more than the value the model creates? (Much more fuzzy and important)
If so, then even many specialized models will be a commodity and the application on top will still be the product end users will care about.
If not, then we’ll finally see the return on all this AI spending. Tho I think first movers would be at a disadvantage since they need much higher ROI to overcome the insane cost spent on training general models.
by trash_cat on 3/18/25, 5:12 PM
In simple terms, performing relatively simple RL on various tasks is what gives the models emergent properties, like DeepSeek managed to do with multi-step reasoning.
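Concretely, that usually means verifiable rewards; below is a simplified sketch in the spirit of the rule-based rewards DeepSeek described for R1 (the tag format and scoring values are assumptions):

    # Simplified verifiable-reward sketch: no learned reward model, just a
    # rule-based check on the final answer plus a small format reward.
    import re

    def reward(completion: str, ground_truth: str) -> float:
        match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
        if match is None:
            return 0.0                          # malformed output: no credit
        if match.group(1).strip() == ground_truth.strip():
            return 1.0                          # correct and well-formed
        return 0.1                              # wrong answer, but format respected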
The reasoning models and DeepSearch models are essentially of the same class, but applied to different types of tasks.
The underlying assumption, then, is that these "specialized" models are the next step for the industry, as the general models get outperformed (maybe).
by jimkoen on 3/19/25, 9:56 AM
What model is the author talking about? I would pay for that. Is Claude really THIS good, that it could manage the codebase of, say, PostgreSQL?
by DebtDeflation on 3/18/25, 12:18 PM
> What most agent startups are currently building is not agents, it's workflows, that is "systems where LLMs and tools are orchestrated through predefined code paths." Workflows may still bring some value
While this viewpoint will likely prove correct in the long run, we are pretty far away from that. Most value in an Enterprise context over the next 3-5 years will come from embedding AI into existing workflows using orchestration techniques, not from fully autonomous agents doing everything end to end "internally".
by gmaster1440 on 3/18/25, 12:41 PM
Highly agree with the sentiments expressed in this post, I wrote about something similar in my blog post on "Artificial General Software": https://www.markfayngersh.com/posts/artificial-general-softw...
by vessenes on 3/18/25, 1:29 PM
A couple of thoughts. As you note, hard infra/training investment has slowed in the last two years. I don't think this is surprising, although, as you say, it may be a market failure. Instead, I'd say it's circumstance + pattern recognition + SamA's success.
We had the bulk of model training fundraising done in the last vestiges of ZIRP, at least from funds raised with ZIRP money, and it was clear from OpenAI’s trajectory and financing that it was going to be EXXXPPPENSIVE. There just aren’t that many companies that will slap down $11bn for training and data center buildout — this is out of the scale of Venture finance by any name or concept.
We then had two eras of strategy assessment. First: infrastructure plays can make monopolies. We got (in the US) two "new firm" trial investments here, OpenAI and ex-OpenAI Anthropic. We also got at least Google working privately.
Then, we had "there is no moat" come out as an email, along with Stanford's Alpaca (I believe; a LLaMA fine-tune trained on GPT outputs) and a surge in interest and knowledge that small datasets pulled out of GPT 3/3.5/(4?) could very efficiently train contender models and small models to start doing tasks.
So, we had a few lucky firms get in while the getting was good for finance, and then we had a spectacularly bad time for new entrants: super high interest rates (comparatively) -> smaller funds -> massive lead by a leader that also weirdly looked like it could be stolen for $5k in API calls -> pattern recognition that our infrastructure period is over for now until there’s some disruption -> no venture finance.
I think we could call out that it's remarkable, interesting, and foresighted that Zuck chose this moment to plow billions into building an open model, and it seems like that may pay off for Meta: it's a sort of half step ahead of the next-gen tech in training know-how and iron, and a fast follower to Anthropic and OpenAI.
I disagree with your analysis on inference, though. Stepping back a level from the trees of raw tokens available to the forest of "do I have enough inference on what I want inferred, at the speed I want, right now?": the answer is absolutely not, by probably two orders of magnitude. With the current rise of using inference to improve training, we're likely heading into a new era of thinking about how models work and improving them. The end-to-end agent approach you mention is a perfect example. These queries take a long time to generate, often in the ten-minute range, from OpenAI. When they're under a second, Jevons paradox seems likely to make me want to issue like ten of them to compare, or to use as a "meta agent". Combine that with the massive utility of expanded context and the very real scaling problems of expanding attention into the millions-of-tokens range, and we have a ways to go here.
Thanks again, appreciated the analysis!
by bloomingkales on 3/18/25, 2:06 PM
The model is the talent. A talented model is good, but you need to know how to use it.
by aubanel on 3/18/25, 4:56 PM
Hard disagree.
1. "Capacities are growing linearly while compute costs are on a geometric curve" is the very definition of scaling. GPT-4.5 continuing this trend is the opposite of stalling: it's proof that scaling continues to work.
2. "OpenAI can't deploy this giant model with a remotely affordable pricing"? WTF? GPT-4.5 has the same price per token as GPT-4 at release. It seems high compared to other models, but it is still dirt cheap compared to human labor. And this model's increased quality means it is the only viable option for some tasks. I needed proofreading for my book: o1 and o3-mini were not up to the task, but GPT-4.5 really helps. GPT-4.5 is also a leap forward in agentic capabilities. So of course I'll pay for this; it saves me hours by enabling new use cases.
by dgfitz on 3/18/25, 1:27 PM
I have no desire to pay for any of these “products” even a little bit.