by pcurve on 1/28/25, 1:17 AM with 132 comments
by openrisk on 1/28/25, 8:10 AM
DeepSeek followed Llama and will be followed by others, in the usual mushroom fashion of open source. People really don't appreciate the magnitude of the disruptive force unleashed by the open-source paradigm. A year from now the landscape will be brimming with new initiatives. In a few years nobody will even remember "Open"AI.
Conventional economic theory will always misread the future of computing (and thus "AI"). Zero marginal cost and infinite replicability are not a bug, they're a feature. But so far we don't really have a good model for how to think about that or merge it with mainstream business models. Something must pay the bills eventually, but these are very different bills from those of conventional scarcity-based businesses. Ironically, in the end the main scarcity is human ingenuity. Read the interview with the DeepSeek founder on why their models are open source.
by mbowcut2 on 1/28/25, 6:00 AM
by nextworddev on 1/28/25, 4:34 AM
VCs (especially those who missed out on OpenAI) are heavily incentivized to root for OpenAI to fail and to commoditize the biggest COGS line item (AI models).
This guy is just talking his book.
by Stokley on 1/28/25, 4:13 AM
by elijahbenizzy on 1/28/25, 5:03 AM
IMO the equivalent of Moore's law for AI (in both software and hardware development) is baked into the price, which makes the valuation not all that crazy.
by trhway on 1/28/25, 4:20 AM
And here comes DeepSeek and takes the steam out of this and the cost arguments that follow it.
by blackeyeblitzar on 1/28/25, 5:01 AM
I also still don't believe their cost figures, and think they're leaving out the capital to acquire their secret GPU stash and the cost of pre-training their base model (DeepSeek-V3-base). I also suspect their training corpus, which they've only vaguely described, would reveal that the savings came from building on other foundation models' work without counting those costs in their figure.
For now, I treat the cost claim as a calculated strategy: it keeps China from looking like it's behind in the most important race, it makes investors doubt the ROI of continuing to fund US technology, and it takes value out of the US stock market, as it did today.
by tempeler on 1/28/25, 5:20 AM
by stego-tech on 1/28/25, 5:15 AM
One point I'll agree on is his final one: that the true big players haven't even been founded yet. Right now, the AI hype still seems to revolve around the dream of replacing humans with machines while magically making Capitalism work in the process, which is something I (and other "contrarians") have beaten to death in other threads. That said, what these companies have managed to demonstrate is that transformer-based predictive models are a part of the future - just not AGI.
If I were a VC, I’d be looking at startups that take the same training techniques but apply them in niche fields with higher success rates than general models. An example might be a firm that puts in the grunt work of training a foundational model in a specific realm of medicine, and then makes it easier for a hospital network to run said model locally against patient data while also continuously training and fine-tuning the underlying model. I wouldn’t want to get into the muck of SaaS in these cases, because data sovereignty is only going to become an ever-thornier issue in the coming decades, and these prediction models can leak user data like a sieve if not implemented correctly. Same goes for other narrow applications, like single-mode logistics networks or on-site hospitality interfaces. The real money will be in the ability to run foundational models against your own data in privacy and security, with inference at the edge or on-device rather than off in a hyperscaler datacenter somewhere.
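To make the "inference at the edge, data stays local" part concrete: a hospital app could talk to a model served on the same machine, for example over llama.cpp's OpenAI-compatible llama-server. A rough TypeScript sketch of that shape, where the port, model name, and prompts are illustrative assumptions rather than any specific product:

```typescript
// Hypothetical sketch: query a model served on the same machine via
// llama.cpp's OpenAI-compatible llama-server (default http://localhost:8080),
// so the record being summarized never leaves the device.
// The model name and prompts are placeholders.
async function summarizeLocally(patientNote: string): Promise<string> {
  const resp = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-medical-model", // whichever locally loaded fine-tune
      messages: [
        { role: "system", content: "Summarize this clinical note for a shift handoff." },
        { role: "user", content: patientNote },
      ],
      temperature: 0.2,
    }),
  });
  if (!resp.ok) throw new Error(`local model server returned ${resp.status}`);
  const data = await resp.json();
  return data.choices[0].message.content;
}
```

The point of the shape, not the specifics: the network call terminates on localhost, so the sovereignty and leakage questions stay inside the building.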
Then again, I could be totally wrong. Guess we’ll all find out together.
by dralley on 1/28/25, 4:16 AM
The (regrettably temporary) ousting of Sam Altman looks like the right call, in hindsight. Of course some amount of showmanship is expected, but the extreme nature of this self-serving BS is just laughable.
Six months from now we may be looking at Sam Altman the way we look at Adam Neumann.
by a13n on 1/28/25, 5:51 AM
I feel like the author contradicts himself with his concluding point. There is a gold rush, and OpenAI is selling shovels.
by alephnerd on 1/28/25, 3:46 AM
by halfcat on 1/28/25, 4:39 AM
That’s where we are in the AI journey in 2025. The year 2000.
by tonyhart7 on 1/28/25, 4:37 AM
And that valuation would crumble because of DeepSeek.
by poorcedural on 1/28/25, 5:46 AM
by refulgentis on 1/28/25, 5:24 AM
I can't bring myself to treat it as an open question whether there's any long-term value.
I expect that within two years this will seem like an uncontroversial idea, one that won't carry a ton of assumptions about the speaker.
I have invested a lot of time and effort making sure local models are a peer to remote ones in my app, and none of them, including DeepSeek's local models, come remotely close to what's needed to make that flow work.
EDIT: Reply-throttled, so answering replies here:
- The machine is building the machine: Telosnex, a cross-platform Flutter app
- It can do 90% of the scope, especially after I wrote precanned instructions for doing e.g. property-based testing.
- Things it's done mostly wholesale (a rough sketch of the sandboxed-iframe idea follows after this list):
  -- A secure iframe environment, on all 6 platforms, to execute JS in or render React components it wrote.
  -- Completely refactoring my llama.cpp inference to use non-deprecated APIs.
- Codebase is about 40K real lines of code. (I have to think this helps a lot; I doubt that, e.g., from scratch it would be able to build a Flutter app that used llama.cpp.)
- $30/day!?! Yeah, it's crazy; it's up an order of magnitude from my busiest days when I just copy-pasted back and forth. It reads as much code as it wants, and you're literally doing more work, so it adds up.
- $20/day is a realistic average.
- Lines added per day +55%, lines deleted per day +29%, files changed per day 9 -> 21 https://x.com/jpohhhh/status/1881453489852948561
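The sandboxed-iframe sketch promised above: the core of it is the browser's sandbox attribute plus a MessageChannel. A simplified TypeScript illustration of the approach, not the actual Telosnex code; runUntrustedScript is just a placeholder name:

```typescript
// Simplified illustration of a sandboxed iframe for running model-written JS.
// runUntrustedScript is a placeholder name; this is not the Telosnex code.
function runUntrustedScript(code: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const iframe = document.createElement("iframe");
    // "allow-scripts" without "allow-same-origin": the code runs, but gets an
    // opaque origin with no access to the host page's DOM, storage, or cookies.
    iframe.setAttribute("sandbox", "allow-scripts");
    iframe.style.display = "none";

    const channel = new MessageChannel();
    channel.port1.onmessage = (e) => {
      iframe.remove();
      if ("error" in e.data) reject(new Error(e.data.error));
      else resolve(e.data.result);
    };

    // The iframe evaluates the code and reports back over the transferred port.
    iframe.srcdoc = `<script>
      onmessage = async (e) => {
        const port = e.ports[0];
        try {
          const result = await new Function(e.data.code)();
          port.postMessage({ result });
        } catch (err) {
          port.postMessage({ error: String(err) });
        }
      };
    <\/script>`;

    iframe.onload = () =>
      iframe.contentWindow?.postMessage({ code }, "*", [channel.port2]);
    document.body.appendChild(iframe);
  });
}
```

Rendering a React component it wrote is the same idea, with the srcdoc shipping a React bundle instead of a bare eval.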
by ripped_britches on 1/28/25, 6:21 AM
by Jasondells on 1/28/25, 1:35 PM
First, OpenAI's valuation is a bit wild: $157B at 13.5x forward revenue? That's a Meta/Facebook-level multiple at IPO, and OpenAI's economics don't scale the same way. Generative AI costs grow with usage, and compute isn't getting cheaper fast enough to balance that out. Throw in the $6B+ infrastructure spend for 2025, and yeah, there's a lot of financial risk. But that said... their growth is still insane. $300M in monthly revenue by late 2023? That's the kind of user adoption others dream about, even if the profits aren't there yet.
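To put those numbers side by side, a back-of-the-envelope check using only the figures quoted above (not audited data):

```typescript
// Back-of-the-envelope check using only the figures quoted above.
const valuation = 157e9;               // $157B reported valuation
const forwardMultiple = 13.5;          // 13.5x forward revenue
const impliedForwardRevenue = valuation / forwardMultiple; // ≈ $11.6B
const monthlyRevenue = 300e6;          // ~$300M/month cited above
const annualizedRunRate = monthlyRevenue * 12;             // ≈ $3.6B
console.log({ impliedForwardRevenue, annualizedRunRate });
```

The gap between the implied forward revenue (~$11.6B) and the current run rate (~$3.6B) is roughly how much growth is already priced in.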
Now, the “no moat” argument... sure, DeepSeek showed us what’s possible on a budget, but let’s not pretend OpenAI is standing still. These open-source innovations (DeepSeek included) still build on years of foundational work by OpenAI, Google, and Meta. And while open models are narrowing the gap, it’s the ecosystem that wins long-term. Think Linux vs. proprietary Unix. OpenAI is like Microsoft here—if they play it right, they don’t need to have the best models; they need to be the default toolset for businesses and developers. (Also, let’s not forget how hard it is to maintain consistency and reliability at OpenAI’s scale—DeepSeek isn’t running 10M paying users yet.)
That said... I get the doubts. If your competitors can offer “good enough” models for free or dirt cheap, how do you justify charging $44/month (or whatever)? The killer app for AI might not even look like ChatGPT—Cursor, for example, has been far more useful for me at work. OpenAI needs to think beyond just being a platform or consumer product and figure out how to integrate AI into industry workflows in a way that really adds value. Otherwise, someone else will take that pie.
One thing OpenAI could do better? Focus on edge AI or lightweight models. DeepSeek already showed us that efficient, local models can challenge the hyperscaler approach. Why not explore something like “ChatGPT Lite” for mobile devices or edge environments? This could open new markets, especially in areas where high latency or data privacy is a concern.
Finally... the open-source thing. OpenAI’s “open” branding feels increasingly ironic, and it’s creating a trust gap. What if they flipped the script and started contributing more to the open-source ecosystem? It might look counterintuitive, but being seen as a collaborator could soften some of the backlash and even boost adoption indirectly.
OpenAI is still the frontrunner, but the path ahead isn’t clear-cut. They need to address their cost structure, competition from open models, and what comes after ChatGPT. If they don’t adapt quickly, they risk becoming Yahoo in a Google world. But if they pivot smartly—edge AI, better B2B integrations, maybe even some open-source goodwill—they still have the potential to lead this space.
by coldpepper on 1/28/25, 4:21 AM
by andrewmcwatters on 1/28/25, 4:41 AM