from Hacker News

How Does OpenAI Survive?

by fredski42 on 8/1/24, 2:18 AM with 188 comments

  • by sweettea on 8/1/24, 3:23 AM

    A well-reasoned article that nonetheless fundamentally downplays both the pace of innovation and the exponential increase in capabilities per dollar over time. AI is rapidly accelerating its improvement rate, and I do believe its capabilities will continue growing exponentially.

    In particular, GPT-2 to GPT-4 spans an increase from 'well read toddler' to 'average high school student' in just a few years, while simultaneously the computational cost of training a model at any given capability level has fallen just as fast.

    Also worth noting: the article claims Stripe, another huge money-raiser, had an obviously useful product. gdb, sometime CTO of Stripe and its fourth employee, is now president of OpenAI. And, most of all, the author doesn't remember how non-obvious Stripe's utility was in its early days, even in the tech scene: there were established ways to take people's money, and it wasn't clear why Stripe had an offering worth switching to.

    For an alternate take, I think https://situational-awareness.ai provides a well-reasoned argument about the current status of AI innovation and its growth rate, and addresses all of the points here in a general (though not OpenAI-specific) way.

  • by Animats on 8/1/24, 7:00 AM

    This question can be asked about several "unicorns" that need huge inputs of capital to keep going. Only the era of zero interest rates made them work.

    WeWork went bankrupt. Uber briefly made money but is losing it again, and is nowhere near paying back its investors. Tesla has become a major luxury car company, and is somewhat profitable, but the stock is way overpriced for a car company. Everybody now makes electric cars, so this is a low-margin business. (Reuters: "Tesla's bleak margins sink shares as Musk hypes everything but cars.")

    OpenAI, as a business, is assuming both that LLM-type AI will get much better very fast, and that everybody else won't be able to do what they do. It's unlikely that both of those assumptions hold. Look at autonomous vehicles. First tech demos (CMU) in the 1980s. First reasonably decent demos (DARPA Grand Challenge) in the 2000s. First successful deployment in the 2020s (Waymo, maybe Cruise and Zoox). Still not profitable. 40 years from first demos to deployment, probably 50 to profitability. It's entirely possible that OpenAI's business will look like that. Their burn rate is way too high to sustain for that long.

    Often it takes that long, even when the basics have been figured out. Xerography was first demoed in the late 1930s. The demo machine used to be in the lobby at Xerox PARC. Profitability came in the 1960s. By the late 1970s, everybody had the technology, and it was low-margin. Electronic digital computing goes back to IBM's 1940s pre-WWII electronic multiplier experiments, but didn't come down from insanely expensive price levels until the 1980s. Memory was a million dollars a megabyte as late as the mid-1970s. Color television was first demoed in 1928, and the first color CRT was developed in the 1940s. But mainstream adoption didn't come until 1966-1967.

  • by why_only_15 on 8/1/24, 4:20 AM

    If you believe this article, I would be happy to bet up to $10,000 that OpenAI will not collapse in the next 24 months. We could operationalize this by saying that OpenAI will employ at least as many people (or pay at least as much in salaries) in 24 months as it does now. The author of the article refused to take this bet when it was offered to him several times https://x.com/JeffLadish/status/1817999232105627722.

  • by mxwsn on 8/1/24, 4:12 AM

    This article is timely and pairs well with Sequoia's $600B question: https://www.sequoiacap.com/article/ais-600b-question/, calculated simply from Nvidia's run-rate revenue, which is the cost that genAI companies are paying. Where's the profit?

    Meta's open source LLM stance makes things spicier, making it challenging for anyone to generate differentiated and lasting profit in the LLM space.

    At the current pace, the LLM bubble is poised to pop in a year or two - net losses can't keep growing forever - barring a transformative, next-generation capability from closed-source AI companies that Meta can't replicate. All eyes on GPT-5.

  • by comp_throw7 on 8/1/24, 5:11 AM

    > Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.

    Like they already did in the last 2 years?

    > Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.

    Huh, what are these use-cases which no AI researcher thinks AI is capable of solving? Does the author not realize that many employees at the leading AI labs (including OpenAI) are explicitly trying to build ASI? I am so confused????????

    > Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.

    Why would they have to create new jobs? They just have to be good enough that OpenAI can charge enough money for them to be in the green.

    OpenAI already has a $3.4 billion ARR! Most of that is _not_ enterprise sales.

  • by ethbr1 on 8/1/24, 3:10 AM

    > Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.

    AT&T.

  • by jaygray0919 on 8/1/24, 12:21 PM

    Scott Galloway made his bones with a similar analysis of WeWork. Once Galloway exposed WeWork's unsustainable economics, others picked up his work and added their own spin. While WeWork would have imploded even without the Galloway analysis, Scott's articles were the tipping point.

  • by funfunfunction on 8/1/24, 3:09 AM

    > generative AI is a product with no mass-market utility.

    > I am neither an engineer nor an economist.

    clearly.

  • by fragmede on 8/1/24, 4:37 AM

    I think the thing that nobody saw coming was Facebook coming out with their own model, which they spent hundreds of millions creating, and then releasing it with a generous license.

    If someone had come out with a copy of Google in 2000, we'd be looking at a much different picture.

  • by Arainach on 8/1/24, 3:30 AM

    >Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.

    What does reducing costs by "a factor of thousands of percent" mean? It starts printing money? It costs 1/10 as much?

  • by linotype on 8/1/24, 6:25 AM

    > generative AI is a product with no mass-market utility - at least on the scale of truly revolutionary movements

    This line is absurd. I use it constantly. 4o reads my code and generates documentation and type annotations. It generates boilerplate code. It generates logos for projects. I review all of its code output and make the odd correction here or there. I use it to check over documents before I send them. It’s replaced Stack Overflow entirely in my workflow.

    I’m curious as to what’s above the author’s line for revolutionary.

  • by dekhn on 8/1/24, 5:55 PM

    OpenAI will survive by being absorbed into Microsoft as a capability: OpenAI employees will be MSFT-badged, working under an MSFT SVP, to add functionality to existing products.

  • by tim333 on 8/1/24, 8:59 PM

    To comment a little on the economics:

    OpenAI has raised $11.3bn (source the article)

    Since partnering with Microsoft in 2019, Microsoft's valuation has gone from $0.7tn to $3.1tn, an increase of $2.4tn, a lot of that on AI enthusiasm.

    Microsoft can sell some shares to fund OpenAI, $2.4tn being about 200x what they've put in.

    Sure, the market bubble will pop at some stage, but not by 200x. I'm skeptical of the "they can't survive" argument.
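
    As a rough check of the 200x multiple (a sketch using only the comment's own figures):

```python
# Ratio of Microsoft's market-cap gain to OpenAI's total raised,
# per the figures in the comment above.
raised = 11.3e9          # $11.3bn raised by OpenAI (per the article)
valuation_gain = 2.4e12  # Microsoft's market-cap increase since 2019

print(round(valuation_gain / raised))  # 212, i.e. roughly 200x
```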

    Also, I recall people saying in the early days of Facebook, Google, and Amazon: they lose money each year, the first two don't have a monetization model, how will they get by? But of course they ended up among the world's most profitable companies. With AI you also have to think a few years down the road, when ASI's output may exceed the current global GDP ($100tn or so).

  • by undergod on 8/1/24, 4:42 AM

    No genius employees = mediocre outcome. Ilya left. Andrej left. Much of the talent at the top of the pyramid left. A company is literally its people.

  • by programjames on 8/1/24, 4:44 AM

    It's not that these AI companies need to raise $100 billion, it's that they can. They could go slower, working on custom hardware or better architectures to get 1000x cheaper training, but their models would only be as good as the ones from 1-2 years ago. Because foundation models are a winner-takes-all game, any individual company needs to spend as much as they can get, even if it means most investors will lose big. There are many companies working on hardware gains (e.g. Lightmatter, Groq), but they're not competing in the foundation model business.

  • by prng2021 on 8/1/24, 4:58 AM

    This is a nonsensical article to me.

    "I ultimately believe that OpenAI in its current form is untenable."

    Followed by a bunch of reasons why. Later they write:

    "What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail"

    What? Didn't they just explain 100 different reasons why they think OpenAI will fail? There was also this:

    "To be clear, this piece is focused on OpenAI rather than Generative AI as a technology — though I believe OpenAI's continued existence is necessary to keep companies interested/invested in the industry at all."

    To be clear? So they are trying to separate OpenAI from gen AI. Then they throw in a hyphen and say, oh, but without OpenAI, companies would stop spending time and money on gen AI. Ok, thank you for the... clarification.

    I stopped reading after that.

  • by tedsanders on 8/1/24, 6:17 AM

    Small correction:

    "GPT-4o Mini (OpenAI's "cheaper" model) already beaten in price by Anthropic's Claude Haiku model"

    GPT-4o Mini is presently cheaper than Claude 3 Haiku.

  • by JohnMakin on 8/1/24, 3:34 AM

    Will always give an upvote to Ed Zitron, who has a very anti-tech/anti-AI tilt but a pretty reasonable podcast called “Better Offline” that I’ve been quite enjoying.

  • by deepnotderp on 8/1/24, 4:38 PM

    GPT-4 could’ve written this article better - hey, there's a use case right there!

  • by pseudosavant on 8/1/24, 4:39 AM

    They say no company has ever raised the kind of money OpenAI has, like they are at some kind of ceiling. Microsoft’s market cap, which is only equity, not including bonds (only $42B), is over $3T.

  • by coding123 on 8/1/24, 4:57 AM

    This whole thing reads like: google doesn't have a business model because anyone can do search. Well, how many years later, yeah the writers saying Google was DOA are actually DOA.

  • by jsnell on 8/1/24, 10:13 AM

    > However, I do have the ability to read publicly-available data,

    Maybe, but based on the egregious errors the author has made in previous articles, they probably don't have the ability to understand or reason about any of the data they read. Also note that despite what's implied by this statement, most of this article is not sourced, it's just the opinions of the author who admits they have no qualifications.

    I didn't read the entire gish gallop, but spot-checked a few paragraphs here and there. It's just the kind of innumerate tripe that you should expect from Zitron based on their past performance.

    > Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.

    You can't reduce the cost of anything by more than 100%. At that point it's free.

    But let's consider the author's own numbers: $4B in revenue, $4B in serving costs, $3B in training costs, $1.5B in payroll. To break even at the current revenue, OpenAI needs to cut its serving and training costs by about 66% ($1.3B + $1.0B + $1.5B < $4B), not by "thousands of percent".
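
    A quick sanity check of that break-even arithmetic (a sketch using only the figures quoted in the comment above; the 66% cut is the commenter's estimate):

```python
# Break-even check using the comment's figures (billions of USD).
revenue = 4.0                        # annualized revenue
serving, training, payroll = 4.0, 3.0, 1.5

cut = 0.66                           # proposed cut to serving + training costs
costs_after_cut = (serving + training) * (1 - cut) + payroll

# (4 + 3) * 0.34 + 1.5 = 3.88, just under the $4B in revenue
print(costs_after_cut < revenue)     # True: a ~66% cut reaches break-even
```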

    > As a result, OpenAI's revenue might climb, but it's likely going to climb by reducing the cost of its services rather than its own operating costs.

    ... Sorry, what?

    Reducing operating costs does not increase revenue. And I don't know how the author thinks that reducing cost of services would not reduce operating costs.

    > OpenAI's only real options are to reduce costs or the price of its offerings. It has not succeeded in reducing costs so far, and reducing prices would only increase costs.

    Reducing prices does not increase costs.

    > I see no signs that the transformer-based architecture can do significantly more than it currently does.

    So, here's a prime example of the author basing the "analysis" on them personally "seeing no signs" of something they have no expertise to evaluate. There's no source for this claim, and it's pretty crucial for their conclusions that transformers have hit a wall.

    > While there may be ways to reduce the costs of transformer-based models, the level of cost-reduction would be unprecedented,

    But for a given quality of model, haven't the inference costs already gone down by like 90% this year?

    > particularly from companies like Google, which saw its emissions increase by 48% in the last five years thanks to AI.

    It should be pretty obvious to somebody who can read publicly available data that all of the increase over 5 years can't be attributed to AI.

  • by benreesman on 8/1/24, 4:14 AM

    Maybe GPT-5 or Claude 4 comes out tomorrow and pays down all the IOUs these people are operating on.

    As of today all of the evidence indicates the LLM paradigm is saturated.

    Why all the hand-wringing about hypotheticals? As of today this stuff is a failed experiment.

    Altman or Amodei coming up with the goods is a tail X-risk.

  • by JSDevOps on 8/1/24, 6:47 AM

    Maybe they pivot to blockchain…. lol

  • by xyz-t on 8/1/24, 3:34 AM

    [redacted]

  • by jumploops on 8/1/24, 5:07 AM

    > generative AI is a product with no mass-market utility

    Citation?

    LLMs bring the cost of writing software close to $0. We can finally live in a world of truly bespoke code.

    I, for one, welcome back the web of the early 2000s.

  • by cuddlyogre on 8/1/24, 3:32 AM

    I was assured that being skeptical about offloading the use of my brain to a third party that might not always be there was being a luddite.