by mfiguiere on 6/10/25, 5:41 PM with 495 comments
by 34679 on 6/10/25, 11:41 PM
First, I tried enabling o3 via OpenRouter since I have credits with them already. I was met with the following:
"OpenAI requires bringing your own API key to use o3 over the API. Set up here: https://openrouter.ai/settings/integrations"
So I decided I would buy some API credits with my OpenAI account. I ponied up $20 and started Aider with my new API key set and o3 as the model. I get the following after sending a request:
"litellm.NotFoundError: OpenAIException - Your organization must be verified to use the model `o3`. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate."
At that point, the frustration was beginning to creep in. I returned to OpenAI and clicked on "Verify Organization". It turns out, "Verify Organization" actually means "Verify Personal Identity With Third Party" because I was given the following:
"To verify this organization, you’ll need to complete an identity check using our partner Persona."
*Sigh.* I click "Start ID Check" and it opens a new tab for their "partner" Persona. The initial fine print says:
"By filling the checkbox below, you consent to Persona, OpenAI’s vendor, collecting, using, and utilizing its service providers to process your biometric information to verify your identity, identify fraud, and conduct quality assurance for Persona’s platform in accordance with its Privacy Policy and OpenAI’s privacy policy. Your biometric information will be stored for no more than 1 year."
OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"
by sschueller on 6/10/25, 7:43 PM
I don't see this happening with, for example, DeepSeek.
Is it possible they are saving on resources by having it answer that way?
by mythz on 6/11/25, 7:42 AM
I've always wondered how OpenAI could get away with o3's astronomical pricing. What does o3 do better than any other model to justify the premium?
by behnamoh on 6/10/25, 6:07 PM
I have a suspicion that's how they were able to get gpt-4-turbo so fast. In practice, I found it inferior to the original GPT-4 but the company probably benchmaxxed the hell out of the turbo and 4o versions so even though they were worse models, users found them more pleasing.
by lxgr on 6/10/25, 6:33 PM
In my experience, o4-mini and o4-mini-high are far behind o3 in utility, but since I’m rate-limited for the latter, I end up primarily using the former, which has kind of reinforced the perception that OpenAI’s thinking models are behind the competition altogether.
by coffeecoders on 6/10/25, 6:34 PM
Just yesterday, they reported an annualized revenue run rate of $10B. Their last funding round in March valued them at $300B. Despite losing $5B last year, they are growing really fast, with over 500M active users and a valuation of roughly 30x revenue.
It reminds me a lot of Uber in its earlier years—fast growth, heavy investment, but edging closer to profitability.
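If that "30x revenue" refers to the valuation multiple, the arithmetic from the figures quoted above checks out:

    # Figures quoted above: ~$10B annualized run rate, ~$300B March valuation.
    run_rate_b = 10
    valuation_b = 300
    print(f"valuation / run rate = {valuation_b / run_rate_b:.0f}x revenue")  # 30x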
by blueblisters on 6/10/25, 6:49 PM
They’re not letting the competition breathe
by ucha on 6/11/25, 6:11 PM
On Twitter, some people say that some models perform better at night, when there is less demand, which allows them to serve a non-quantized model.
Since the models are only available through the API and there is no way to test which version of the model is being served, it's hard to know what we're buying...
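There's no official way to see which build is behind the endpoint, but one rough home-grown check (purely illustrative, not something the comment describes; assumes the openai Python SDK, an API key, and a made-up probe prompt) is to replay a fixed prompt at different times of day and log the answers for later comparison:

    # Illustrative drift probe: send the same prompt periodically and log results.
    import json, time
    from datetime import datetime, timezone
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "List the prime numbers between 90 and 130."  # fixed probe prompt

    def probe(model: str = "o3") -> None:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "answer": resp.choices[0].message.content,
        }
        with open("probe_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        while True:
            probe()
            time.sleep(6 * 3600)  # re-run every six hours, day vs. night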
by MallocVoidstar on 6/10/25, 6:00 PM
> We’ll post to @openaidevs once the new pricing is in full effect. In $10… 9… 8…
There is also speculation that they are only dropping the input price, not the output price (which includes the reasoning tokens).
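The distinction matters because reasoning tokens are billed at the output rate. A rough cost sketch, using the new prices listed downthread ($2 / 1M input, $8 / 1M output) and invented token counts:

    # Rough cost model; token counts are made up for illustration.
    INPUT_PER_M = 2.00    # $ per 1M input tokens (new o3 price)
    OUTPUT_PER_M = 8.00   # $ per 1M output tokens (includes reasoning tokens)

    def request_cost(input_tokens: int, visible_output: int, reasoning: int) -> float:
        # Reasoning tokens are billed as output even though you never see them.
        billable_output = visible_output + reasoning
        return (input_tokens / 1e6) * INPUT_PER_M + (billable_output / 1e6) * OUTPUT_PER_M

    # Example: a modest prompt that triggers a long hidden reasoning trace.
    print(f"${request_cost(2_000, 500, 20_000):.3f}")  # the output rate dominates the bill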
by minimaxir on 6/10/25, 5:45 PM
I wonder if "we quantized it lol" would qualify as false advertising for modern LLMs.
by biophysboy on 6/10/25, 6:28 PM
That said, I'm absolutely willing to hear people out on "value-adds" I'm missing out on; I'm not a knee-jerk hater. (For context, I work with large, complex & private databases/platforms, so it's not really possible for me to do anything but ask for scripting suggestions.)
Also, I am 100% expecting a sad day when I'll be forced to subscribe, unless I want to read dick pill ads shoehorned into the answers (looking at you, YouTube). I do worry about getting dependent on this tool and watching it become enshittified.
by godelski on 6/10/25, 8:57 PM
                   Yesterday              Today
                   -------------          -------------
    Input:         $10.00 / 1M tokens     $2.00 / 1M tokens
    Cached input:  $2.50 / 1M tokens      $0.50 / 1M tokens
    Output:        $40.00 / 1M tokens     $8.00 / 1M tokens
https://archive.is/20250610154009/https://openai.com/api/pri...
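For what it's worth, the cut is the same across all three line items; a quick check using only the prices from the table above:

    # Old vs. new o3 prices from the table above, $ per 1M tokens.
    old = {"input": 10.00, "cached input": 2.50, "output": 40.00}
    new = {"input": 2.00, "cached input": 0.50, "output": 8.00}

    for item in old:
        drop = 100 * (1 - new[item] / old[item])
        print(f"{item}: {drop:.0f}% cheaper")  # 80% across the board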