by meetpateltech on 5/21/25, 5:53 PM with 110 comments
by pjmlp on 5/21/25, 7:45 PM
This is exactly what I see coming: somewhere between the marketing and the reality of what the tool can actually deliver, we will eventually reach the next stage of compiler evolution, going directly from AI tools to applications.
We are living through a development jump like when Assembly developers got to witness the adoption of FORTRAN.
Language flamewars are going to be a thing of the past, replaced by model wars.
It might take a few cycles, but it will come nonetheless.
by aaronharnly on 5/21/25, 8:41 PM
- AI Studio: "the fastest place to start building with the Gemini API"
- Firebase Studio: "Prototype, build, deploy, and run full-stack, AI apps quickly"
by benbreen on 5/21/25, 8:17 PM
As an experiment I just asked it to "recreate the early RPG game Pedit5 (https://en.wikipedia.org/wiki/Pedit5), but make it better, with a 1970s terminal aesthetic and use Imagen to dynamically generate relevant game artwork" and it did in fact make a playable, rogue-type RPG, but it has been stuck on "loading art" for the past minute as I try to do battle with a giant bat.
This kind of thing is going to be interesting for teaching. It will be a whole new category of assignment - "design a playable, interactive simulation of the 17th century spice trade, and explain your design choices in detail. Cite 6 relevant secondary sources" and that sort of thing. Ethan Mollick has been doing these types of experiments with LLMs for some time now and I think it's an underrated aspect of what they can be used for. I.e., no one is going to want to actually pay for or play a production version of my Gemini-made copy of Pedit5, but it opens up a new modality for student assignments, prototyping, and learning.
Doesn't do anything for the problem of AI-assisted cheating, which is still kind of a disaster for educators, but the possibilities for genuinely new types of assignments are at least now starting to come into focus.
by koakuma-chan on 5/21/25, 8:08 PM
Analyzing TypeScript Errors
I'm currently focused on the actions/user.ts file, where I've identified a few issues. Specifically, I need to address a "Cannot find name" error related to UpdateUserDetailsFormState, and confirm that the intended target is UpdateUserDetailsFormInput. There might be some other discrepancies in the file that I need to resolve.
Debugging the Import
I've been trying to pinpoint the Cannot find name 'UpdateUserDetailsFormState' error. The type is definitely defined and imported, but it's not being recognized in the updateUserDetails function's arguments. I'm suspecting a scoping issue with the import statement within actions/user.ts. I also need to verify that UpdateUserDetailsFormState is correctly defined with the fieldValues property as optional as per the schema.
by gexla on 5/21/25, 10:21 PM
by jasonjmcghee on 5/21/25, 9:07 PM
"Te harsh jolt of the cryopod cycling down rips you"
"ou carefully swing your legs out"
I find it really interesting that it's like 99% there, and the thing runs and executes, yet the copy has typos.
by vunderba on 5/21/25, 8:00 PM
by andrewstuart on 5/22/25, 2:06 AM
All the copying and pasting is killing me.
by UncleOxidant on 5/22/25, 12:32 AM
by lastdong on 5/25/25, 3:15 PM
by ed on 5/21/25, 10:47 PM
But be sure to connect Studio to Google Drive, or else you will lose all your progress.
by nprateem on 5/22/25, 3:47 AM
Sorry guys, yes, Claude is the best model, but your lack of support for structured responses left me no choice.
I had been using Claude in my SaaS, but the API was so unreliable that I'd frequently get "overloaded" responses.
So then I put in fallbacks to other providers. Gemini Flash was pretty good for my needs (and significantly cheaper), but it failed to follow the XML schema in the prompt that Claude could follow. Instead, I could just give it a pydantic schema to constrain the output.
The trouble is that the Anthropic API just doesn't support that. I tried using litellm to paper over the cracks, but no joy. However, OpenAI does support pydantic.
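For anyone following along, the pattern looks roughly like this with the OpenAI Python SDK; the Invoice schema and the model name are made up for illustration, and Gemini's own SDK accepts a similar JSON response-schema constraint:

    # Illustrative sketch only: a pydantic model used as the output contract.
    from pydantic import BaseModel
    from openai import OpenAI

    class Invoice(BaseModel):
        customer: str
        total: float
        line_items: list[str]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[{"role": "user", "content": "Extract the invoice from: ..."}],
        response_format=Invoice,  # constrain the response to this schema
    )

    invoice = completion.choices[0].message.parsed  # an Invoice instance, not raw JSON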
So I was left with literally needing twice as many prompts to support Gemini and Anthropic, or dropping Anthropic and using Gemini with OpenAI as a fallback.
It's a no-brainer.
So you guys need to pull your fingers out and get with the programme. Claude may be good, but being more expensive and not compatible with other APIs like this is costing you customers.
Shame, but so long for now...
by iandanforth on 5/22/25, 1:03 PM
by smusamashah on 5/21/25, 10:55 PM
by bionhoward on 5/22/25, 2:13 AM
What kind of smooth brain hears, “they train AI on your ideas and code, humans read your ideas and code, and you agree not to compete back against this thing a multi trillion dollar company just said can do everything, which competes with you,” and says yes? Oh, the smooth brain who doesn’t even realize, because it’s all buried in “additional” legal CYA documents.
ChatGPT still dominates the reach test since I can at least opt out of model training without losing chat logs, even though I have to agree not to compete with the thing that competes with me. Google is like a corporate version of a gross nerd you tolerate because they’re smart, even though they stalk you weirdly.
What a disgrace. We all ought to be sick about creating a legalese-infused Black Mirror dystopia, racing to replace our own minds with the latest slop-o-matic 9000 to benefit the overlords for the brief instant they count their money while the whole ecosphere is replaced by data centers.
It’s like somehow the most magical tech in history (LLMs) comes along and gets completely shafted by elite grifter-tier scumbaggery. Chatbot speech is more protected than human speech these days, Google doesn’t give a crap about privacy, it’s all about the slimy hidden opt-out of getting everything fed into the slop machine, and breaking everything if you do opt out.
“Gee, should the app break if the user doesn’t want us to read their shit?” “Yeah, that sounds good, ain’t nobody got time to categorize data into two whole buckets, ship it!”
“How about we make a free better app that doesn’t even have the option of us not reading their shit?” “Oh, yeah, that’ll really help the KPIs!”
by dangoodmanUT on 5/21/25, 10:51 PM