by gregorywegory on 6/2/25, 2:24 PM with 530 comments
by rienbdj on 6/3/25, 6:30 AM
Look at this one:
> Ask Claude to remove the "backup" encryption key. Clearly it is still important to security-review Claude's code!
> prompt: I noticed you are storing a "backup" of the encryption key as `encryptionKeyJwk`. Doesn't this backup defeat the end-to-end encryption, because the key is available in the grant record without needing any token to unwrap it?
I don’t think a non-expert would even know what this means, let alone spot the issue and direct the model to fix it.
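For readers wondering what the quoted prompt is about: in an end-to-end encrypted grant store, possession of the token should be the only way to recover the grant's key, so storing the key itself in the record defeats the scheme. A toy sketch of the difference (all names besides `encryptionKeyJwk` are hypothetical, and the XOR "wrapping" is purely illustrative, not real cryptography; real code would use WebCrypto key wrapping):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Toy illustration only: real code would use AES-KW / crypto.subtle, not XOR.
function xorWithTokenKey(keyBytes: Buffer, token: string): Buffer {
  const pad = createHash("sha256").update(token).digest(); // 32 bytes
  return Buffer.from(keyBytes.map((b, i) => b ^ pad[i % pad.length]));
}

// FLAWED shape: the raw key sits in the record; no token is needed to read it.
interface FlawedGrantRecord {
  encryptionKeyJwk: Buffer;
}

// FIXED shape: only a wrapped key is stored; unwrapping requires the token.
interface GrantRecord {
  wrappedKey: Buffer;
}

function storeGrant(key: Buffer, token: string): GrantRecord {
  return { wrappedKey: xorWithTokenKey(key, token) };
}

function unwrapKey(record: GrantRecord, token: string): Buffer {
  return xorWithTokenKey(record.wrappedKey, token);
}

const grantKey = randomBytes(32);
const record = storeGrant(grantKey, "bearer-token");
console.log(record.wrappedKey.equals(grantKey)); // false: raw key never stored
console.log(unwrapKey(record, "bearer-token").equals(grantKey)); // true
```

The reviewer's prompt is pointing at exactly the `FlawedGrantRecord` shape: the key was reachable from the grant record without any token.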
by paxys on 6/2/25, 3:04 PM
The million dollar (perhaps literally) question is – could @kentonv have written this library quicker by himself without any AI help?
by c-linkage on 6/2/25, 3:38 PM
I have tried to develop some code (typically non-web-based code) with LLMs but never seem to get very far before the hallucinations kick in and drive me mad. Given how many other people claim to have success, I figure maybe I'm just not writing the prompts correctly.
Getting a chance to see the prompts shows I'm not actually that far off.
Perhaps the LLMs don't work great for me because the problems I'm working on are somewhat obscure (currently reverse engineering SAP ABAP code to make a .NET implementation on data hosted in Snowflake) and often quite novel (I'm sure there is an OpenAuth implementation on github somewhere from which the LLM can crib).
by mtlynch on 6/2/25, 3:07 PM
I'm confused by what "I (@kentonv)" means here, because kentonv is a different user.[0] Are you saying this is your alt? Or is this a typo/misunderstanding?
Edit: Figured out that most of your post is quoting the README. Consider using > and * characters to clarify.
by jes5199 on 6/2/25, 3:09 PM
1. I am much more productive/effective
2. It’s way more cognitively demanding than writing code the old-fashioned way
3. Even over this short timespan, the tools have improved significantly, amplifying both of the points above
by infinitebattery on 6/2/25, 3:11 PM
===
"Fix Claude's bug manually. Claude had a bug in the previous commit. I prompted it multiple times to fix the bug but it kept doing the wrong thing.
So this change is manually written by a human.
I also extended the README to discuss the OAuth 2.1 spec problem."
===
This is super relatable to my experience trying to use these AI tools. They can get halfway there and then struggle immensely.
by jauntywundrkind on 6/2/25, 3:35 PM
Direct link to earliest page of history: https://github.com/cloudflare/workers-oauth-provider/commits...
A lot of very explicit & clear prompting, with direct directions to go. Some examples on the first page: https://github.com/cloudflare/workers-oauth-provider/commit/... https://github.com/cloudflare/workers-oauth-provider/commit/...
by aeneas_ory on 6/3/25, 6:26 AM
In my view this is an antipattern of AI usage and "roll your own crypto" reborn.
by simonw on 6/2/25, 6:34 PM
by declan_roberts on 6/2/25, 3:24 PM
by _tqr3 on 6/2/25, 3:25 PM
They’ll probably get better, but for now I can safely say I’ve spent more time building and tweaking prompts than getting helpful results.
by vaidhy on 6/3/25, 2:53 PM
When I am writing the code, my mind tracks what I have done and the new pieces flow. When I am reading code written by someone else, there is no flow.. I have to track individual pieces and go back and forth on what was done before.
I can see myself using LLMs for short snippets rather than start something top down.
by qsort on 6/2/25, 3:07 PM
by Luker88 on 6/3/25, 6:59 AM
Still, legal question where I'd like to be wrong: AFAIK (and IANAL) if I use AI to generate images, I can't attach copyright to it.
But the code here is clearly copyrighted to you.
Is that possible because you manually modify the code?
How does it work in examples like this one where you try to have close to all code generated by AI?
by kentonv on 6/2/25, 3:51 PM
I'm also the lead engineer and initial creator of the Cloudflare Workers platform.
--------------
Plug: This library is used as part of the Workers MCP framework. MCP is a protocol that allows you to make APIs available directly to AI agents, so that you can ask the AI to do stuff and it'll call the APIs. If you want to build a remote MCP server, Workers is a great way to do it! See:
https://blog.cloudflare.com/remote-model-context-protocol-se...
https://developers.cloudflare.com/agents/guides/remote-mcp-s...
--------------
OK, personal commentary.
As mentioned in the readme, I was a huge AI skeptic until this project. This changed my mind.
I had also long been rather afraid of the coming future where I mostly review AI-written code. As the lead engineer on Cloudflare Workers since its inception, I do a LOT of code reviews of regular old human-generated code, and it's a slog. Writing code has always been the fun part of the job for me, and so delegating that to AI did not sound like what I wanted.
But after actually trying it, I find it's quite different from reviewing human code. The biggest difference is the feedback loop is much shorter. I prompt the AI and it produces a result within seconds.
My experience is that this actually makes it feel more like I am authoring the code. It feels similarly fun to writing code by hand, except that the AI is exceptionally good at boilerplate and test-writing, which are exactly the parts I find boring. So... I actually like it.
With that said, there's definitely limits on what it can do. This OAuth library was a pretty perfect use case because it's a well-known standard implemented in a well-known language on a well-known platform, so I could pretty much just give it an API spec and it could do what a generative AI does: generate. On the other hand, I've so far found that AI is not very good at refactoring complex code. And a lot of my work on the Workers Runtime ends up being refactoring: any new feature requires a bunch of upfront refactoring to prepare the right abstractions. So I am still writing a lot of code by hand.
I do have to say though: The LLM understands code. I can't deny it. It is not a "stochastic parrot", it is not just repeating things it has seen elsewhere. It looks at the code, understands what it means, explains it to me mostly correctly, and then applies my directions to change it.
by tveita on 6/2/25, 10:32 PM
https://claude-workerd-transcript.pages.dev/oauth-provider-t... ("Total cost: $6.45")!
https://github.com/cloudflare/workers-oauth-provider/commit/...
https://github.com/cloudflare/workers-oauth-provider/commit/...
The first transcript includes the cost, would be interesting to know the ballpark of total Claude spend on this library so far.
--
This is opportune for me, as I've been looking for a description of AI workflows from people of some presumed competency. You'd think there would be many, but it's hard to find anything reliable amidst all the hype. Is anyone live coding anything but todo lists?
antirez: https://antirez.com/news/144#:~:text=Yesterday%20I%20needed%...
by multimoon on 6/2/25, 3:38 PM
Like anything else it will be a tool to speed up a task, but never do the task on its own without supervision or someone who can already do the task themselves, since at a minimum they have to already understand how the service is to work. You might be able to get by to make things like a basic website, but tools have existed to autogenerate stuff like that for a decade.
by mmaunder on 6/2/25, 8:39 PM
by weinzierl on 6/2/25, 6:26 PM
"the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked."
These two views are by no means mutually exclusive. I find LLMs extremely useful and still believe they are glorified Markov generators.
The takeaway should be that this is all you need, and that humans are likely nothing more than that either.
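For contrast, here is what a literal "glorified Markov generator" looks like: a bigram model whose next word depends only on the current word, and which can only ever replay transitions it has already seen. A TypeScript sketch (deterministic by always picking the most frequent successor):

```typescript
// Bigram Markov text generator: the next word depends only on the current word.
function buildBigrams(corpus: string): Map<string, Map<string, number>> {
  const words = corpus.split(/\s+/).filter(Boolean);
  const table = new Map<string, Map<string, number>>();
  for (let i = 0; i + 1 < words.length; i++) {
    const succ = table.get(words[i]) ?? new Map<string, number>();
    succ.set(words[i + 1], (succ.get(words[i + 1]) ?? 0) + 1);
    table.set(words[i], succ);
  }
  return table;
}

// Deterministic "generation": always follow the most frequent successor seen.
function generate(table: Map<string, Map<string, number>>, start: string, n: number): string[] {
  const out = [start];
  let cur = start;
  for (let i = 0; i < n; i++) {
    const succ = table.get(cur);
    if (!succ) break; // never-seen word: a Markov chain has nothing to say
    cur = [...succ.entries()].sort((a, b) => b[1] - a[1])[0][0];
    out.push(cur);
  }
  return out;
}

const table = buildBigrams("the cat sat on the mat the cat ran");
console.log(generate(table, "the", 3).join(" ")); // "the cat sat on"
```

Whatever LLMs are doing when they apply an RFC to unseen code, it is observably more than this.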
by hattmall on 6/2/25, 3:04 PM
As an edit, after reading some of the prompts, what is the likelihood that a non-expert could even come up with those prompts?
The really really interesting thing would be if an AI could actually generate the prompts.
by freedomben on 6/2/25, 5:04 PM
> Cloudlflare builds OAuth with Claude and publishes all the prompts
by gcr on 6/4/25, 4:29 AM
The CVE is uncharacteristically scornful: https://nvd.nist.gov/vuln/detail/cve-2025-4143
I’m glad this was patched, but it is a bit worrying for something “not vibe coded” tbh
by eGQjxkKF6fif on 6/2/25, 11:23 PM
Congratulations Cloudflare, and thank you for showing that a pioneer, and leader in the internet security space can use the new methods of 'vibe coding' to build something that connects people in amazing ways, and that you can use these prompts, code, etc to help teach others to seek further in their exploration of programming developments.
Vibe programming has allowed me to break through depression and edit and code the way I know how to; it is helpful and very meaningful to me. I hope that it can be meaningful for others.
I envision current and future generations of people utilizing these things; but we need to accept that this way of engineering, developing, and creating is paving a new way for people.
Not a single comment in here is about people who are traumatized, broken, depressed, or who have a legitimate reason for vibe coding.
These things assist us as human beings; we need to be mindful that it isn't always about us. How can we utilize these things for the betterment of the things we are passionate about? I humbly look forward to seeing how projects in the open source space can showcase not only developmental talent, but the ability to reason, use logic, and apply project-building thoughtfulness with these tools.
Good job, Cloudflare.
by lapcat on 6/3/25, 1:03 PM
It feels infinitely worse than mentoring an inexperienced engineer, because Claude is inhuman. There's no personal relationship, it doesn't make human mistakes or achieve human successes, and if Claude happens to get better in the future, that's not because you personally taught it anything. And you certainly can't become friends.
They want to turn artists and craftsmen into assembly line supervisors.
by zeroq on 6/3/25, 3:53 AM
My latest try with Gemini went like this:
- Write me a simple todo app on CloudFlare with auth0 authentication.
- Let's proceed with a simple todo app on CloudFlare. We start by importing the @auth0-cloudflare and...
- Does that @auth0-cloudflare actually exists?
- Oh, it doesn't. I can give you a walkthrough on how to set up an account on auth0. Would you like me to?
- Yes, please.
- Here. I'm going to write the walkthrough in a document... (proceed to create an empty document)
- That seems to be an empty document.
- Oh, my bad. I'll produce it once more. (proceed to create another empty document)
- Seems like your md parsing library is broken, can you write it in chat instead?
- Yes... (Your Gemini trial has expired. Would you like to pay $100 to continue?)
My idea was to try the new model with a low-hanging fruit - as kentonv mentioned, it's a very basic task that has been done thousands of times on the internet with extremely well-documented APIs (officially and on reddit/stackoverflow/etc.). Sure, it was a short hike before my trial expired, and kentonv himself admitted it took him a couple of days to put it together, but... holy cow.
by thih9 on 6/2/25, 3:12 PM
Which Claude plan did you use? Was it enough or did you feel limited by the quotas?
by alanfranz on 6/2/25, 3:38 PM
Question is: will this work for non-greenfield projects as well? Usually 95% of work in a lifetime is not greenfield.
Or will we throw away more and more code as we go, since AI will rewrite it, and we’ll probably introduce subtle bugs as we go?
by ookblah on 6/2/25, 4:20 PM
I do agree if you have no idea what you are doing or are still learning it could be a detriment, but like anything it's just a tool. I feel for junior devs and the future. Lazy coders get lazier, those who utilize them to the fullest extent get even better, just like with any tech.
by skybrian on 6/2/25, 3:05 PM
by wooque on 6/2/25, 4:36 PM
I'm surprised it took them more than 2 days to do that with AI.
by dang on 6/3/25, 3:39 AM
by abroadwin on 6/2/25, 3:01 PM
by EtienneK on 6/2/25, 6:05 PM
What is the "provider" side? OAuth 2.1 has no definition of a "provider". Is this for Clients? Resource Servers? Authorization Server?
Quickly skimming the rest of the README it seems this is for creating a mix of a Client and a Resource Server, but I could be mistaken.
> To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs
Experience with the RFCs, yet apparently not enough to correctly name what this implements.
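On the naming point: OAuth 2.1, like RFC 6749 before it, defines four roles (resource owner, client, authorization server, resource server), and "provider" is informal shorthand, usually for the authorization server. A sketch of which conventional endpoints belong to which role (the paths are common conventions, not mandated by the spec):

```typescript
type OAuthRole = "authorization-server" | "resource-server" | "client";

// Map conventional endpoint paths to the OAuth role that serves them.
function roleForEndpoint(path: string): OAuthRole {
  switch (path) {
    case "/authorize": // user-facing consent + authorization code issuance
    case "/token":     // code / refresh-token exchange
    case "/register":  // dynamic client registration (RFC 7591)
      return "authorization-server";
    case "/callback":  // redirect URI that receives the authorization code
      return "client";
    default:           // everything else is API surface guarded by access tokens
      return "resource-server";
  }
}

console.log(roleForEndpoint("/token"));    // "authorization-server"
console.log(roleForEndpoint("/callback")); // "client"
console.log(roleForEndpoint("/api/data")); // "resource-server"
```

By this breakdown, a Workers library that issues tokens and also fronts token-protected APIs is acting as an authorization server plus resource server, which may explain the README's looser "provider" wording.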
by cyberax on 6/3/25, 5:21 AM
And OAuth is not particularly hard to implement, I did that a bunch of times (for server and the client side). It's well-specified and so it fits well for LLMs.
So it's probably more like 2x acceleration for such code? Not bad at all!
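To the point about OAuth being well-specified: the core of the server side, exchanging an authorization code for a token, is a small state machine, which is part of why it suits an LLM. A toy in-memory sketch (omitting PKCE, client authentication, redirect URI checks, expiry, and everything else a real implementation needs):

```typescript
// Toy authorization-code grant: issue a one-time code, exchange it for a token.
const codes = new Map<string, { clientId: string; scope: string }>();
const tokens = new Map<string, { clientId: string; scope: string }>();

function issueCode(clientId: string, scope: string): string {
  const code = Math.random().toString(36).slice(2);
  codes.set(code, { clientId, scope });
  return code;
}

// The /token endpoint: the code must exist, match the client, and be single-use.
function exchangeCode(code: string, clientId: string): { accessToken: string } | null {
  const grant = codes.get(code);
  if (!grant || grant.clientId !== clientId) return null;
  codes.delete(code); // single use, per spec
  const accessToken = Math.random().toString(36).slice(2);
  tokens.set(accessToken, grant);
  return { accessToken };
}

const code = issueCode("my-client", "read");
console.log(exchangeCode(code, "other-client")); // null: wrong client
console.log(exchangeCode(code, "my-client") !== null); // true
console.log(exchangeCode(code, "my-client")); // null: codes are single-use
```

The hard parts of a production library are exactly what this sketch leaves out, which is why the human security review still mattered.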
by globular-toast on 6/2/25, 5:54 PM
by jplehmann on 6/4/25, 10:35 PM
by scherlock on 6/3/25, 2:31 AM
by bsder on 6/2/25, 7:43 PM
So, for those of us who are not OAuth experts, don't have a team of security engineers on call, and are likely to fall into all the security and compliance traps, how does this help?
I don't need AI to write my shitty code. I need AI to review and correct my shitty code.
by IncreasePosts on 6/2/25, 5:04 PM
by varispeed on 6/2/25, 6:48 PM
by _pdp_ on 6/2/25, 11:27 PM
by DJBunnies on 6/2/25, 3:46 PM
When Claude can do something new, then I think it will be impressive.
Otherwise it’s just piecing together existing examples.
by alienbaby on 6/3/25, 6:40 PM
Except, this time it wasn't. It got most things right first time, and fixed things I asked it to.
I was pleasantly surprised.
by topspin on 6/3/25, 2:42 AM
This time Claude fixed the problem, but:
- It also re-ordered some declarations, even though I told it not to. AFAICT they aren't changed, just reordered, and it also added some doc comments.
- It fixed an unrelated bug, which is that `getClient()` was marked `private` in `OAuthProvider` but was being called from inside `OAuthHelpers`. I hadn't noticed this before, but it was indeed a bug.
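The `getClient()` bug quoted above is a plain TypeScript access-modifier issue; a minimal reconstruction (the class and method names are taken from the comment, everything else is hypothetical):

```typescript
interface ClientInfo { clientId: string }

class OAuthProvider {
  private clients = new Map<string, ClientInfo>([["abc", { clientId: "abc" }]]);

  // This was reportedly declared `private`, which makes the call in
  // OAuthHelpers below a compile error (TS2341); the fix was to loosen it.
  getClient(clientId: string): ClientInfo | undefined {
    return this.clients.get(clientId);
  }
}

class OAuthHelpers {
  constructor(private provider: OAuthProvider) {}

  lookupClient(clientId: string): ClientInfo | undefined {
    // With `private getClient`, tsc would reject this cross-class call.
    return this.provider.getClient(clientId);
  }
}

const helpers = new OAuthHelpers(new OAuthProvider());
console.log(helpers.lookupClient("abc")?.clientId); // "abc"
```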
I frequently can't get LLMs to limit themselves to what has been prompted; instead they run around and "best practice" everything, "fixing" unrelated issues, spewing commentary everywhere, and creating huge, unnecessary diffs.
by ZiiS on 6/2/25, 3:40 PM
by kiitos on 6/4/25, 3:02 AM
It seems that way to me...
Certainly if I were on a hiring panel for anyone who had this kind of stuff in their Google search results, it would be a hard-no from me -- but what do i know?
by catigula on 6/2/25, 6:18 PM
It feels probably similarly from going from dumb or semi-dumb text editor to an IDE.
by zackify on 6/3/25, 2:47 AM
by ab_testing on 6/3/25, 5:17 AM
by paulddraper on 6/3/25, 4:28 PM
by vjerancrnjak on 6/2/25, 4:45 PM
Wonder how well incremental editing works with such a big file. I keep pushing for 1 file implementations, yet people split it up into bazillion files because it works better with AI.
by mehdibl on 6/2/25, 5:51 PM
But Claude doesn't yet allow you to add apps in their backend; integration is mainly a closed beta.
So how can you configure an app to correctly leverage OAuth and have your own app secret / client ID?
by teaearlgraycold on 6/2/25, 3:38 PM
This sounds like coding but slower
by jonplackett on 6/2/25, 3:37 PM
by Bluestein on 6/3/25, 4:38 PM
> "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"
> "haha gpus go brrr"
by jbeus on 6/3/25, 1:05 AM
by rienbdj on 6/2/25, 8:05 PM
by throwaway314155 on 6/2/25, 6:41 PM
by hamdouni on 6/6/25, 5:49 AM
by jwally on 6/3/25, 11:26 AM
by caycep on 6/2/25, 8:00 PM
Also, maybe the humbling question is, maybe we humans aren't so exceptional if 90% of the sum of human knowledge can be predicted by next-word-prediction
by horacemorace on 6/2/25, 3:30 PM
by unshavedyak on 6/2/25, 2:59 PM
I don't actually enjoy it, i generally find it difficult to use as i have more trouble explaining what i want than actually just doing it. However it seems clear that this is not going away and to some degree it's "the future". I suspect it's better to learn the new tools of my craft than to be caught unaware.
With that said i still think we're in the infancy of actual tooling around this stuff though. I'm always interested to see novel UXs on this front.
by helsinki on 6/3/25, 4:28 AM
by stego-tech on 6/2/25, 3:27 PM
On the other hand, where I remain a skeptic is this constant banging-on that somehow this will translate into entirely new things - research, materials science, economies, inventions, etc - because that requires learning “in real time” from information sources you’re literally generating in that moment, not decades of Stack Overflow responses without context. That has been bandied about for years, with no evidence to show for it beyond specifically cherry-picked examples, often from highly-controlled environments.
I never doubted that, with competent engineers, these tools could be used to generate “new” code from past datasets. What I continue to doubt is the utility of these tools given their immense costs, both environmentally and socially.
by okthrowman283 on 6/3/25, 2:42 AM
by blibble on 6/2/25, 10:43 PM
so he's been convinced by it shitting out yet another javascript oauth library?
this experiment proves nothing re: novelty
by tonyhart7 on 6/2/25, 2:58 PM
Other models feel like shit to use, but Claude is good.
by yapyap on 6/3/25, 6:28 AM
by csmpltn on 6/2/25, 9:45 PM
Why are you so surprised an LLM could regurgitate one back? I wouldn’t celebrate this example as a noteworthy achievement…
by bigcat12345678 on 6/3/25, 7:21 AM
by arrty88 on 6/3/25, 4:07 PM
by apwell23 on 6/2/25, 4:35 PM
by Phiality on 6/3/25, 7:42 PM
by pier25 on 6/2/25, 7:51 PM
by dboreham on 6/3/25, 2:42 AM
by keeda on 6/2/25, 9:55 PM
It’s now a year+ old and models have advanced radically, but most of the key points still hold, which I've summarized here. The post has way more details if you need. Many of these points have also been echoed by others like @simonw.
Background:
* The main project is specialized and "researchy" enough that there is no direct reference on the Internet. The core idea has been explored in academic literature, a couple of relevant proprietary products exist, but nobody is doing it the way I am.
* It has the advantage of being greenfield, but the drawback of being highly “prototype-y”, so some gnarly, hacky code and a ton of exploratory / one-off programs.
* Caveat: my usage of AI is actually very limited compared to power users (not even on agents yet!), and the true potential is likely far greater than what I've described.
Highlights:
* At least 30% and maybe > 50% of the code is AI-generated. Not only are autocompletes frequent, I do a lot of "chat-oriented" and interactive "pair programming", so precise attribution is hard. It has written large, decently complicated chunks of code.
* It does boilerplate extremely easily, but it also handles novel use-cases very well.
* It can refactor existing code decently well, but probably because I've worked to keep my code highly modular and functional, which greatly limits what needs to be in the context (which I often manage manually). Errors for even pretty complicated requests are rare, especially with newer models.
Thoughts:
* AI has let me be productive – and even innovate! – despite having limited prior background in the domains involved. The vast majority of all innovation comes from combining and applying well-known concepts in new ways. My workflow is basically a "try an approach -> analyze results -> synthesize new approach" loop, which generates a lot of such unique combinations, and the AI handles those just fine. As @kentonv says in the comments, there is no doubt in my mind that these models “understand” code, as opposed to being stochastic parrots. Arguments about what constitutes "reasoning" are essentially philosophical at this point.
* While the technical ideas so far have come from me, AI now shows the potential to be inventive by itself. In a recent conversation ChatGPT reasoned out a novel algorithm and code for an atypical, vaguely-defined problem. (I could find no reference to either the problem or the solution online.) Unfortunately, it didn't work too well :-) I suspect, however, that if I go full agentic by giving it full access to the underlying data and letting it iterate, it might actually refine its idea until it works. The main hurdles right now are logistics and cost.
* It took me months to become productive with AI, having to find a workflow AND code structure that works well for me. I don’t think enough people have put in the effort to find out what works for them, and so you get these polarized discussions online. I implore everyone, find a sufficiently interesting personal project and spend a few weekends coding with AI. You owe it to yourself, because 1) it's free and 2)...
* Jobs are absolutely going to be impacted. Mostly entry-level and junior ones, but maybe even mid-level ones. Without AI, I would have needed a team of 3+ (including a domain expert) to do this work in the same time. All knowledge jobs rely on a mountain of donkey work, and the donkey is going the way of the dodo. The future will require people who uplevel themselves to the state of the art and push the envelope using these tools.
* How we create AI-capable senior professionals without junior apprentices is going to be a critical question for many industries. My preliminary take is that motivated apprentices should voluntarily eschew all AI use until they achieve a reasonable level of proficiency.
by revskill on 6/2/25, 3:56 PM
by gregorywegory on 6/2/25, 2:24 PM
"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"
"haha gpus go brrr"
In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.
To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.
Again, please check out the commit history -- especially early commits -- to understand how this went.
by JackSlateur on 6/3/25, 6:44 PM
It's like cooking with a toddler
The end result has a lower quality than your own potential, it takes more time to produce, and it is harder too, because you always need to supervise and correct what's done.
by chrisweekly on 6/2/25, 2:58 PM
by Squeeeez on 6/2/25, 10:51 PM
by sceptic123 on 6/2/25, 3:38 PM