by kaycebasques on 6/5/25, 4:10 PM with 126 comments
by CuriouslyC on 6/5/25, 6:35 PM
"Being forced to sit and argue with a robot while it struggles and fails to produce a working output, while you have to rewrite the code at the end anyway, is incredibly demoralizing. This is the kind of activity that activates every single major cause of burnout at once.
But, at least in that scenario, the thing ultimately doesn't work, so there's a hope that after a very stressful six-month pilot program, you can go to management with a pile of meticulously collected evidence, and shut the whole thing down."
The counterpoint to this is that _SOME_ people are able to achieve force multiplication (even at the highest levels of skill, it's not just a juniors-only phenomenon), and _THAT_ is what is driving management adoption mandates. They see that 2-4x increases in productivity are possible under the correct circumstances, and they're basically passing down mandates for the rank and file to get with the program and figure out how to reproduce those results, or find another job.
by the_mitsuhiko on 6/5/25, 4:51 PM
While I understand the fear, I don't really share it. And if I were to go to the root of it, I think what I most take issue with is this:
> My experiences of genAI are all extremely bad, but that is barely even anecdata. Their experiences are neutral-to-positive. Little scientific data exists. How to resolve this?
My experience is astonishingly positive. I would not have imagined how much of a help these tools have become. Deep research and similar tools alone have helped me navigate complex legal matters recently for my incorporation; they have uncovered really useful information that I just would not have found that quickly. First Cursor, now Claude Code, have really changed how I work. Especially since, for the last month or so, I increasingly find myself in a position where I can do other things while the machine works. It's truly liberating and it gives me a lot of joy. So it's not "neutral-to-positive" to me, it's exhilarating.
And that extends particularly to this part:
> Despite this plethora of negative experiences, executives are aggressively mandating the use of AI. It looks like without such mandates, most people will not bother to use such tools, so the executives will need muscular policies to enforce its use.
When I was at Sentry, the adoption of AI happened through ICs before the company even put money behind it. In fact, my memory is that we only realized how widespread adoption had become when a surprising number of AI invoices started showing up in IC expense reports. This was from the ground up. For my non-techy friends it's even trickier, because some of them work at companies that outright try to prevent the adoption of AI, yet they are paying for it themselves to help with their work. Some of them even pay for the expensive ChatGPT package! None of this should be disregarded, but it stands in stark contrast to what this post says.
That said, I understand where Glyph comes from and I appreciate that point. There is probably a lot of truth in the tail risks of all of that, and I share those concerns. It just does not take away from my enjoyment and optimism at all.
by grafelic on 6/5/25, 7:43 PM
I am looking forward to my future career as a gardener (although with a smidge of sadness) when AI has sucked out all creativity, ingenuity and enjoyment from my field of work.
by ednite on 6/5/25, 5:24 PM
Thoughtful post.
by empath75 on 6/5/25, 5:13 PM
Even though I've written a bunch of code, I don't really enjoy writing code. I enjoy building systems and solving problems. If I never have to write another line of code to do that, I couldn't be happier.
Other people have invested a lot of their time and their self image in becoming a good "computer programmer", and they are going to have a really hard time interacting with AI programmers or really any other kind of automation that removes the part of the job that they actually enjoy doing.
Really, it's not much different from musicians who got mad at DJs for just playing other people's music, who then got mad at DJs who use Ableton Live instead of learning to beatmatch. Do you think the _process_ of making music is the important part, or the final sounds that are produced?
Just like DJs and collage artists can be inventive and express ideas using work created by other people, people who use AI to code can still express creative ideas through how they use it, combine it, and manipulate it.
by starkparker on 6/5/25, 4:52 PM
This has been my experience as well, especially since the only space I have to work with agents is on C++ projects, where they flat-out spiral into an increasingly dire loop of creating memory leaks, identifying the memory leaks they created, and then creating more memory leaks to fix them.
There are probably some fields or languages where these agents are more effective; I've had more luck with small tasks in JS and Python, for sure. But I've burned a full work week trying and failing to get Claude to add some missing destructors to a working but leaky C++ project.
At one point I let it run fully unattended on the repo in a VM for four hours with the goal of adding a destructor to a class. It added nearly 2k broken LOC that couldn't compile, because its first fix added an undeclared destructor, and its response to that failure was to do the same thing to _every class in the project_, each time saying "I now know what the problem is" as it created yet another new problem.
If LLMs could just lack confidence in themselves for even a moment and state that they don't know what the problem is but are willing to throw spaghetti at the wall to find out, I could respect that more than them marching through with absolute confidence that their garbage code did anything but break the project.
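For readers unfamiliar with the failure mode being described, here is a minimal sketch under assumptions: `Widget` is a made-up class, not code from the poster's project. It shows the kind of fix being asked for (a class that owns a heap allocation and leaks without a destructor), and one plausible reading of the "undeclared destructor" compile error: defining `Widget::~Widget()` in a .cpp file without declaring the destructor in the class is ill-formed.

```cpp
// Hypothetical illustration only; not from the project discussed above.
#include <cstddef>

class Widget {
public:
    explicit Widget(std::size_t n) : buf_(new int[n]), size_(n) {}

    // The fix: declare and define the destructor as part of the class.
    // Without any destructor, buf_ leaks when a Widget goes out of scope;
    // defining ~Widget() only in a .cpp file, with no declaration here,
    // fails to compile (definition of an implicitly-declared destructor).
    ~Widget() { delete[] buf_; }

    // Owning a raw pointer also means handling copies (rule of three);
    // deleting the copy operations is the simplest correct choice here.
    Widget(const Widget&) = delete;
    Widget& operator=(const Widget&) = delete;

private:
    int* buf_;
    std::size_t size_;
};

int main() {
    Widget w(16);  // without the destructor above, this allocation would leak
    return 0;
}
```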
by lsy on 6/5/25, 7:10 PM
In addition there's an element of personality: One person might think it's more meaningful to compose a party invitation to friends personally, another asks ChatGPT to write it to make sure they don't miss anything.
As someone who is on the skeptic side, I of course don't appreciate the "holding it wrong" discourse, which I think is rude and dismissive of people and their right to choose how they do things. But at the end of the day it's just a counterweight to the corresponding discourse on the skeptic side which is essentially "gross, you used AI". What's worse though is that institutions are forcing us to use this type of tool without consent, either through mandates or through integrations without opt-outs. To me it doesn't meet the bar for a revolutionary technology if it has to be propagated forcibly or with social scolding.
by Noumenon72 on 6/5/25, 6:51 PM
The commit history[1] looks like a totally normal commit history on page 1, but I clicked farther down and found commits like
> Ask Claude to fix bug with token exchange callback when reusing refresh token.
> As explained in the readme, when we rotate the refresh token, we still allow the previous refresh token to be used again, in case the new token is lost due to an error. However, in the previous commit, we hadn't properly updated the wrapped key to handle this case.
> Claude also found that the behavior when `grantProps` was returned without `tokenProps` was confusing -- you'd expect the access token to have the new props, not the old. I agreed and had Claude update it.
> Claude Code transcript: https://claude-workerd-transcript.pages.dev/oauth-provider-t...
It seems no different than "run prettier on the code" or "re-run code gen". I don't think it's fair to pick on this specific repo if your objection is just "I don't like reviewing code written by a tool that can't improve, regardless of what the commits look like". I'd call that "misanthropomorphizing".
1. https://github.com/cloudflare/workers-oauth-provider/commits...
by firesteelrain on 6/5/25, 5:05 PM
- Plan my garden
- Identify plant issues
- Help with planting tips
- General research/better google
- Systems Engineering
- etc
Maybe the code generation isn't great, but it is really good in a lot of areas.
by kentonv on 6/5/25, 8:28 PM
I'm certainly not going to tell anyone that they're wrong if they try AI and don't like it! But this guy... did not try it? He looked at a commit log, tried to imagine what my experience was like, and then decided he didn't like that? And then he wrote about it?
Folks, it's really not that hard to actually try it. There is no learning curve. You just run the terminal app in your repo and you ask it to do things. Please, I beg you, before you go write walls of text about how much you hate the thing, actually try it, so that you actually have some idea what you're talking about.
Six months ago, I myself imagined that I would hate AI-assisted coding! Then I tried it. I found out a lot of things that surprised me, and it turns out I don't hate it as much as I thought.
[0] https://github.com/cloudflare/workers-oauth-provider/commits... (link to oldest commits so you can browse in order; newer commits are not as interesting)
by AIorNot on 6/5/25, 7:34 PM
2. People mostly hate new things (human nature)
3. It's fun watching nerds complain about AI after they spent the past 50 years automating other people's careers... turnabout is fair play? All the nerd-sniping around GenAI and tech taking away from programming is funny.
4. Remember it's a tool, not a replacement (yet). Get used to it; we're not supposed to be Luddites. Use it and optimize. It's our job to convince managers and executives that we are masters over the tool and not vice versa.
by wagwang on 6/5/25, 7:26 PM
by energy123 on 6/5/25, 6:29 PM
Well this is the issue. The author assumed that these tools require no skill to use properly.
by Game_Ender on 6/5/25, 5:10 PM
> I have woefully little experience with these tools.
> I do not want to be using the cloud versions of these models with their potentially hideous energy demands; I’d like to use a local model. But there is obviously not a nicely composed way to use local models like this.
> The models and tools that people are raving about are the big, expensive, harmful ones. If I proved to myself yet again that a small model with bad tools was unpleasant to use, I wouldn’t really be addressing my opponents’ views.
Then, without having any real practical experience with the cutting-edge tooling, they predict:
> As I have written about before, I believe the mania will end. There will then be a crash, and a “winter”. But, as I may not have stressed sufficiently, this crash will be the biggest of its kind — so big, that it is arguably not of a kind at all. The level of investment in these technologies is bananas and the possibility that the investors will recoup their investment seems close to zero.
I think a more accurate take is that this will be like self-driving: huge investments, many more losers than winners, and it will take longer than all the boosters think. But in the end we did get actual self-driving cars, and this time, with LLMs, it's something anyone can use by clicking a link, versus waiting for lots of cars to be built and deployed.
by malwrar on 6/5/25, 9:21 PM
> Energy Usage
This has always seemed like a fake problem to me, and mostly just a way to rally environmentalists around protesting genai (and bitcoin and…). The costs (time & capital) associated with nuclear power are mostly regulatory; in the past decade China has routinely built numerous modern reactors while the west has been actively scaling nuclear power down. I have yet to see a good case for not simply scaling up power generation to meet demand. This is assuming that AI power demands remain constant and even require gigawatt-scale power sources to train & operate. For all we know, one paper from one researcher anywhere in the world could show us that current model architectures are orders of magnitude inefficient and make current-standard frontier model training doable in a basement.
> The Educational Impact
A few years ago I came across some cool computer vision videos on YouTube and became inspired to try reading the papers and understanding more. The papers were surprisingly short (only a few dozen pages usually), but contained jargon and special symbols that I found impossible to google. I eventually read more papers and even some books, and discovered that if I had simply read e.g. Hartley & Zisserman's Multiple View Geometry (commonly just referenced as "MVG", another mystery I had to figure out) I would have had the hidden knowledge necessary to understand much of the math. If I had simply read earlier papers, I'd be familiar with the fact that much of the content is just slight additions on previous works rather than fully greenfield ideas. I'm not rich enough or successful enough to access the formal education that would have probably provided me the resources to learn this. YouTube videos from practitioners in the field felt more like insecure nerds gatekeeping their field with obtuse lessons. It took me about a year to produce anything useful without just using OpenCV. When I tried asking the same questions I had as a beginner, ChatGPT was able to answer everything I was confused about, answer my dumb follow-up questions, and even produce demo code so I could explore the concepts. That experience has made it an irreplaceable educational tool for me. I'm sure more people will cheat, but college is already an overvalued competence signal anyway.
> The Invasion of Privacy
Temporary problem; someone will produce a usable hardware equivalent that makes it similarly easy to use genAI without the privacy implications. This is a segment begging for someone to provide a solution; I think most people with the resources to solve it currently just want to be the AI gatekeeper, hence the service model for this tech.
> The Stealing
We won't find common ground; I don't believe in ownership of information and think copyright is largely immoral.
> The Fatigue
This section seems to largely revolve around there being no clear, obvious tooling route to simply using the tech (complete empathy there; I'm currently building my own tooling around it), along with not having a clear mental model of how these models work. I had the same anxiety when ChatGPT first blew my mind, and just decided to learn how they work. They're dirt fucking simple; shitty non-learned computer vision systems are orders of magnitude more complex. You seem like a cool person, hmu my profile name @ gmail if you want to talk about it! I like 3b1b's videos if you want a shorter version; Karpathy is overhyped imo but also cool if you want to build a toy model line by line.
by vouaobrasil on 6/5/25, 5:05 PM
There is no such thing as ethical AI, because "voluntary" usually means participants volunteering without really understanding what they are making: just another tool in the arms race of increasingly sophisticated AI models, models that will largely be needed just to one-up the other guy.
"Ethical" AI is like forced pit-fighting where we ask if we can find willing volunteers to fight to the death for a chance at their freedom. It's sickening.
by mjburgess on 6/5/25, 7:42 PM
Anti-inductive processes are those, like fraud, which change because you have measured them. Once a certain sort of fraud is made illegal and difficult, most fraudsters move on to a different kind.
AI models are, at base, memorisers, with a little structured generalisation around the region of memory. They are not structure learners who happen to memorise. This makes creating a theory of an AI model kind of impossible, because the "representations" they learn are so-called "entangled", another way of saying: garbage. The AI model's representation of language does not match the structure of language; rather, each representation is a mixture of what we can evidently see as several competing structural features. But the model has no access to this structure, because it's literally not a property of the data but of the data-generating process.
Now this seems like "a good theory" of AI in the monkish sense, but a problem arises: for each fragile boundary that benchmarking and testing shows, the AI companies collect data on these failures and retrain the model. This is, in a sense, anti-inductive fraud: companies seek to find out how they are found out, and extend the model to cover those regions.
But models never actually gain the structural capabilities their behaviour is taken to imply. So, of course, this would drive any theorist insane. By the time you've figured out GPT-3.5's fakery, all the OpenAI chatbot data they've collected has made GPT-4, and all the edges have been papered over. Until your fingers are cut next time on the now-hidden fragile boundary of OpenAI's deception.
I can give very good reasons, independent of any particular AI model, why the manifold hypothesis is wrong, why AI models are memorisers plus structure, why they don't find latent structure but combine statistical heuristics, why their representations are not "entangled" such that they can be "disentangled" but are necessarily model-unknowable blends of expert-known scientific joints, and so on.
But none of this knowledge can become useful in the face of a billion-dollar training company competing against me, anti-inductively, to apply any of this to any given model.
Perhaps then, I suppose, we can look at the behaviour these systems induce in their users and its downstream effects. This is what OP here does, throws hands up at a theory and says only: however this is working, it cannot be good.
This is, of course, what we do with fraud, cults, and many other systems of deception. We say: we're not going to argue with the conspiracy theorist; they are specialists at coming up with ever more elaborate adaptations to the self-deception. Instead, we observe by proxy that they behave pathologically, that they are broken in other ways.
I think that's a fair approach, and at least one that allows the author to proceed without a theory.
by triceratops on 6/5/25, 6:12 PM
(Sorry for the low-effort response. But it's kinda true)