by claytonwramsey on 5/4/25, 7:17 PM with 839 comments
by necovek on 5/4/25, 8:43 PM
by bost-ty on 5/4/25, 7:38 PM
In my experiments with LLMs for writing code, I find that the code is objectively garbage if my prompt is garbage. If I don't know what I want, if I don't have any ideas, and I don't have a structure or plan, that's the sort of code I get out.
I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done, as I haven't tried using any models lately for anything beyond helping me punch through boilerplate/scaffolding on personal programming projects.
by jsheard on 5/4/25, 8:55 PM
by EigenLord on 5/5/25, 3:40 AM
by laurentlb on 5/4/25, 9:33 PM
The issue, IMO, is that some people throw in a one-shot, short prompt, and get a generic, boring output. "Garbage in, generic out."
Here's how I actually use LLMs:
- To dump my thoughts and get help organizing them.
- To get feedback on phrasing and transitions (I'm not a native speaker).
- To improve tone, style (while trying to keep it personal!), or just to simplify messy sentences.
- To identify issues, missing information, etc. in my text.
It’s usually an iterative process, and the combined prompt length ends up longer than the final result. And I incorporate the feedback manually.
So sure, if someone types "write a blog post about X" and hits go, the prompt is more interesting than the output. But when there are five rounds of edits and context, would you really rather read all the prompts and drafts instead of the final version?
(if you do: https://chatgpt.com/share/6817dd19-4604-800b-95ee-f2dd05add4...)
by kouru225 on 5/5/25, 12:56 AM
In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.
In general, I’d say that AI is way more useful to compress complex ideas into simple ones than to expand simplistic ideas into complex ones.
by Ancalagon on 5/4/25, 7:47 PM
The world will be consumed by AI.
by Animats on 5/4/25, 8:21 PM
To actually teach this, you do something like this:
"Here's a little dummy robot arm made out of Tinkertoys. There are three angular joints, a rotating base, a shoulder, and an elbow. Each one has a protractor so you can see the angle.
1. Figure out where the end of the arm will be based on those three angles. Those are Euler angles in action. This isn't too hard.
2. Figure out what the angles should be to touch a specific point on the table. For this robot geometry, there's a simple solution, for which look up "two link kinematics". You don't have to derive it, just be able to work out how to get the arm where you want it. Is the solution unambiguous? (Hint: there may be more than one solution, but not a large number.)
3. Extra credit. Add another link to the robot, a wrist. Now figure out what the angles should be to touch a specific point on the table. Three joints are a lot harder than two joints. There are infinitely many solutions. Look up "N-link kinematics". Come up with a simple solution that works, but don't try too hard to make it optimal. That's for the optimal controls course."
This will give some real understanding of the problems of doing this.
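Not a model answer to the exercise, just a minimal sketch of what steps 1 and 2 look like in code for a planar two-link arm; the link lengths and target point are assumed for illustration:

    import math

    L1, L2 = 1.0, 0.8  # assumed link lengths

    def forward(theta1, theta2):
        """Step 1: tip position of the arm given the two joint angles (radians)."""
        x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
        y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
        return x, y

    def inverse(x, y):
        """Step 2: joint angles that reach (x, y); two solutions, elbow up and elbow down."""
        c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        if abs(c2) > 1:
            raise ValueError("target out of reach")
        solutions = []
        for theta2 in (math.acos(c2), -math.acos(c2)):  # the (at most) two elbow choices
            theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                                   L1 + L2 * math.cos(theta2))
            solutions.append((theta1, theta2))
        return solutions

    # Sanity check: every inverse solution maps back to the requested point.
    target = (1.2, 0.5)
    for angles in inverse(*target):
        assert all(abs(a - b) < 1e-9 for a, b in zip(forward(*angles), target))

Step 3's extra wrist joint makes the arm redundant for a planar point target, which is why the solutions go from two to infinitely many.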
by andy99 on 5/4/25, 7:39 PM
As long as LLM output is what it is, there is little threat of it actually being competitive on assignments. If students are attentive enough to paraphrase it into their own voice, I'd call it a win; if they just submit the crap that some data labeling outsourcer has RLHF'd into an LLM, I'd just mark it zero.
by sn9 on 5/4/25, 10:42 PM
It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this, especially in education and academia.
They see work as the mere production of output, without ever thinking about how that work builds knowledge and skills and experience.
Students who know least of all and don't understand the purpose of writing or problem solving or the limitations of LLMs are currently wasting years of their lives letting LLMs pull them along as they cheat themselves out of an education, sometimes spending hundreds of thousands of dollars to let their brains atrophy only to get a piece of paper and face the real world where problems get massively more open-ended and LLMs massively decline in meeting the required quality of problem solving.
Anyone who actually struggles to solve problems and learn themselves is going to have massive advantages in the long term.
by blintz on 5/5/25, 1:39 AM
"No worthy use of an LLM involves other human beings reading its output."
If you use a model to generate code, let it be code nobody has to read: one-off scripts, demos, etc. If you want an LLM to prove a theorem, have it generate some Coq and then verify the proof mechanically. If you ask a model to write you a poem, enjoy the poem, and then graciously erase it.
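To make "verify the proof mechanically" concrete: the checker either accepts the generated proof or rejects it, so no human has to read the output. The comment names Coq; here is the same idea sketched in Lean, with toy theorems that are not from the article:

    -- If this file compiles, the proofs are valid, no matter whether a human
    -- or an LLM wrote them; a wrong proof simply fails to check.
    theorem n_add_zero (n : Nat) : n + 0 = n := rfl

    theorem two_plus_two : 2 + 2 = 4 := rfl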
by ineptech on 5/4/25, 9:07 PM
> Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...
Someone called them out for it; several others defended it. It was brought up in one team's retro and the opinions were divided and very contentious, ranging from "the summary helped make sure everyone had the same understanding and the person who did it was being conscientious" to "the summary was a pointless distraction and including it was an embarrassing admission of incompetence."
Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.
by derefr on 5/4/25, 7:57 PM
No, this is just the de-facto "house style" of ChatGPT / GPT models, in much the same way that that particular Thomas Kinkade-like style is the de-facto "house style" of Stable Diffusion models.
You can very easily tell an LLM in your prompt to respond using a different style. (Or you can set it up to do so by telling it that it "is" or "is roleplaying" a specific type-of-person — e.g. an OP-ED writer for the New York Times, a textbook author, etc.)
People just don't ever bother to do this.
by Workaccount2 on 5/4/25, 10:19 PM
by jjani on 5/5/25, 9:52 AM
I'm going to call out what I see as the elephant in the room.
This is brand new technology and 99% of people are still pretty clueless at properly using it. This is completely normal and expected. It's like the early days of the personal computer. Or Geocities and <blink> tags and under construction images.
Even in those days, incredible things were already possible by those who knew how to achieve them. The end result didn't have to be blinking text and auto-playing music. But for 99% it was.
Similarly, with current LLMs, it's already more than possible to use them in effective ways, without obscuring meaning or adding superfluous nonsense, in ways to which none of the author's criticisms apply. People just don't know how to do it yet. Many never will, just like many never learnt how to actually use a PC past Word and Excel. But many others will learn.
by Krisando on 5/5/25, 2:49 PM
I've used LLMs before to document command-line tools and APIs I've made; they aren't the final product, since I also tweaked the writing and fixed misunderstandings from the LLM. I don't think the author would appreciate the original prompts, where I essentially just dump a lot of code and give instructions in bullet point form on what to output.
This generated documentation is immensely useful, and I use it all the time myself. I prefer the documentation to reading the code because finding what I need at a glance is not trivial, nor is remembering all the conditions, prerequisites, etc.
That being said, the article seems to focus on a use case where LLM is ill-suited. It's not suited for writing papers to pretend you wrote a paper.
> I say this because I believe that your original thoughts are far more interesting
Looking at the example posted, I'm not convinced that most people's original thoughts on gimbal lock will be more interesting than a succinct summary by an LLM.
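(For what it's worth, the gimbal-lock fact in that example is easy to state concretely. A minimal numpy sketch, assuming the Z-Y-X yaw-pitch-roll convention: at 90° pitch, yaw and roll act about the same axis, so distinct angle triples give the same orientation.)

    import numpy as np

    def Rx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def Ry(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def Rz(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def euler(yaw, pitch, roll):
        # Rotation matrix for Z-Y-X (yaw-pitch-roll) Euler angles.
        return Rz(yaw) @ Ry(pitch) @ Rx(roll)

    # At pitch = 90 degrees only (yaw - roll) matters: one degree of freedom is lost.
    print(np.allclose(euler(0.7, np.pi / 2, 0.3), euler(0.5, np.pi / 2, 0.1)))  # True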
by YmiYugy on 5/4/25, 9:55 PM
by pasquinelli on 5/5/25, 3:01 PM
> I believe that the main reason a human should write is to communicate original thoughts.
in fairness to the students, how does the above apply to school work?
why does a student write, anyway? to pass an assignment, which has nothing to do with communicating original thoughts-- and whose fault is that, really?
education is a lot of paperwork to get certified in the hopes you'll get a job. it's as bereft of intellectual life as the civil service examinations in imperial china. original thought doesn't enter the frame.
by oncallthrow on 5/4/25, 8:30 PM
The most obvious ChatGPT cheating, like that mentioned in this article, is pretty easy to detect.
However, a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect.
I think if I was in the teaching profession I'd just leave, to be honest. The joy of reviewing student work will inevitably be ruined by this: there is 0 way of telling if the work is real or not, at which point why bother?
by rocqua on 5/5/25, 5:53 AM
For example, if you already have a theory of your code and you want to make some stuff that is verbose but trivial, it is just more efficient to explain the theory to an LLM and extract the code. I do like the idea of storing the underlying prompt in a comment.
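A minimal sketch of what that might look like; the prompt, the helper, and its behavior are all made up for illustration:

    # Prompt used to generate the function below, kept with the code so the
    # "theory" travels alongside the output:
    #
    #   "Write a dependency-free Python function that flattens a nested dict
    #    into dot-separated keys, e.g. {'a': {'b': 1}} -> {'a.b': 1}."

    def flatten(d: dict, prefix: str = "") -> dict:
        out = {}
        for key, value in d.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                out.update(flatten(value, path))  # recurse into nested dicts
            else:
                out[path] = value
        return out

    assert flatten({"a": {"b": 1}, "c": 2}) == {"a.b": 1, "c": 2}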
Same for writing. If you truly copy paste output, it's obviously bad. But if you workshop a paragraph 5 or 6 times that can really get you unstuck.
Even the Euler angles example: that output would be a good starting point for an investigation.
by internet_points on 5/5/25, 7:23 AM
This so much. A writing exercise sharpens your mind, it forces you to think clearly through problems, gives you practice in both letting your thoughts flow onto paper, and in post-editing those thoughts into a coherent structure that communicates better. You can throw it away afterwards, you'll still be a better writer and thinker than before the exercise.
by robertlagrant on 5/4/25, 8:05 PM
by Terr_ on 5/5/25, 12:51 AM
Yeah, to recycle a comment [0] from a few months back:
> Yeah, one of their most "effective" uses is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight." [...]we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."
In other words, when the presentation means nothing, why bother?
by rralian on 5/4/25, 10:05 PM
by mightyham on 5/5/25, 12:22 PM
by ijidak on 5/5/25, 4:40 AM
Pre-AI, homework was often copied and then individuals just crammed for the tests.
AI is not the problem for these students, it's that many students are only in it for the diploma.
If it wasn't AI it would just be copying the assignment from a classmate or previous grad.
And I imagine the students who really want to learn are still learning because they didn't cheat then, and they aren't letting AI do the thinking for them now.
by theturtletalks on 5/4/25, 11:07 PM
The question is: Should we limit AI to keep the old way of learning, or use AI to make the process better? Instead of fixing small errors like grammar, students can focus on bigger ideas like making arguments clearer or connecting with readers. We need to teach students to use AI for deeper thinking by asking better questions.
We need to teach students that asking the right questions is key. By teaching students to question well, we can help them use AI to improve their work in smarter ways. The goal isn’t to go back to old methods for iterating but change how we iterate altogether.
by zahlman on 5/5/25, 3:30 AM
> You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT. They are invariably
This is validating. Your imitation completely fooled me (I thought it really was ChatGPT and expected to be told as much in an entirely unsurprising "reveal") and the subsequent description of the style is very much in agreement with how I'd characterize it.
In previous discussions here, people have tried to convince me that I can't actually notice these obvious signs, or that I'm not justified in detecting LLM output this way. Well, it may be the case that all these quirks derive from the definitely-human training data in some way, but that really doesn't make them Turing-test-passing. I can remember a few times that other people showed me LLM prose they thought was very impressive and I was... very much not impressed.
> When someone comments under a Reddit post with a computer-generated summary of the original text, I honestly believe that everyone in the world would be better off had they not done so. Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless. In essence, writing such a comment wastes everyone’s time.
I think you've overlooked some meta-level value here. By supplying such a comment, one signals that the article is vapid to other readers who might otherwise have to waste time reading a considerable part of the article to come to that conclusion. But while it isn't as direct as saying "this article is utterly vapid", it's more socially acceptable, and also more credible than a bald assertion.
by Arch-TK on 5/5/25, 12:29 PM
I believe that it has improved my writing productivity somewhat, especially when I'm tired and not completely on the ball. Although I don't usually reach for this most of the time (e.g. not for this comment).
by baalimago on 5/5/25, 7:49 AM
Using LLMs to achieve this is just another step in the evolution of a broken education system. The fix? IMO, make the exams for the courses delayed by one semester. So during the exam study-period, the students have to 'catch up' on the lectures they had a few months ago.
by Noumenon72 on 5/4/25, 7:56 PM
Maybe the problem is that the professor doesn't want to read the student work anyway, since it's all stuff he already knows. If they managed to use their prompts to generate interesting things, he'd stop wanting to see the prompts.
by nitwit005 on 5/5/25, 1:49 AM
The industry hit hardest by AI has been essay writing services.
If anything, it seems they're noticing because the AI is doing a worse job.
by FinnLobsien on 5/4/25, 8:18 PM
I agree with the broader point of the article in principle. We should be writing to edify ourselves and take education seriously because of how deep interaction with the subject matter will transform us.
But in reality, the mindset the author cites is more common. Most accounting majors probably don't have a deep passion for GAAP, but they believe accounting degrees get good jobs.
And when your degree is utilitarian like that, it just becomes a problem of minimizing time spent to obtain the reward.
by sizzzzlerz on 5/5/25, 2:48 PM
Whether it be writing or computer programming, or exercising, for that matter, if you aren't willing to put in the work to achieve your goals, why bother?
by cranium on 5/5/25, 10:16 AM
At first, I thought they didn't care. However, it was so pervasive that it couldn't be the only explanation. I was forced to conclude they trusted ChatGPT more than themselves to argue their case... (Some students did not care, obviously.)
by jjaksic on 5/5/25, 5:47 PM
It can be used as a personal tutor. How awesome is it to have a tutor always available to answer almost any question from any angle to really help you understand? Yes, AI won't get everything right 100%, but for students who are still learning basics, it's fair to assume that having an AI tutor can yield far better results than having no tutor at all.
It can also be used as a tool for doing mundane work, so you can focus more on the interesting and creative work. Kind of like a calculator or a spreadsheet. Would math majors become better mathematicians if they had to do all calculations by hand?
I think instead of banning AI, education needs to reform. Teaching staff should focus less time on giving lectures and grading papers (those things can be recorded and automated) and more time on ORAL EXAMS where they really probe student's knowledge and there's no possibility of cheating.
Students can and should use AI to help them prepare. E.g. don't ask AI to write an essay for you, write it yourself and ask it to critique it. Don't ask it to give you answers for a test, ask it to ask you questions on the topic and find gaps in your knowledge. Etc.
by tomjen3 on 5/4/25, 8:25 PM
This is especially the case when you are about to complain about style, since that can easily be adjusted by simply telling the model what you want.
But I think there is a final point that the author is also wrong about, one that is far more interesting: why we write. Personally I write for 3 reasons: to remember, to share and to structure my thoughts.
If an LLM is better than me at writing (and it is) then there is no reason for me to write to communicate - it is not only slower, it is counterproductive.
If the AI is better at wrangling my ideas into some coherent thread, then there is no reason for me to do it. This one I am least convinced about.
AI is already much better than me at strictly remembering, but computers have been that since forever; the issue is mostly convenient input/output. AI makes this easier thanks to speech-to-text input.
[0]: See eg. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the....
by neilv on 5/4/25, 9:25 PM
The school should be drilling into students, at orientation, what some school-wide hard rules are regarding AI.
One of the hard rules is probably that you have to write your own text and code, never copy&paste. (And on occasions when copy&paste is appropriate, like in a quote, or to reuse an off-the-shelf function, it's always cited/credited clearly and unambiguously.)
And no instructors should be contradicting those hard rules.
(That one instructor who tells the class on the first day, "I don't care if you copy&paste from AI for your assignments, as if it's your own work; that just means you went through the learning exercise of interacting with AI, which is what I care about"... is confusing the students, for all their other classes.)
Much of society is telling students that everything is BS, and that their job is to churn BS to get what they want. Early popular practices around "AI" usage so far look to be accelerating that. Schools should be dropping a brick wall in front of that. Well, a padded wall, for the students who can still be saved.
by neogodless on 5/5/25, 7:21 PM
I wish it was only at stoplights. But then just a few days ago, I witnessed a totally unnecessary accident. Left-lane got green, and someone in the straight lane noticed the movement but didn't look up and drove right into the car in front of them...
by llsf on 5/5/25, 7:39 PM
I almost had headaches after intensely thinking about problems and ways to solve them in lambda Prolog. It was fascinating and satisfying to physically feel the effect of high focus combined with applying what was, to me, a new logic.
Computer science at the university taught me how to learn and explore new ideas. I might sound like my grandpa, who told me when I was 8yo that using a calculator would lead to people not being able to count... and here I am saying that LLMs might lead to people who do not know how to write.
Actually, I am a bit concerned that we might produce more text in the short term, because it is becoming cheap to write tons of documentation with LLMs. But that feels like death by Terms and Conditions, i.e. text that no one reads. So not only would we lose our ability to write, we could also seriously harm our ability to read. Sure, LLMs can summarize as well, but then we lose the nuances.
Nature is lazy, but should we be lazy too and delegate our ability to think (read/write) to software? Think about it :)
by tabbott on 5/4/25, 11:17 PM
Even when no errors are introduced in the process, the outcome is always bad: 3 full paragraphs of text with bullets and everything where the actual information is just the original 1-2 sentences that the model was prompted with.
I never am happy reading one of those; it's just a waste of time. A lot of the folks doing it are not native English speakers. But for their use case, older tools like Grammarly that help improve the English writing are effective without the problematic decompression downsides of this class of LLM use.
Regardless of how much LLMs can be an impactful tool for someone who knows how to use one well, definitely one of the impacts of LLMs on society today is that a lot of people think that they can improve their work by having an LLM edit it, and are very wrong.
(Sometimes, just telling the LLM to be concise can improve the output considerably. But clearly many people using LLMs think the overly verbose style it produces is good.)
by TeMPOraL on 5/4/25, 8:20 PM
EDIT: Not a jab at the author per se, more that it's a third or fourth time I see this particular argument in the last few weeks, and I don't recall seeing it even once before.
by ruuda on 5/5/25, 5:37 AM
by psychoslave on 5/5/25, 6:28 AM
More than communicate, I would say to induce thoughts.
I write poetry here and there (on paper, just for me). I like how exploration through lexical and syntactic spaces can be intertwined with semantic and pragmatic matters. More importantly, I appreciate how careful thoughts play with attention and other uncharted thoughts. The invisible side effects on mental structures happening in the creation of expression can largely outweigh the importance of what is left as a publicly visible artefact.
For a far more trivial example, we can think about how notes in the margin of a book can radically change the way we engage with the reading. Even a carefully highlighted spare word can make a world of difference in how we engage with the topic. It's the very opposite of "reading" a few pages before realizing that not a single thought percolated into consciousness while the mind was wandering elsewhere.
by ctkhn on 5/4/25, 9:05 PM
AI usage is a lot higher in my work experience among people who no longer code and are now in business/management roles, or engineers who are very new and didn't study engineering. My manager and skip level both use it for all sorts of things that seem pointless, and the bootcamp/nontraditional engineers use it heavily. The college hires we have who went through a CS program don't use it because they are better and faster than it for most tasks. I haven't found it to be useful without an enormous prompt, at which point I'd rather just implement the feature myself.
by cryptoegorophy on 5/4/25, 9:20 PM
by skerit on 5/5/25, 7:59 AM
I don't understand this either. I use it a lot, but I never just use what an LLM says verbatim. It's so incredibly obvious it's not written by a human. Most of the time I write an initial draft, ask Claude to check it and improve it, and then I might touch up a few sentences here and there.
> Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless.
Maybe I still don't know what vibe coding is, but for the few times when I _can_ use an LLM to write code for me, I write a pretty elaborate instruction on what I want, how it should be written, ... Most of the time I use it for writing things I know it can do and seem tedious to me.
by YmiYugy on 5/4/25, 10:17 PM
I have to admit I was a bit surprised how bad LLMs are at the continue-this-essay task. When I read it in the blog I suspected this might have been a problem with the prompt or with using one of the smaller variants of Gemini. So I tried it with Gemini 2.5 Pro and iterated quite a bit, providing generic feedback without offering solutions. I could not get the model to form a coherent, well-reasoned argument. Maybe I need to recalibrate my expectations of what LLMs are capable of, but I also suspect that current models have heavy guardrails, use a low temperature, and have been specifically tuned to solve problems and avoid hallucinations as much as possible.
by CivBase on 5/5/25, 12:17 PM
IMO the core problem is that in many cases this typical belief holds true.
I went to university to get a degree for a particular field of jobs. I'd generously estimate that about half of my classes actually applied to that field or jobs. The other half were required to make me a more "well rounded student" or something like that. But of course they were just fluff to maximize my tuition fees.
There was no university that offered a more affordable program without the fluff. After all, the fluff is a core part of the business model. But there isn't much economic opportunity without a diploma so students optimize around the fluff.
by articsputnik on 5/6/25, 10:09 AM
I couldn’t agree more with the sentiment of this article.
Writing yourself, _writing manually_, is much nicer: you hear your unfiltered thoughts instead of condensing them through an LLM and getting average-sounding sentences with no soul. To me, LLM writing is soulless. I even turned away from Grammarly and Copilot, as these were a mere distraction from the actual task at hand: writing. Instead of writing, I was constantly grammar fixing, and ultimately, nothing got done. I love the gym analogy https://news.ycombinator.com/item?id=43888803 gave.
by satisfice on 5/4/25, 9:14 PM
by robwwilliams on 5/4/25, 9:21 PM
by Tteriffic on 5/4/25, 7:38 PM
by tptacek on 5/4/25, 8:36 PM
As always, I reject wholeheartedly what this skeptical article has to say about LLMs and programming. It takes the (common) perspective of "vibe coders", people who literally don't care what code says as long as something that runs comes out the other side. But smart, professional programmers use LLMs in different ways; in particular, they review and demand alterations to the output, the same way you would doing code review on a team.
by scarface_74 on 5/4/25, 7:53 PM
Yes I know the subject area for which I write assessments and know if what is generated is factually correct. If I’m not sure, I ask for web references using the web search tool.
https://chatgpt.com/share/6817c46d-0728-8010-a83d-609fe547c1...
by bobdosherman on 5/5/25, 1:48 PM
by austin-cheney on 5/5/25, 3:27 PM
To play devil's advocate: original code alienates you from many programming jobs. This was true before LLMs, and remains true now. Many developers abhor original code. They need frameworks or packages from Maven, NPM, pip, or whatever. They need to be told exactly what to do in the code, but copy/paste is better, and a package that already does it for you is better still. In these jobs, yes, absolutely let a computer write it for you (or at least let an untrusted outside stranger do so). Writing the code yourself will often alienate you from your peers and violate some internal process.
by nathants on 5/4/25, 8:23 PM
if you can one-shot an answer to some problem, the problem is not interesting.
the result is necessary, but not sufficient. how did you get there? how did you iterate? what were the twists and turns? what was the pacing? what was the vibe?
no matter if with encyclopedia, google, or ai, the medium is the message. the medium is you interacting with the tools at your disposal.
record that as a video with obs, and submit it along with the result.
for high stakes environments, add facecam and other information sources.
reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.
by myth2018 on 5/5/25, 12:39 AM
That part caught my attention. As an English-as-a-second-language speaker myself, I find it so difficult to develop any form of "taste" in English the same way I have in my mother tongue. A badly written sentence in my mother tongue feels painful in a sort of physical way, while bad English usually sounds OK to me, especially when asserted in the confident tone LLMs are trained in. I wish I could find a way to develop such a sense for the foreign languages I currently use.
by junto on 5/5/25, 5:39 AM
Libraries are still in every campus, often with internet access.
Traditional media have transitioned to become online content media farms. The NYT Crossword puzzle is now online. Millions of people do Wordle every day online.
This is just pushback. Every paradigm shift needs pushback in order to let the dust settle and for society to readjust and find equilibrium again.
by jez on 5/4/25, 9:39 PM
People say “I saved so much time on perf this year with the aid of ChatGPT,” but ChatGPT doesn’t know anything about your working relationship with your coworker… everything interesting is contained in the prompt. If you’re brain dumping bullet points into an LLM prompt, just make those bullets your feedback and be done with it? Then it’ll be clear what the kernel of feedback is and what’s useless fluff.
by zhyder on 5/5/25, 3:45 PM
by hi_hi on 5/5/25, 10:24 AM
How about an emoji-like library designed exclusively for LLMs, so we can quickly condense context and mood without having to write a bunch of paragraphs, or the next iteration of "txt" speech for LLMs? What does the next step of users optimising for LLMs look like?
I miss the 80's/90's :-(
by qwertox on 5/5/25, 3:43 PM
Am I alone with this?
by zjp on 5/5/25, 6:43 AM
by spatchcock on 5/5/25, 2:47 PM
by RicoElectrico on 5/4/25, 10:02 PM
There's so much bad writing of valuable information out there. The major sins being: burying the lede, no or poor sectioning, and general verbosity.
In some cases, like EULAs and patents, that's intentional.
by markusde on 5/5/25, 2:20 AM
The punchline? Bullet point 3 was wrong (it was a PL assignment and I'm 99% sure the AI was picking up on the word macro and regurgitating facts about LISP). 0 points all around, better luck next time.
by afavour on 5/4/25, 8:56 PM
I wish to communicate four points of information to you. I’ll ask ChatGPT to fluff those up into multiple paragraphs of text for me to email.
You will receive that email, recognize its length and immediately copy and paste it into ChatGPT, asking it to summarize the points provided.
Somewhere off in the distance a lake evaporates.
by casey2 on 5/5/25, 12:00 AM
It's the old joke of the teacher who tells students to try their best and that failure doesn't matter. But when the student follows the process to the best of their ability and fails, they are punished, while the student who mostly follows the process and then fudges their answer to the correct one is rewarded.
by colbyn on 5/5/25, 9:09 AM
Personally, I’ve been enjoying using ChatGPT to explore different themes of writing. It’s fun. In my case the goal is specifically to produce artifacts of text that’s different from what I’d normally produce.
by bertil on 5/4/25, 9:02 PM
by samyar on 5/5/25, 12:48 PM
I think AI is good in two ways. One is when you use it as a small helper (basic questions, auto-completion...).
Two is for getting started on something that you have no idea about (not to teach you, but just to give you an idea of what it is and resources to learn more).
by wseqyrku on 5/5/25, 2:23 PM
If you use AI, all that is important is your ability to specify the problem. Of course, that has always been the case; you can just iterate faster now.
by thomasvn on 5/7/25, 5:52 PM
What about using LLMs to refine or sharpen your existing work? Similar to a Rubber ducky? If you're intentional about maintaining and understanding the theory behind the work, I've found it a useful tool.
by SebFender on 5/5/25, 10:52 AM
Simply blaming models is an easy way out and creates little value - Maybe changing the medium and exercise to which it transfers could be a thing?
It's time to get creative.
by A_Stefan on 5/5/25, 7:48 AM
Since there is no "interdiction" (prohibition) against using an LLM, perhaps it should be mandatory to include the prompt as well when one is used. Feels like that could be the seed that sparks curiosity.
by polpenn on 5/5/25, 3:09 AM
It helps me spot the bits that feel flat or don’t add much, so I can cut or rework them—while still getting the benefit of the LLM’s idea generation.
by Zebfross on 5/5/25, 2:45 AM
by lqr on 5/5/25, 5:53 AM
I wish there was some way to do the same for programming. Imagine a classroom full of machines with no internet connection, just a compiler and some offline HTML/PDF documentation of languages and libraries.
by psygn89 on 5/4/25, 11:40 PM
by GuB-42 on 5/4/25, 11:12 PM
It feels like we are getting to this weird situation where we just use LLMs as proxies, and the long, boring text is just for LLMs to talk to each other.
For example:
Person A to LLM A: Give me my money.
LLM A to LLM B: Long formal letter.
LLM B to Person B : Give me my money.
Hopefully, nothing is lost in translation.
by Gud on 5/5/25, 7:39 AM
by lgiordano_notte on 5/5/25, 11:21 AM
by jama211 on 5/5/25, 1:38 AM
If your assignment can be easily performed by an LLM, it’s a bad assignment. Teachers are just now finding out the hard way that these assignments always sucked and were always about regurgitating information pointlessly and weren’t helpful tools for learning lol. I did heaps of these assignments before the existence of LLMs, and I can assure you that the busywork was mostly a waste of time back then too.
People using LLMs is just proof they don’t respect your assignment - and you know what, if one person doesn’t respect your assignment, they’re probably wrong. But if 90% of people don’t respect your assignment? Maybe you should consider whether the assignment is the problem. It’s not rocket science.
by agentbrown on 5/4/25, 9:11 PM
1. “When copying another person’s words, one doesn’t communicate their own original thoughts, but at least they are communicating a human’s thoughts. A language model, by construction, has no original thoughts of its own; publishing its output is a pointless exercise.”
LLMs, having been trained on the corpus of the web, I would argue communicate other humans' thoughts particularly well. Only in exercising an avoidance of plagiarism are the thoughts of other humans evolved into something closer to "original thought" for the would-be plagiarizer. But yes, at least a straight copy/paste retains the same rhetoric as the original human.
2. I've seen a few advertisements recently leverage "the prompt" as a means of visual appeal.
i.e. a new fast-food delivery service starting their ad with some upbeat music and a visual presentation of somebody typing into an LLM interface, "Where’s the best sushi around me?" And then cue the advertisement for the product they offer.
by eunos on 5/4/25, 9:27 PM
by palata on 5/4/25, 10:37 PM
The very first time I enjoyed talking to someone in another language, I was 21. Then an exchange student, I had a pleasant and interesting discussion with someone in that foreign language. On the next day, I realised that I wouldn't have been able to do that without that foreign language. I felt totally stupid: I had been getting very good grades in languages for years at school without ever caring about actually learning the language. And now, it was obvious, but all that time was lost; I couldn't go back and do it better.
A few years earlier, I had this great history teacher in high school. Instead of making us learn facts and dates by heart, she wanted us to actually get a general understanding of a historical event. Actually internalise, absorb the information in such a way that we could think and talk about it. And eventually develop our critical thinking. It was confusing at first, because when we asked "what will the exam be about", she wouldn't say "the material in those pages". She'd be like "well, we've been talking about X for 2 months, it will be about that".
Her exams were weird at first: she would give us articles from newspapers and essentially ask what we could say about them. Stuff like "Who said what, and why? And why does this other article disagree with the first one? And who is right?". At first I was confused, and eventually it clicked and I started getting really good at this. Many students got there as well, of course. Some students never understood and hated her: their way was to learn the material by heart and prove it to get a good grade. And I eventually realised this: those students who were not good at this were actually less interesting when they talked about history. They lacked this critical thinking, they couldn't form their own opinion or actually internalise the material. So whatever they would say on this topic was uninteresting: I had been following the same course, I knew which events happened and in which order. With the other students, where it "clicked" as well, I could have interesting discussions: "Why do you think this guy did this? Was it in good faith or not? Did he know about that when he did it? etc".
She was one of my best teachers. Not only did she get me interested in history (which had never been my thing), but she also got me to understand how to think critically, and how important it is to internalise information in order to do that. I forgot a lot of what we studied in her class. I never lost the critical thinking. LLMs cannot replace that.
by boredatoms on 5/4/25, 7:57 PM
by 6510 on 5/5/25, 6:10 AM
by sieve on 5/4/25, 9:49 PM
I like reading and writing stories. Last month, I compared the ability of various LLMs to rewrite Saki's "The Open Window" from a given prompt.[1] The prompt follows the 13-odd attempts. I am pretty sure in this case that you'd rather read the story than the prompt.
I find the disdain that some people have for LLMs and diffusion models to be rather bizarre. They are tools that are democratizing some trades.
Very few people (basically, those who can afford it) write to "communicate original thoughts." They write because they want to get paid. People who can afford to concentrate on the "art" of writing/painting are pretty rare. Most people are doing these things as a profession with deadlines to meet. Unless you are GRRM, you cannot spend decades on a single book waiting for inspiration to strike. You need to work on it. Also, authors writing crap/gold at a per-page rate is hardly something new.
LLMs are probably the most interesting thing I have encountered since the computer itself. These puritans should get off of their high horse (or down from their ivory tower) and join the plebes.
[1] Variations on a Theme of Saki (https://gist.github.com/s-i-e-v-e/b4d696bfb08488aeb893cce3a4...)
by programjames on 5/4/25, 8:54 PM
by QuadmasterXLII on 5/5/25, 12:40 AM
by dakiol on 5/4/25, 10:10 PM
by xmorse on 5/4/25, 8:09 PM
by jonniebullie on 5/5/25, 12:44 PM
by pmarreck on 5/5/25, 5:03 AM
Perhaps that's good, perhaps that's bad, but it certainly doesn't really allow him to see much of the appeal... yet
by xixixao on 5/5/25, 2:52 PM
by zombiwoof on 5/4/25, 11:42 PM
by sillysaurusx on 5/4/25, 8:14 PM
Forcing people to do these things supposedly results in a better, more competitive society. But does it really? Would you rather have someone on your team who did math because it let them solve problems efficiently, or did math because it’s the trick to get the right answer?
Writing is in a similar boat as math now. We’ll have to decide whether we want to force future generations to write against their will.
I was forced to study history against my will. The tests were awful trivia. I hated history for nearly a decade before rediscovering that I love it.
History doesn’t have much economic value. Math does. Writing does. But is forcing students to do these things the best way to extract that value? Or is it just a tradition we inherited and replicate because our parents did?
by Pxtl on 5/5/25, 12:58 PM
by halfadot on 5/5/25, 12:31 AM
Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.
by quest88 on 5/4/25, 11:45 PM
by LeroyRaz on 5/6/25, 12:06 AM
by barbazoo on 5/4/25, 8:41 PM
by r0b05 on 5/5/25, 6:19 AM
by unreal37 on 5/4/25, 9:33 PM
by neilwilson on 5/5/25, 4:38 AM
Pithy and succinct takes time.
by revskill on 5/4/25, 8:45 PM
by cryptozeus on 5/4/25, 8:15 PM
by xkcd1963 on 5/5/25, 5:20 AM
by firefoxd on 5/4/25, 8:21 PM
The goal is to make something legible, but the reality is we are producing slop. I'm back to writing before my brain becomes lazy.
by kookamamie on 5/4/25, 7:34 PM
There's too much information in the world for it to matter; that, I think, is the underlying reason.
As an example, most enterprise communication nears the levels of noise in its content.
So, why not let a machine generate this noise, instead?
by palata on 5/4/25, 9:29 PM
Yes, totally. Unfortunately, it takes time and maturity to understand how this is completely wrong, but I feel like most students go through that belief.
Not sure how relevant it is, but it makes me think of two movies with Robin Williams: Dead Poets Society and Good Will Hunting. In the former, Robin's character manages to get students interested in stuff instead of "just passing the exams". In the latter, I will just quote this part:
> Personally, I don’t give a shit about all that, because you know what? I can’t learn anything from you I can’t read in some fuckin’ book. Unless you wanna talk about you, who you are. And I’m fascinated. I’m in.
I don't give a shit about whether a student can learn the book by heart or not. I want the student to be able to think on their own; I want to be able to have an interesting discussion with them. I want them to think critically. LLMs fundamentally cannot solve that.
by me3meme on 5/5/25, 12:48 PM
by j2d3 on 5/5/25, 2:06 AM
by cadamsdotcom on 5/5/25, 12:17 AM
Exploring a concept-space with LLM as tutor is a brilliant way to educate yourself. Whereas pasting the output verbatim, passing it as one’s own work, is tragic: skipping the only part that matters.
Vibe coding is fun right up to the point it isn’t. (Better models get you further.) But there’s still no substitute for guiding an LLM as it codes for you, incrementally working and layering code, committing to version control along the way, then putting the result through both AI and human peer code reviews.
Yet these all qualify as “using AI”.
We cannot get new language for discussing emerging distinctions soon enough. Without them we only have platitudes like “AI is a powerful tool with both appropriate and inappropriate uses and determining which is which depends on context”.
by TZubiri on 5/4/25, 7:52 PM
by ArthurStacks on 5/5/25, 4:29 AM
by sussmannbaka on 5/5/25, 4:41 AM
by Retr0id on 5/4/25, 8:43 PM
by quijoteuniv on 5/4/25, 8:43 PM
by unraveller on 5/5/25, 2:21 PM
by shaimagz on 5/5/25, 10:47 AM
we agree. mixus makes that easy — across teams, classes, and communities.
by harha_ on 5/5/25, 10:15 AM
by tkgally on 5/5/25, 1:53 AM
That said, I myself am increasingly reading long texts written by LLMs and learning from them. I have been comparing the output of the Deep Research products from various companies, often prompting for topics that I want to understand more deeply for projects I am working on. I have found those reports very helpful for deepening my knowledge and understanding and for enabling me to make better decisions about how to move forward with my projects.
I tested Gemini and ChatGPT on “utilizing Euler angles for rotation representation,” the example topic used by the author in the linked article. I first ran the following metaprompt through Claude:
Please prepare a prompt that I can give to a reasoning LLM that has web search and “deep research” capability. The prompt should be to ask for a report of the type mentioned by the sample “student paper” given at the beginning of the following blog post: https://claytonwramsey.com/blog/prompt/ Your prompt should ask for a tightly written and incisive report with complete and accurate references. When preparing the prompt, also refer to the following discussion about the above blog post on Hacker News: https://news.ycombinator.com/item?id=43888803
I put the full prompt written by Claude at the end of the Gemini report, which has some LaTeX display issues that I couldn’t get it to fix: https://docs.google.com/document/d/1sqpeLY4TWD8L4jDSloeH45AI...
Here is the ChatGPT report:
https://chatgpt.com/share/681816ff-2048-8011-8e0f-d8cbad2520...
I know nothing about this topic, so I cannot evaluate the accuracy or appropriateness of the above reports. But when I have had these two Deep Research models produce similar reports on topics I understand better, they have indeed deepened my understanding and, I hope, made me a bit wiser.
The challenge for higher education is trying to decide when to stick to the traditional methods of teaching—in this case, having the students learn through the process of writing on their own—and when to use these powerful new AI tools to promote learning in other ways.
by hoppp on 5/4/25, 8:35 PM
The kids these days got everything...
by perching_aix on 5/4/25, 11:36 PM
Back in HS literature class, I had to produce countless essays on a number of authors and their works. It never once occurred to me that it was anything BUT an exercise in producing a reasonably well written piece of text, recounting rote-memorized talking points.
Through-and-through, it was an exercise in memorization. You had to recall the fanciful phrases, the countless asinine professional interpretations, brief bios of the people involved, a bit of the historical and cultural context, and even insert a few verses and quotes here and there. You had to make the word count, and structure your writing properly. There was never any platform for sharing our own thoughts per se, which was sometimes acknowledged explicitly, and this was most likely because the writing was on the wall: nobody cared about these authors or their works, much less enjoyed or took interest in anything about them.
I cannot recount a single thought I memorized for these assignments back then. I usually passed these with flying colors, but even for me, this was just pure and utter misery. Even in hindsight, the sheer notion that this was supposed to make me think about the subject matter at hand borders on laughable. It took astronomical efforts to even retain all the information required - where would I have found the power in me to go above and beyond, and meaningfully evaluate what was being "taught" to me in addition to all this? How would it have mattered (specifically in the context of the class)? Me actually understanding these topics and pondering them deeply is completely unobservable through essay writing, which was the sole method of grading. If anything, it made me biased against doing so, as it takes potentially infinite extra time and effort. And since there was approximately no way for our teacher to make me interested in literature either, he had no chance at achieving such lofty goals with me, if he ever actually aimed for them.
On the other side of the desk, he also had literal checklists. Pretty sure that you do too. Is that any environment for an honest exchange of thoughts? Really?
If you want to read people's original thoughts, maybe you should begin with not trying to coerce them into producing some for you on demand. But that runs contrary to the overarching goal here, so really, maybe it's the type of assignment that needs changing. Or the framework around it. But then academia is set in its ways, so really, there's likely nothing you can specifically do. You don't deserve to have to sift through copious amounts of LLM generated submissions; but the task of essay writing does, and you're now the one forced to carry this novel burden.
LLMs caught incumbent pedagogical practices with their pants down, and it's horrifying to see people still being in denial of it, desperately trying to reason and bargain their ways out of it, spurred on by the institutionally ingrained mutual-hostage scenario that is academia. *
* Naturally, I have absolutely zero formal relation to the field of pedagogy (just like the everyday practice of it in academia to my knowledge). This of course doesn't stop me from having an unreasonably self-confident idea on how to achieve what you think essay writing is supposed to achieve though, so if you want a terrible idea or two, do let me know.
by time4tea on 5/5/25, 9:55 AM
(AI slop). If it's not worth writing, it's not worth reading.
Perfect.
by cortesoft on 5/4/25, 7:38 PM
Really? The example used was for a school test. Is there really much original thought in the answer? Do you really want to read the students original thought?
I think the answer is no in this case. The point of the test is to assess whether the student has learned the topic or not. It isn’t meant to share actual creative thoughts.
Of course, using AI to write the answer is contrary to the actual purpose, too, but that isn't because you want to hear the student's creativity; it's because it fails to serve its purpose as a demonstration of knowledge.
by alganet on 5/5/25, 12:05 AM
Relying on that to automatically detect their use makes no sense.
From a teaching perspective, if there is any expectation that artificial intelligence is going to stick, we need better teachers. Ones that can come up with exercises that an artificial intelligence can't solve, but are easy for humans.
But I don't expect that to happen. I expect instead text to become more irrelevant. It already has lost a lot of its relevancy.
Can handwriting save us? Partially. It won't prevent anyone from copying artificial intelligence output, but it will make anyone that does so think about what is being written. Maybe think "do I need to be so verborragic?".
by qustrolabe on 5/4/25, 9:35 PM