from Hacker News

Trying to teach in the age of the AI homework machine

by notarobot123 on 5/26/25, 7:20 PM with 649 comments

  • by math_dandy on 5/26/25, 9:34 PM

    I teach math at a large university (30,000 students) and have also gone “back to the earth”, to pen-and-paper, proctored exams.

    Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I, and one of these sections is offered online/remote, then none of the seven are allowed any in-person assessment. For “fairness”. Seriously.

  • by sshine on 5/26/25, 11:03 PM

    I teach computer science / programming, and I don't know what a good AI policy is.

    On the one hand, I use AI extensively for my own learning, and it's helping me a lot.

    On the other hand, it gets work done quickly and poorly.

    Students mistake mandatory assignments for something they have to overcome as effortlessly as possible. Once they're past this hurdle, they can mind their own business again. To them, AI is not a tutor, but a homework solver.

    I can't ask them to not use computers.

    I can't ask them to write in a language I made the compiler for that doesn't exist anywhere, since I teach at a (pre-university) level where that kind of skill transfer doesn't reliably occur.

    So far we do project work and oral exams: project work because it relies on cooperation and the assignment and evaluation are open-ended (there's no single task description that can be fed into an LLM); oral exams because they make it obvious how skilled students are and how deep their knowledge is.

    But every year a small handful of dum-dums makes it all the way to the exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far, without any teacher calling their bullshit, won't lead them to a meaningful existence as a professional programmer.

    Teaching Linux basics doesn't suffer the same way, because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.

    Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.

  • by plantwallshoe on 5/27/25, 3:33 PM

    I’m enrolled in an undergraduate CS program as an experienced (10 year) dev. I find AI incredibly useful as a tutor.

    I usually ask it to grade my homework for me before I turn it in. I usually find I didn’t really understand some topic and the AI highlights this and helps set my understanding straight. Without it I would have just continued on with an incorrect understanding of the topic for 2-3 weeks while I wait for the assignment to be graded. As an adult with a job and a family this is incredibly helpful as I do homework at 10pm and all the office hours slots are in the middle of my workday.

    I do admit though it is tough figuring out the right amount to struggle on my own before I hit the AI help button. Thankfully I have enough experience and maturity to understand that the struggle is the most important part and I try my best to embrace it. Myself at 18 would definitely not have been using AI responsibly.

  • by jumploops on 5/26/25, 10:00 PM

    A bit off-topic, but I think AI has the potential to supercharge learning for the students of the future.

    Similar to Montessori, LLMs can help students who wander off in various directions.

    I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.

    Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious…

    My hope is that, with new teaching methods/styles, we can unlock (or just maintain!) the curiosity inherent in every pupil.

    (If anyone knows of a tool like this, where an LLM stays on a high-level trajectory of e.g. teaching trigonometry, but allows off-shoots/adventures into other topical nodes, I’d love to know about it!)

  • by ghusto on 5/27/25, 12:10 PM

    I've always thought that the education system was broken and next to worthless. I've never felt that teachers ever tried to _teach_ me anything, certainly not how to think. In fact, I saw most attempts at thought squashed because they didn't fit neatly into the syllabus (and so couldn't be graded).

    The fact that AI can do your homework should tell you how much your homework is worth. Teaching and learning are collaborative exercises.

  • by nkrisc on 5/26/25, 9:17 PM

    If the trend continues, it seems like most college degrees will be completely worthless.

    If students using AI to cheat on homework are graduating with a degree, then it has lost all value as a certificate that the holder has completed some minimum level of education and learning. Institutions that award such degrees will be no different than degree mills of the past.

    I’m just grateful my college degree has the year 2011 on it, for what it’s worth.

  • by yazantapuz on 5/27/25, 11:24 AM

    I teach at a small university. These are some of the measures we take:

    - Hand written midterms and exams.

    - The students should explain how they designed and how they coded their solutions to programming exercises (we have 15-20 students per class; with more students it becomes more difficult).

    - Presentations of complex topics (after that the rest of the students should comment something, ask some question, anything related to the topic)

    - Presentation of one page of handwritten notes, a diagram, a mindmap, etc., about the content discussed.

    - Last-minute changes to the more elaborate programming labs that should be resolved in class (for example, "the client" changed its mind about some requirement or asked for a new feature).

    The real problem is that it is a (lot) more work for the teachers and not everyone is willing to "think outside of the box".

    (edit: format)

  • by johnea on 5/26/25, 10:08 PM

    One of the most offensive words in the anthropomorphization of LLMs is: hallucinate.

    It's not only an anthropomorphism, it's also a euphemism.

    A correct interpretation of the word would imply that the LLM has some fantastical vision that it mistakes for reality. What utter bullsh1t.

    Let's just use the correct word for this type of output: wrong.

    When the LLM generates a sequence of words that may or may not be grammatically correct but implies a state or conclusion that is not factually correct, let's state what actually happened: the LLM-generated text was WRONG.

    It didn't take a trip down Alice's rabbit hole; it just put words together into a stream that implied a piece of information that was incorrect. It was just WRONG.

    The euphemistic aspect of using this word is a greater offense than the anthropomorphism, because it paints some cutesy picture of what happened instead of accurately acknowledging that the s/w generated an incorrect result. It's covering up for the inherent shortcomings of the tech.

  • by marcus_holmes on 5/27/25, 4:39 AM

    My essay-writing process for my MBA was:

    - decide what I wanted to say about the subject, from the set of opinions I already possess

    - search for enough papers that could support that position. Don't read the papers, just scan the abstracts.

    - write the essay. Scan the reference papers for the specific bits that best supported the point I wanted to make.

    There was zero learning involved in this process. The production of the essay was more about developing journal search skills than absorbing any knowledge about the subject. There are always enough papers to support any given point of view, the trick was finding them.

    I don't see how making this process even more efficient by delegating the entire thing to an LLM is affecting any actual education here.

  • by jamesgill on 5/27/25, 12:41 AM

    The fundamental question that AI raises for me, but nobody seems to answer:

    In our competitive, profit-driven world--what is the value of a human being and having human experiences?

    AI is neither inevitable nor necessary--but it seems like the next inevitable step in reducing the value of a human life to its 'outputs'.

  • by agrippanux on 5/27/25, 3:04 PM

    I use AI to help my high-school-age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI to help him edit it.

    What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, grammar, and spelling, and to make sure his thesis is sound and pulled through the whole piece. He then takes that output and compares it against his original essay paragraph by paragraph, looking at what changes were made and why, and, crucially, whether he thinks it's better than what he originally had.
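
    Paraphrased, the instruction we give Claude looks something like this (a sketch, not the exact wording):

        "Here is the essay, the grading rubric, and the supporting documents.
        Do not change the writer's structure, tone, or argument. Edit only for
        repetitive sentences, word count, grammar, and spelling, and check
        that the thesis is sound and pulled through the whole piece."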

    This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.

  • by czhu12 on 5/27/25, 3:13 AM

    To offer a flip side of the coin, I can't imagine I would have the patience outside of school, to have learned Rust this past year without AI.

    Having a personal tutor who I can access at all hours of the day, and who can answer off hand questions I have after musing about something in the shower, is an incredible asset.

    At the same time, I can totally believe if I was teleported back to school, it would become a total crutch for me to lean on, if anything just so I don't fall behind the rest of my peers, who are acing all the assignments with AI. It's almost a game theoretic environment where, especially with bell curve scaling, everyone is forced into using AI.

  • by bosuanzi on 5/26/25, 10:17 PM

    Different times have different teaching tasks; that is a sign of human progress.

    Just like after the invention of computers, methods for doing manual calculations faster could be eliminated from the curriculum. Education shifted towards teaching students how to use computational tools effectively. This allowed students to solve more complex problems and work on higher-level concepts that manual calculations couldn't easily address.

    In the era of AI, what teachers need to think about is not how to punitively prohibit students from using AI, but how to adjust the teaching content so that AI helps students master their subjects faster and better.

  • by owenpalmer on 5/27/25, 12:33 AM

    As an engineering undergrad, I don't think any online work should count toward the student's grade, unless you're allowed to use the Internet however you want to complete it. There simply isn't any other way of structuring the course that doesn't punish the honest students.
  • by snickerbockers on 5/27/25, 7:56 AM

    I think we (as in, the whole species) need to reflect on what the purpose of education is and what it should be, because in theory there's no reason why anybody should pay for college tuition and then undermine their own mastery of the subject. Obviously 90% of the student body sees it as a ticket to being taken seriously by prospective employers, and the other 10% definitely does not deserve to be taken seriously by prospective employers because they can't even admit an uncomfortable truth about themselves.

    Anyways, this isn't actually useful advice because no one person can enact change on a societal scale, but I do enjoy standing on this soapbox and yelling at people.

    BTW, academic success has never been a fair measure of anything; standards and curricula vary widely between institutions. I spent four years STRUGGLING to get a 3.2 GPA in high school, then when I got to undergrad we had to take this "math placement exam" that was just basic algebra, and I only had difficulty with one or two problems, yet I knew several kids with >= 4.0 GPAs who had to take remedial algebra because they failed it.

    But somehow there's always massive pushback against standardized testing, even when they let you take it over and over again until you get the score you wanted (SAT).

  • by TimorousBestie on 5/26/25, 8:49 PM

    I predict that asking students to hand-write assignments is not going to go well. Unfortunately, universities built on the consumer model (author teaches at Arizona State) are incentivized to listen to student feedback over the professor’s good intentions.
  • by esafak on 5/26/25, 9:21 PM

    Let students use AI as they will when learning, but verify without allowing them to use it -- in class -- otherwise you have no way of knowing what they know. Job interviewers face the same problem.
  • by rwyinuse on 5/27/25, 7:24 PM

    I think AI is the perfect final ingredient to ruin the higher education system, which is already in ruins (at least over here in Finland).

    Even before AI, our governments have long wanted more grads to make the statistics look good and to suppress wages, but they don't want to pay for it. So what you get are more students, lower quality of education, and lower standards to make students graduate faster. Thanks to AI, students now don't have to meet even those low standards to pass their courses. What is left is just a huge waste of young people's time and taxpayers' money.

    There are very few degrees I'm going to recommend to my children. Most just don't provide good value for one's time anymore.

  • by rixed on 5/27/25, 3:56 AM

    AI for classical education can be an issue, but AI for inverted classes is perfect.

    Going to school to listen to a teacher for hours and take notes, sitting in a group of peers to whom you are not allowed to speak, and then going home to do some homework on your own, this whole concept is stupid and deserves to die.

    Learning lessons is the activity you should do within the comfort of your home, with the help of everything you can, including books, AIs, YouTube videos, or anything that floats your boat. Working and practicing, on the other hand, are social activities that benefit a lot from interacting with teachers and other students, and deserve to be done collectively at school.

    For inverted classes, AIs are no problem at all; on the contrary, they are very helpful.

  • by joering2 on 5/27/25, 3:08 PM

    The AI tools should be helping more than hurting. But take my example: I am in a three-year-long litigation with my soon-to-be ex-wife. She recently fired her attorneys and for two weeks used ChatGPT to write very well-worded, very strong, and very logically appealing motions, practically destroying my attorney on multiple occasions; he had to work overtime, costing me an extra $80,000 in litigation costs. And finally, once we got in front of the judge, the ex could not put two logical sentences together. The paper can defend itself on its face, but it also turned out that not a single citation she cited had anything to do with the case at hand, which ChatGPT is known for in legal circles. She admitted using the tool and only got a verbal reprimand. The judge said the majority of that "work" was legal and that she could not stop her from exercising her First Amendment rights: be it written by AI, she still had to form questions, edit responses, etc. And I wasn't able to recover a single dime, since on their face her motions did make sense, although the judge denied the majority of her ridiculous pleadings.

    It's really frightening! It's like handing over the smartest brain possible to someone who is dumb, but also giving them a very simple GUI that they can actually operate, asking good-enough questions/prompts to get smart answers. Once the public at large figures this out, I can only imagine courts being flooded with all kinds of absurd pleadings. Being a judge in the near future will most likely be the least-wanted job.

  • by randcraw on 5/27/25, 9:12 PM

    A good start for this debate would be to reconsider the term "AI", perhaps choosing a term that's more intuitive, like "automation" or "robot assistant". It's obvious that learning to automate a task is no way to learn how to do it yourself. Nor is asking a robot to do it for you.

    Students need to understand that learning to write requires the mastery of multiple distinct cognitive and organizational skills, only the last of which is to generate text that doesn't sound stupid.

    Each of writing's component tasks must be understood and explicitly addressed by the student, to wit: (1) choosing a topic to argue, and the component points to make a narrative, (2) outlining the research questions needed to answer each point, and finally, (3) choosing ONLY the relevant points that are necessary AND sufficient to the argument AND based on referenced facts, and that ONLY THEN can be threaded into a coherent logical narrative exposition that makes the intended argument and that leads to the desired conclusion.

    Only then has the student actually mastered the craft of writing an essay. If they are not held responsible for implementing each and every one of these steps in the final product, they have NOT learned how to write. Their robot did. That essay is a FAIL because the robot has earned the grade; not they. They just came along for the ride, like ballast in a sailing ship.

  • by solresol on 5/26/25, 9:23 PM

    I have been wrestling with this too. I only see two options: the no-tech university or the AI-wrangling university.

    https://solresol.substack.com/p/you-can-no-longer-set-an-und...

  • by Animats on 5/27/25, 5:01 AM

    The author is teaching a skill an LLM can do well enough to pass his exams. Is learning English composition in the literary sense now worth what it costs to learn it at a university? That's a very real question now.
  • by mrbonner on 5/26/25, 9:56 PM

    I’m all in for blue-book-style exams, in person and in a classroom. There is just too much rampant cheating, with or without LLMs.
  • by blitzar on 5/26/25, 9:39 PM

    How did we solve this when calculators came along and ruined people's ability to do mental arithmetic and use slide rules?
  • by sireat on 5/27/25, 6:14 AM

    In my programming, algorithms and data structures courses the homework assignment completion has gone from roughly 50% before LLMs to 99% this year.

    Making assignments harder would be unfair to those few students who would actually try to solve the problem without LLMs.

    So what I do is require extensive comments and, ahem, chain-of-thought reasoning in the comments, especially the WHY part.
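
    For illustration, a toy example of the kind of WHY-comments I mean (invented for this post, not from an actual assignment):

        # WHY a deque and not a list: we pop from the front on every step,
        # and list.pop(0) is O(n) while deque.popleft() is O(1).
        from collections import deque

        def bfs_order(graph, start):
            # WHY a visited set: the graph may contain cycles, and without
            # it the traversal would revisit nodes forever.
            visited = {start}
            queue = deque([start])
            order = []
            while queue:
                node = queue.popleft()
                order.append(node)
                for neighbor in graph[node]:
                    if neighbor not in visited:
                        visited.add(neighbor)
                        queue.append(neighbor)
            return order

        print(bfs_order({"a": ["b", "c"], "b": ["a"], "c": ["a"]}, "a"))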

    Then I require oral defense of the code.

    Sadly this is unfeasible for some of the large classes of 200, but works quite well when I have the luxury of teaching 20 students.

  • by keiferski on 5/27/25, 12:17 PM

    The best class I took in college was a 3-hour long 5-person discussion group on Metaphysics. It’s a shame that college costs continue to rise, because I still don’t think anything beats small class sizes and active participation.

    Ironically I have used ChatGPT in similar ways to have discussions, but it still isn’t quite the same thing as having real people to bounce ideas off of.

  • by hiAndrewQuinn on 5/27/25, 3:51 AM

    It's not that hard to save remote education accreditation. You just need a test pod.

    Take one of those soundproofed office pods, something like what https://framery.com/en/ sells. Stick a computer in it, and a couple of cameras as well. The OS only lets you open what you want to open on it. Have the AI watch the student in real time, and flag any potential cheating behaviors, like how modern AI video baby monitors watch for unsafe behaviors in the crib.

    If a $2-3000 pod sounds too expensive for you over the course of your child's education, I'm sure remote schoolers can find ways to rent pods at much cheaper scale, like a gym subscription model. If the classes you take are primarily exam-based anyway you might be able to get away with visiting it once a week or less.

    I'm surprised nobody ever brings up this idea. It's obvious you have to fight fire with fire here, unless you want to 10x the workload of any teacher who honestly cares about cheating.

  • by ranger207 on 5/27/25, 2:48 PM

    There's a few comments here about how AI will revolutionize learning because it's personalized or lets users explore or whatever. That's fundamentally missing the point. College students who are using AI aren't using it to learn better, they're using it to learn _less_. The point of writing an essay isn't the essay itself, it's the process of writing the essay: research, organization, writing, etc. The point of doing math problems isn't to get the answer, it's to _do the work_ to find the answer. If you let AI do that, you're not learning better, you're learning worse.

    Now, granted, AI can help with things students are passionate about. If you want to do gamedev you might be able to get an AI to walk you through making a game in Unity or Godot. But societally we've decided that school should be about instilling a wide variety of base knowledge that students may not care about: history, writing, calculus. The idea is that you don't know what you're going to need in your life, and it's best to have a broad foundation so that if you run into something that needs it you'll at least know where to start. 99% of the time developing CRUD apps you're not going to need to know that retrieving an item from an array is O(n), but when some sales manager goes in and adds 2 million items to the storefront and now loading a page takes 12 seconds and you can't remove all that junk because it's for an important sales meeting 30 minutes from now, it's helpful to know that you might be able to replace it with a hashmap that's O(1) instead. AI's fine for learning things you want to learn, but you _need_ to learn more than just what you _want_ to learn. If you passed your Data Structures and Algorithms class by copy/pasting all the homework questions into ChatGPT, are you going to remember what big-O notation even means in 5 years?
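
    To make that concrete, here's a toy sketch of that array-to-hashmap swap (all names invented for illustration):

        # O(n) per lookup: every page load scans the whole product list
        def find_linear(products, product_id):
            for product in products:
                if product["id"] == product_id:
                    return product

        # Scaled-down stand-in for the 2-million-item storefront
        products = [{"id": i, "name": f"item {i}"} for i in range(200_000)]

        # O(1) average per lookup: build a dict index once, then each
        # retrieval is a single hash lookup instead of a full scan
        index = {product["id"]: product for product in products}

        assert find_linear(products, 199_999) == index[199_999]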

  • by foxglacier on 5/27/25, 1:25 AM

    Schools need to rethink what the purpose of essays was in the first place and reinvent homework to suit the existence of LLMs.

    If it's to understand the material, then skip the essay writing part and have them do a traditional test. If it's to be able to write, they probably don't need that skill anymore, so skip the essay writing. If it's to get used to researching on their own, find a way to have them do that which doesn't work with LLMs. Maybe very high accuracy is required (a weak point for LLMs), or the output is not in an LLM-friendly form, or it's actually difficult to do, so the students have to be better than LLMs.

  • by downboots on 5/27/25, 3:43 AM

    The issue is trust; AI is not the issue.

    Culture, not technology.

  • by djoldman on 5/27/25, 12:34 AM

    Basically it comes to this: a sufficiently large proportion of a student's grade must come from work impossible to generate with AI, e.g. in-person testing.

    Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.

    Therefore, professors will probably have to have more frequent in-person tests so that students get immediate feedback that they're gonna fail if they don't actually learn it.

  • by nwlotz on 5/27/25, 5:23 AM

    I've found LLMs to often be a time-suck rather than supercharge my own learning. A huge part of thinking is reconsidering your initial assumptions when you start to struggle in research, mathematical problem solving, programming, whatever it may be. AI makes it really easy to go down a rabbit hole and spend hours filling in details to a question or topic that wasn't quite right to begin with.

    Basically analog thinking is still critical, and schools need to teach it. I have no issues with classrooms bringing back the blue exam books and evaluating learning quality that way.

  • by Aziell on 5/27/25, 3:25 AM

    AI definitely makes it easier for students to finish their assignments, but that’s part of the problem. It’s getting harder to tell whether they actually understand anything. What’s more worrying is how fast they’re losing the habit of thinking for themselves.

    And it’s not just in school. I see the same thing at work. People rely on AI tools so much, they stop checking if they even understand what they’re doing. It’s subtle, but over time, that effort to think just starts to fade.

  • by Footprint0521 on 5/28/25, 1:37 AM

    > An infinitely patient digital tutor that can tackle any question… You might feel like you are learning when querying a chatbot, but those intellectual gains are often illusory.

    I get the shade here (kind of?) and have seen both sides in my life, but isn’t having a tutor exactly what you need to learn?

    IMO, using it as an information butler is leagues different from a digital tutor. That’s the key— don’t bring forklifts to the gym lol

  • by lisenKaci on 5/27/25, 2:29 PM

    Maybe switching it up could work. What if learning happened at home with the use of AI and "homework" happened in class under supervision?
  • by motohagiography on 5/27/25, 11:25 AM

    if I were teaching english today, i would ask students to write essays taking the positions that an AI is not allowed to. steelman something appalling. stand up in class and debate like your life or grade depends on it and fail anyone who doesn't, and if that excludes people, maybe they don't belong in a university.

    in everything young people actually like, they train, spar, practice, compete, jam, scrimmage, solve, build, etc. the pedagogy needs to adapt and reframing it in these terms will help. calling it homework is the source of a flawed mental model that problematizes the work instead of incentivising it, and now that people have a tool to solve the problem, they're applying their intelligence to the problem.

    arguably there's no there there for the assignments either, especially for a required english credit. the institution itself is a transaction that gets them a ticket to an administrative job. what's the homework assignment going to get them that they value? well-roundedness, polish, acculturation, insight, sensitivity, taste? these are not valuable or differentiating to kids in elite institutions who know they are competing globally for jobs that are 95% concrete political maneuvering, and most of them (especially in stem) probably think the class signifiers that english classes yield are essentially corrupt anyway.

    maybe it's schadenfreude and an old class chip on my part, but what are they going to do, engage in the discourse and become public intellectuals? argue about rimbaud and voltaire over coffee, cigarettes and jazz? Some of them have higher follower counts than there were readers of the novels or articles being taught in their classes. More people read their tweets every day than have ever read a book by Chiang. AI isn't the problem, it's a forcing function and a solution. Instructors should reflect on what their institutions have really become.

  • by Sam6late on 5/27/25, 7:55 AM

    In Roman times, teaching focused on wrestling to prepare young people for life. Now, in the AI age, what to teach, and why, have once again become major questions, especially when AI can pass the bar exam and a Ph.D. is no longer a significant achievement. Critical thinking and life experiences could be the target, but would they do it?
  • by danhodgins on 5/27/25, 8:17 PM

    Fight fire with fire.

    Use AI to determine potential essay topics that are as close to 'AI-proof' as possible.

    Here is an example prompt:

    "Describe examples of possible high school essay topics where students cannot use AI engines such as perplexity or ChatGPT to help complete the assignment. In other words - AI-proof topics, assignments or projects"

  • by andoando on 5/27/25, 12:23 AM

    Perhaps we should reconsider the purpose of teaching. If one does not want to learn, why are we teaching them?
  • by bugtodiffer on 5/27/25, 7:57 AM

    Maybe just stop giving homework and instead give the kids some time to live. Fixed it for you.
  • by dsign on 5/27/25, 8:45 PM

    Caveat, I'm just armchair-commenting and I haven't thought much about this.

    After kids learn to read and do arithmetic, shouldn't we go back to apprenticeships? The system of standardized teaching and grading seems to be about to collapse, and what's the point of memorizing things when you can carry all that knowledge in your pocket? And, anyway, it doesn't stick until you have to use it for something. Plus, a single teacher seems insufficient to control all the students in a classroom (but that's nothing new; it amazes me that I was able to learn anything at all in elementary school, with all the mayhem there always was in the classroom).

    Okay, I can already see a lot of downsides to this, starting with the fact that I would be an illiterate farmer if some in my family had had a say in my education. But maybe the aggregate outcome would be better than what is coming?

  • by fallinditch on 5/26/25, 9:43 PM

    > I want my students to write unassisted because I don’t want to live in a society where people can’t compose a coherent sentence without a bot in the mix.

    Kicking against the pricks.

    It is understandable that professional educators are struggling with the AI paradigm shift, because it really is disrupting their profession.

    But this new reality is also an opportunity to rethink and improve the practice of education.

    Take the author's comment above: you can't disagree with the sentiment, but a more nuanced take is that AI tools can also help people to be better communicators, speakers, and writers. (I don't think we've seen the killer apps for this yet, but I'm sure we will soon.)

    If you want students to be good at spelling and grammar then do a quick spelling test at the start of each lesson and practice essay writing during school time with no access to computers. (Also, bring back Dictation?)

    Long term: yes, I believe we're going to see an effect on people's cognitive abilities as AI becomes increasingly integrated into our lives. This is something we as a society should grapple with as we develop new enlightened policies and teaching methods.

    You can't put the genie back in the bottle, so adapt, use AI tools wisely, think deeply about ways to improve education in this new era.

  • by BrtByte on 5/27/25, 7:29 AM

    I'm curious to see how the paper-and-pen pivot goes. There's something radical about going analog again in a world that's hurtling toward frictionless everything.
  • by kenjackson on 5/26/25, 11:52 PM

    The idea with calculators was that as a tool there are higher level questions that calculators would help you answer. A simple example is that calculators don't solve word problems, but you can use them to do the intermediate computations.

    What are the higher level questions that LLMs will help with, but for which humans are absolutely necessary? The concern I have is that this line doesn't exist -- and at the very best it is very fuzzy.

    Ironically, this higher level task for humans might be ensuring that the AIs aren't trying to get us (whatever that means, genocide, slavery, etc...).

  • by overgard on 5/26/25, 9:55 PM

    I think as a culture we've fetishized formal schooling way past its value. I mean, how much of what you "learned" in school do you actually use or remember? I'm not against education, education is very important, but I'm not sure that schooling is really the optimal route to being educated. They're related, but they're not the same.

    The reality is, if someone wants to learn something then there's very little need to cheat, and if they don't want to learn the thing but they're required to, the cheating sort of doesn't matter in the end because they won't retain or use it.

    Or to put it simpler, you can lead a horse to water but..

  • by acc_297 on 5/27/25, 7:17 PM

    In one of the last courses I took during my CS degree, we had one-on-one 10-minute Zoom calls with TAs who would ask a series of random, detailed questions about any line of code in any file of our term project. It was easy to complete if you wrote the code by hand, and I imagine it would have been difficult for students who extensively cheated.

    In terms of creative writing, I think we need to accept that any proper assessment will require a short essay written in person. Especially at the high school level, there's no reason why a 12th-grade student should be passing English class if they can't write something half-decent in 90 minutes. And it doesn't need to be pen and paper; I'm sure there are ways to lock a Chromebook into some kind of notepad software that lacks writing assistance.

    Education should not be thought of as solely a pathway to employment; it's about making sure people are competent enough to interface with most of society and to participate in our broader culture. It's literally an exercise in enlightenment: we want students to have original insights about history, culture, science, and art. It is crucial to produce people who are pleasant to be around and who are interesting to talk to; otherwise, what's the point?

  • by perdomon on 5/27/25, 5:23 PM

    It's honestly encouraging to see an educator thinking about solutions instead of wagging a finger at LLMs and technology and this new generation. Homework in its current form cannot exist AND be beneficial for the students -- educators need to evolve with the technology to work alongside it. The Google Docs idea was smart, but the return to pen and paper in the classroom is great. Good typists will hate it at first, but transcribing ideas more slowly and semi-permanently has its benefits.
  • by nateburke on 5/28/25, 12:57 AM

    Between widespread social media/short form video addiction and GPT for all homework starting in middle school, I think ASI is nearly guaranteed by virtue of the human birth/death process, with no further model improvement required.
  • by intended on 5/27/25, 12:49 PM

    I am kinda shocked that the kind of thing that gets shared on HN, unironically, is an essay on the attraction of the idea of the Butlerian Jihad. Interesting times.
  • by throwaway81523 on 5/27/25, 1:52 AM

    No mention of Danny Dunn. Tsk.

    https://www.semicolonblog.com/?p=32946

  • by enceladus06 on 5/27/25, 5:56 AM

    LLMs are here to stay and will change learning for the better (we will be full-scale disrupted in EDU 3-5 years from now). They are a self-guided tutor like never before and 100% amazing, except for when they hallucinate.

    I use them [Copilot / GPT / Khanmigo] all the time to figure out new tools and prototype workflows, check code for errors, and learn new stuff, including the material from those university classes that cost way too much.

    If universities feel threatened by AI cry me a river.

    No professor or TA was *EVER* able to explain calculus and differential equations to me, but Khanmigo and ChatGPT can. So the educational establishment can deal with this.

  • by BrenBarn on 5/27/25, 5:36 AM

    > I think there is a good case to be made for trying to restrict AI use among young people the way we try to restrict smoking, alcohol, gambling, and sex.

    I would go further than that, along two axes: it's not just AI and it's not just young people.

    An increasing proportion of our economy is following a drug dealer playbook: give people a free intro, get them hooked, then attach your siphon and begin extracting their money. The subscription-model-ization of everything is an obvious example. Another is the "blitzscaling" model of offering unsustainably low prices to drive out competition and/or get people used to using something that they would never use if they had to pay the true cost. More generally, a lot of companies are more focused on hiding costs (environmental, psychological, privacy, etc.) from their customers than on actually improving their products.

    Alcohol, gambling, and sex are things that we more or less trust adults to do sensibly and in moderation. Many people can handle that, and there are modest guardrails in place even so (e.g., rules that prevent selling alcohol to drunk people, rules that limit gambling to certain places). I would put many social media and other tech offerings more in the category of dangerous chemicals or prescription drugs or opiates (like the laudanum the article mentions). This would restrict their use, yes, but the more important part is to restrict their production and set high standards for the companies that engage in such businesses.

    Basically, you shouldn't be able to show someone --- child or adult --- an infinite scrolling video feed, or give them a GPT-style chatbot, or offer free same-day shipping, without getting some kind of permit. Those things are addictive and should be regulated like drugs.

    And the penalties for failing to do everything absolutely squeaky clean should be ruinous. The article mentions one of Facebook's AIs showing CSAM to kids. One misstep on something like that should be the end of the company, with multi-year jail terms for the executives and the venture capitalists who funded the operation. Every wealthy person investing in these kinds of things should live in constant fear that something will go wrong and they will wind up penniless in prison.

  • by wiihack on 5/27/25, 7:14 AM

    As others have already mentioned, I believe that it's mainly the curious and engaged students who will benefit greatly from AI. And for those who cheat or use AI to deceive and end up failing a written exam, well, maybe that's not such a bad thing after all...
  • by timnetworks on 5/27/25, 11:28 PM

    Stop giving boring ass essay assignments. Forest, trees.
  • by sudoaptinstall on 5/27/25, 10:49 AM

    Let me just say that I always like these types of conversations on here. Tech dorks and education make for an interesting conversation. I'll throw in my 2 cents as a HS CS teacher.

    First off, I respect the author of the article for trying pen and paper, but that’s just not an option at a lot of places. The learning management systems are often tied in through auto-grading with Google Classroom or something similar. Often you’ll need to create digital versions of everything to put in management systems like Atlas. There’s also school policy to consider, and that’s a whole nother can of worms. All that aside, though.

    The main thing that most people don't have in the forefront of their mind in this conversation is the fact that most students (or adults) don't want to learn. Most people don't want to change. Most students will do anything and everything in their power to avoid those two things. I’ve often thought about why: maybe to truly learn you need to ignore your ego and accept that there’s something you don’t know; maybe it’s a biological thing and humans are averse to spending calories on mental processes that they don’t see as a future benefit. Who knows.

    This problem runs to the core of all modern education (and probably has since the idea of mandatory mass education was called from the pits of hell a few hundred years ago). LLMs have really just brought us as a society to a place where it can no longer be ignored, because students no longer have the need to do what they see as busy work. Sadly, they don’t inherently understand how writing essays on oppressed children hiding in attics more than half a century ago helps them in their modern TikTok-filled lives.

    The other issue is that, for example, in the schools I’ve worked at since the advent of LLMs, many teachers and most of the admin all take this bright and cheery approach to LLMs. They say things like, “The students need to be shown how to do it right,” or “Help the students learn from ChatGPT.” The fact that the vast majority of students in high school just don’t care escapes them. They feel like it’s on the teachers to wield, and to help the students wield, this mighty new weapon in education. But in reality, it’s just the same war we’ve always had between predator and prey (or guard and prisoner), but I fear in this one only one side will win. The students will learn how to use chat better and the teachers will have nothing to defend against it, so they will all throw up their hands and start using chat to grade things. Before you know it, the entire education system is just chat grading work submitted by chat under the guise of, “oh, but the student turned it in so it’s theirs.”

    The only thing LLMs have done, and more than likely will ever do, in education is to make it blatantly obvious that students are not empty vessels yearning for a drink from the fountain of knowledge that can only be provided to them by the high and mighty educational institution. Those students do exist and they will always find a way to learn. I also assume that many of us here fall into that group, but those of us that do are not the majority.

    My students already complain about the garbage chat-created assignments their teachers are giving them. Entire chunks of my current school are using chat to create tests, exams, curriculum, emails, and all other forms of “teacher work”. Several teachers, those smart enough, are already using chat to grade things. The CEO of the school is pushing for every grade (1-12) to have 2 AI classes a week where students are taught how to “properly” use LLMs. It’s like watching a train wreck in slow motion.

    The only way to maintain mandatory mass education is by accepting that no one cares, finding a way to remove LLMs from the mix, or switching to Waldorf, homeschooling, or some other system better than mandatory mass education. The wealthy will be able to; the rest will suffer.

  • by goodluckchuck on 5/27/25, 12:42 PM

    1
  • by globalnode on 5/27/25, 3:23 AM

    well worth the read just for the term "broligarch"
  • by datahack on 5/27/25, 1:48 AM

    There is a tremendous gap between the way Gen X and millennial teachers see and use AI, and the way younger people are using it.

    Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It’s not a tool they pick up and put down; it’s the environment they navigate, as natural as air. To them, AI isn’t cheating—it’s just how you get things done in a world that’s always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It’s already far more than a Google replacement or a writing tool.

    This author’s fixation on “desirable difficulty” feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It’s yet another “you can’t use a calculator because you won’t always have one” — the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor (even after calculators arrived: they have in fact always been with us every day since).

    The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.

    The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?

    The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.

    Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).

    If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.

    Why are we still grading students on formulaic outputs when AI can do that faster?

    The classroom should be a lab for experimentation, not a shrine to 19th-century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked him how he’s dealing with the disruption that AI systems are currently causing to mathematics as a profession. There is an adversarial response underneath a lot of teachers’ thin veneers of “dealing with the problem of AI” that is just wrong, and such a cope.

    That obvious projection leads directly to this “adversarial” grading dynamic. The author’s chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That’s not teaching. What it is is standing in the way of civilizational progress because it doesn’t fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.

    Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.

    Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.

    The author’s call for a return to pen and paper feels like surrender dressed up as principle, and it’s ridiculously out of touch.

    It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.

    Meanwhile, students are already living in the future, where AI is as foundational as electricity.

    The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.

    Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.

    Edit: sorry for so many edits. Many typos.