Monday, January 19, 2026

What to do about students using ChatGPT to do their homework?

Students are using ChatGPT to do their homework. Here are things I've heard and some of my thoughts on the issue (Lance also added some comments). I have no strong opinions on the issue. Some of what I say here applies to any AI or, for that matter, to old-fashioned cheating by having your friend do the homework for you or by going to the web for the answer. (Is ChatGPT just going to the web for the answer, but with a much better search tool?)

1) Ban the use of ChatGPT.  That might be impossible. 

2) Allow them to use ChatGPT but they must take what it outputs and put it in their own words. This should work in an advanced course where the students want to be there and are mature, but might be problematic in, say, Freshman calculus. This is also a problem if you want to see if they can come up with the answer on their own. Lance: How do you know if the words come from AI or the student?

3) Assign problems that ChatGPT does not do well on. I can do this in Ramsey Theory and (to some extent) in Honors Discrete Math, but (a) over time it will get harder, and (b) this may be hard to do in a standard course like calculus. Lance: Pretty much impossible for most undergrad courses. Bill: Slight disagreement: computer science is a fast-moving field where recent papers can be used to get problem sets before ChatGPT has digested the paper. Sometimes.

4) For some assignments, have them use ChatGPT and hand in both what ChatGPT output and the student's rewriting of it. If ChatGPT made a mistake, they should also report that. Lance: Many would just have AI rewrite the AI. Bill: Alas, true.

5) Revise the curriculum so that the course is about HOW to use ChatGPT intelligently, including cases where it is wrong and types of problems where it is wrong (do we know that?). Lance: And that changes as models change. Bill: True, but not necessarily bad.

6) Make sure that a large percentage of the grade is based on in-class exams. I don't like this but it may become needed. Lance: I just grade homework for completion instead of correctness so that students don't feel they need to use AI to keep up with their classmates. Bill: I like the idea, but they may still use AI just because they are lazy.

7) The in-class exams will have variants of the homework so if a student used ChatGPT this will test if they actually learned the material (I do this anyway and warn the students on day one that I will do this to discourage cheating of any kind.) Lance: This works until the students wear AI glasses. Bill: I can't tell if you are kidding. More seriously, how far off is that? Lance: Not far.

8) Abolish Grades. Lance thinks yes and blogged about it here. Abstractly I wish students were mature enough not to need grades as a motivation. But even putting that aside, we do need some way to tell if students are good enough for a job or for grad school. Lance: We do need that, but grades aren't playing that role well anymore. Bill: What to do about this is a topic for another blog post.

9) In a small class, have the students do the homework by meeting with them and having them tell you the answer, and you can ask follow-up questions. Perhaps have them do this in groups. Lance: Panos Ipeirotis used AI to automate oral questioning. Bill: Looks great, though I'll wait till they work out the bugs.

So, readers: what have been your experiences?

22 comments:

  1. It is a fascinating idea to use AI to conduct the exam. Possibly that is the future. I would require the students to be on site to eliminate cheating. Maybe each university will have small AI rooms where you can conduct such exams, but outside exam period use them to learn with the help of AI.

    Regarding homeworks, in Hungary they never counted significantly towards your grade, because cheating and dishonesty are more widespread. I'm considering that from next semester they won't count at all to the grades, but students would be allowed to submit them optionally to get feedback, with the requirement that the first line should read: "Dear Professor, I humbly beg you to be so kind as to give feedback on my work."

  2. I found this a pretty good read:

    https://ploum.net/2026-01-19-exam-with-chatbots.html

    The dude teaches "Open Source Strategies" at a French university and came up with some interesting rules for his exams.

  3. I am with Lance: grades no longer mean anything and we should figure out how to abolish them as soon as possible. I teach algorithms at a public R1 university. In undergrad classes, AI cheating is so rampant that I just see no coherent way to assign homework. (Side note: things are quite dire; it seems that students have completely lost the ability to struggle with hard problems without help. I find it deeply depressing.) On the other hand, nobody completes homework if it is optional, and exams demonstrated that very few students absorbed the material from lectures alone. I do not have the resources to do oral testing, but it is a good option for those who do. I have tried group problem sessions graded on attendance, but these did not work great; one hour a week just isn't enough time to put serious thought into problems.

    Ideally we would quickly agree that grades/diplomas mean nothing, that the burden of screening whether a graduate is a good fit for a job/grad school falls on the employer/potential advisor, and that the only purpose of grades is for students to get feedback on their performance if they want to improve. Then there would be no reason to take a class unless one truly wants to learn the material, and we could return to using homework as a forcing function for learning.

  4. I think (4) is currently the best solution personally. I'm a new professor, and I'm trying this out this semester. I also made problem set grades completion-based (but with grade-like feedback) to try to encourage students to try them on their own to practice for the exams.

    AI glasses probably will be easy to monitor in the short-term at least, and I think longer-term there may be bigger fish to fry. Possibly larger changes to classrooms and teaching styles (?). I'm not entirely sure.

  5. My own colleagues don't agree on whether using AI is cheating or not. Many don't think it is. Everyone uses AI. We professors use it. A lot. The students use it. A lot. The use is rampant. There is no solution that doesn't accept the existence of AI. I, for one, don't grade homework anymore. It is useless. I'm just grading AI. And if I tried to enforce a strict "no AI" policy, I would fail *everyone*, because *everyone* uses it. The first generation of students that used AI used it with a bit of shame. Now it is completely normal. They use it in front of you. There is no shame at all. It is part of life. I have no solution for this. Actually, I don't even know if there is a problem. It is so widespread that I don't know if there is anything to be done but Lance's no-grades idea, and that is it.

    1. As I've said before, in-class closed-book, closed-device tests for 90% of the grade worked fine in the days when students were copying the smart kids' homework, and will continue to work fine when the kids look up the answers on the net (whether by Google or AI).

      There's absolutely no reason to give a passing grade to someone who can't answer basic questions and do straightforward problems about the current level of material. If a student can't do that, they won't be able to understand the questions in the next-level class, let alone function in the real world after they graduate.

    2. Indeed, in-class closed-book, closed-device tests work fine. The grade gap between them and homework assignments is abysmal and very discouraging, though. In fact, it makes me think that maybe the in-class tests are outdated. Maybe in this new world, students (and people in general) shouldn't be subject to tests without an AI companion. Everyone uses AI outside anyway.

    3. "Everyone uses AI outside anyway." I don't use it because I want what I do to be correct. Most of my work colleagues don't use it either. If they do use it, they use it for simple tasks where they can check the answer.

  6. I'm confused at this being a seemingly "deep" question. Schools have taught basic arithmetic for decades while pocket calculators are so cheap they are given away as free gifts. What has qualitatively changed since then?

    1. They are not comparable. With AI, the student enters the assignment in the system and submits the output for grading. There are even web-browser plugins that make this faster. In less than a minute, without even reading a single word, the student submits the work. You don't need to read the questions and you don't need to read the answers. And for any topic at all.

  7. I suggest you teach assuming the students are there to learn. Students have been cheating long before LLMs existed. If practical, discourage students from cheating. But, don't let the cheaters interfere with teaching the non-cheaters.

  8. Give them an extra-credit research project that will earn them an A in the class, but that an LLM is unlikely to get far on.

  9. Exam weightings have crept up and are now 80-84% of the final letter grade. If they GPT all the homework, they find themselves unprepared for the exams, during which they won't have any internet access. I find this a better strategy than trying to catch everyone who GPTs the homework like they did in high school.

    Students are getting both better and worse; it's bimodal and growing apart. There are a lot more 100s, but also a lot more 10s and 20s, when the material hasn't changed much.

    wrt 7, Toronto already has a big problem with companies who offer cheating services, where they give students a secret camera and earpiece and tell them the exam answers. For example, https://www.reddit.com/r/UofT/comments/1jkv6vz/do_not_cheat_especially_with_spy_tech_at_your/

    and

    https://hive.utsc.utoronto.ca/public/dean/news%20&%20initiatives/Mitigating_Coordainted_Cheating_Exams_May2023.pdf

    1. wrt 7, in Budapest this used to be an issue at some unis 20 years ago. Then the company went out of business, and on their website they complained that they had to shut down, because many students stole their equipment, and they were shocked to see such dishonesty... :)

  10. One on-topic and one tangential comment; in both cases, the overarching idea is to embrace and integrate AI as much as possible in teaching.

    (1) Re-think homework: The instructor writes detailed prompts for an AI to serve as an interactive tutor for students -- brainstorm key definitions, concepts, clever ideas, etc., and configure the AI to act as an interactive homework-based tutor. Students solve the problems that the AI offers, typically starting with the easiest ones and going on to harder ones, using the AI as a thinking companion and also as an interactive verifier. This kind of interactive validation of work is impossible with human TAs and instructors, and can help students get past super-basic misunderstandings of definitions and concepts (all too common in CS theory, but not exclusive to it). "Base prompts" can be widely shared across various slices -- all curricula, all CS curricula, all TCS curricula, all Operating Systems curricula, etc. -- and can be adjusted by individuals as needed. The goal is to help students learn. Whether they learn or not is entirely up to them. Universities can and should get out of the evaluation and credentialing business -- leave it to downstream entities (employers, grad-school admissions officers, and so on).

    (2) The fact that (at least for now) different "levels of AI" exist (driven primarily by "thinking budgets") and are accessible at different subscription levels risks leading to severe inequality in learning outcomes. Universities should negotiate commitments from the foundation-model companies to donate their best models, with significant thinking budgets, for free (or at low negotiated costs) to all students. (Hopefully we don't end up with rich universities negotiating these deals for their students while poorer public universities lag behind -- that would be another form of inequality.) National governments should get involved in this very seriously, else the kind of AI accessible to students in Nigeria and Norway might differ dramatically; it's a great moment for us to start eliminating global inequality in access to knowledge, perhaps our best bet toward future human prosperity. (I did say I was going a bit tangential here, but I think this is really important.)

    1. I'm confused that people who are professors/instructors think they can't give grades anymore just because students have gotten better at not doing their homework.

      The company I work for gives a couple of tests to people who are applying for a job. If they don't score high enough, we don't even consider them. If we do consider them, we talk to them to figure out if they are good. If they seem to be good, we make them an offer. This isn't a perfect process, but we usually seem to get this approximately right (of course, I only get more data on the ones we hire).

      If we can figure out whether we want to hire someone, someone in a university should be able to figure out whether the student deserves an A, B, C, D, or F in a course.

    2. The difference is that in school, the instructor needs to assess and label EVERY student. When hiring, a company only needs to validate that ONE or a FEW are qualified to fill the role.

  11. Put stuff in the homework so if they put it into chatgpt, chatgpt just says bad words and shows them pictures of cow udders, that'll show those morons!

  12. For me, there are two significant issues with any strategy that allows or incorporates AI into teaching.

    The first is that learning is most effective through effort: struggling with a hard problem and finding many bad or incorrect solutions before finding the correct one. Even if a student does a good job of understanding an AI solution, or of fixing the flaws in an AI solution, they are not going to end up with as good an understanding.

    The second is that AI is (currently) not *that* good. Yes, it's good enough to get an A on most undergrad assignments, and yes it always gives answers that sound thorough and thought out (though under scrutiny, they rarely are). But it does not do a good job of setting up context, or building up concepts towards the main idea. I don't know if I've ever seen AI give an explanation with content as good as an average textbook, or even a decent off-the-cuff human response. At worst, it's simply wrong.

    I think students who use AI are fooling themselves. They read an AI-generated cliff-notes version of the topic, then read over the AI-generated solution and think "makes sense to me!" That is not sufficient to reach the same standard as students who use traditional methods.

    Which is why in my opinion, while it's obviously impossible to stop AI use, it's our responsibility to discourage it. Yes, AI has its uses; but the classroom isn't one of them, at least in its current form. We should be upfront with students about that. And in this context, I think it's usually a bad idea to try to incorporate AI into course materials. We should tell students that they shouldn't use AI in the course, tell them why they shouldn't, and explain why this is entirely consistent with AI being useful in real-world applications.

  13. Can't you ask AI to help generate the questions? Or is that cheating too? Then you could have them turn in more handwritten assignments, just to get the practice of writing the answers even if they didn't get them themselves. But maybe that's only for advanced classes.

  14. In the AI age you should rethink why you do things and adjust.

    I give homework to students to have them practice and learn. I cannot force them to learn. I can motivate them by saying that if you do the assignments you will do well in the exams, because the questions will be similar.

    I don't make my exams to check who is smarter. I intend my exams to see if they have learned.

    I have been allowing my students to talk and work on assignments together and get help with them from anywhere for the past 20 years, as long as they are honest and acknowledge them in the submission. I want to set the incentives so that honesty is not penalized.

    ChatGPT has had no impact on my evaluation of students; my system still works pretty well.

    I also try to make sure students don't worry too much about grades but focus on learning.

    You might think that I give As and Bs to everyone, but interestingly my class average for my undergrad courses tends to still be ~67, with a reasonable distribution.

    If you focus on how you can evaluate students and spend time on designing it well, things can work, but it requires you to really spend time on this stuff, not just do the least amount of work needed to give grades.

    I see that I have a duty to create a positive and useful experience for my students, and they generally appreciate that.

    Lance says he wishes students were more mature. In my experience, if we treat them as mature and with respect, they act as mature people; if we treat them as kids, they act like kids.

  15. **** is hitting the fan, literally!
    https://www.nature.com/articles/d41586-025-04064-7 "the data were permanently lost and could not be recovered"

    AI charlatans have turned the tables on us, producing humans who are willingly giving up their ability to "think" (the faculty that we incorrectly claim as our sole inviolable asset).

    We love to make fun of the Dodo because this avian species gave up its flying abilities and evolved complete defenselessness against predators; humans of course are no different from other living beings, we eagerly grasp at whatever shortcut is made available to us exploiting whatever we can get our hands on.

    Que sera sera, no one cares.
