Students are using ChatGPT to do their homework. Here are some approaches I've heard of, along with my thoughts (Lance also added some comments). I have no strong opinions on the issue. Some of what I say here applies to any AI or, for that matter, to old-fashioned cheating: having your friend do the homework for you, or going to the web for the answer. (Is ChatGPT going to the web for the answer, but with a much better search tool?)
1) Ban the use of ChatGPT. That might be impossible.
2) Allow them to use ChatGPT, but they must take what it outputs and put it in their own words. This should work in an advanced course where the students want to be there and are mature, but might be problematic in, say, freshman calculus. It is also a problem if you want to see whether they can come up with the answer on their own. Lance: How do you know if the words come from the AI or the student?
3) Assign problems that ChatGPT does not do well on. I can do this in Ramsey Theory and (to some extent) in Honors Discrete Math, but (a) over time it will get harder, and (b) this may be hard to do in a standard course like calculus. Lance: Pretty much impossible for most undergrad courses. Bill: Slight disagreement: computer science is a fast-moving field where recent papers can be used to create problem sets before ChatGPT has digested the paper. Sometimes.
4) For some assignments, have them use ChatGPT and hand in both what ChatGPT output and the student's rewriting of it. If ChatGPT made a mistake, they should also report that. Lance: Many would just have the AI rewrite the AI. Bill: Alas, true.
5) Revise the curriculum so that the course is about HOW to use ChatGPT intelligently, including cases where it is wrong and the types of problems it gets wrong (do we even know that?). Lance: And that changes as models change. Bill: True, but not necessarily bad.
6) Make sure that a large percentage of the grade is based on in-class exams. I don't like this but it may become necessary. Lance: I just grade homework for completion instead of correctness, so that students don't feel they need to use AI to keep up with their classmates. Bill: I like the idea, but they may still use AI just because they are lazy.
7) The in-class exams will have variants of the homework problems, so if a student used ChatGPT the exam will test whether they actually learned the material. (I do this anyway and warn the students on day one that I will, to discourage cheating of any kind.) Lance: This works until the students wear AI glasses. Bill: I can't tell if you are kidding. More seriously, how far off is that? Lance: Not far.
8) Abolish grades. Lance thinks yes and blogged about it here. Abstractly, I wish students were mature enough not to need grades as motivation. But even putting that aside, we do need some way to tell if students are good enough for a job or for grad school. Lance: We do need that, but grades aren't playing that role well anymore. Bill: What to do about this is a topic for another blog post.
9) In a small class, have the students do the homework by meeting with you and telling you the answer, and you can ask follow-up questions. Perhaps have them do this in groups. Lance: Panos Ipeirotis used AI to automate oral questioning. Bill: Looks great, though I'll wait till they work out the bugs.
So, readers: what have been your experiences?
Using AI to conduct the exam is a fascinating idea. Possibly that is the future. I would require the students to be on site to eliminate cheating. Maybe each university will have small AI rooms where you can conduct such exams, and outside exam periods the rooms could be used for learning with the help of AI.
Regarding homework: in Hungary it never counted significantly toward your grade, because cheating and dishonesty are more widespread. I'm considering that from next semester it won't count toward the grade at all, but students would be allowed to submit it optionally to get feedback, with the requirement that the first line read: "Dear Professor, I humbly beg you to be so kind as to give feedback on my work."
I found this a pretty good read:
https://ploum.net/2026-01-19-exam-with-chatbots.html
The dude teaches "Open Source Strategies" at a French university and came up with some interesting rules for his exams.
I am with Lance: grades no longer mean anything and we should figure out how to abolish them as soon as possible. I teach algorithms at a public R1 university. In undergrad classes, AI cheating is so rampant that I see no coherent way to assign homework. (Side note: things are quite dire; it seems that students have completely lost the ability to struggle with hard problems without help. I find it deeply depressing.) On the other hand, nobody completes homework if it is optional, and exams demonstrated that very few students absorbed the material from lectures alone. I do not have the resources to do oral testing, but it is a good option for those who do. I have tried group problem sessions graded on attendance, but these did not work great; one hour a week just isn't enough time to put serious thought into problems.
Ideally we quickly agree that grades/diplomas mean nothing, that the burden of screening whether a graduate is a good fit for a job or grad school falls on the employer or potential advisor, and that the only purpose of grades is for students to get feedback on their performance if they want to improve. Then there is no reason to take a class unless one truly wants to learn the material, and we can return to using homework as a forcing function for learning.
Personally, I think (4) is currently the best solution. I'm a new professor, and I'm trying this out this semester. I also made problem-set grades completion-based (but with grade-like feedback) to encourage students to attempt the problems on their own as practice for the exams.
AI glasses will probably be easy to monitor, in the short term at least, and I think longer term there may be bigger fish to fry. Possibly larger changes to classrooms and teaching styles (?). I'm not entirely sure.
My own colleagues don't agree on whether using AI is cheating. Many don't think it is. Everyone uses AI. We professors use it. A lot. The students use it. A lot. The use is rampant. There is no solution that doesn't accept the existence of AI. I, for one, don't grade homework anymore. It is useless; I'm just grading AI. And if I tried to enforce a strict "no AI" policy, I would fail *everyone*, because *everyone* uses it. The first generation of students that used AI used it with a bit of shame. Now it is completely normal. They use it in front of you. There is no shame at all. It is part of life. I have no solution for this. Actually, I don't even know if there is a problem. It is so widespread that I don't know if there is anything to be done except Lance's no-grades idea, and that is it.
I'm confused at this being a seemingly "deep" question. Schools have taught basic arithmetic for decades while pocket calculators are so cheap they are given away as free gifts. What has qualitatively changed since then?
They are not comparable. With AI, the student enters the assignment into the system and submits the output for grading. There are even web browser plugins that make this faster. In less than a minute, without even reading a single word, the student submits the work. You don't need to read the questions and you don't need to read the answers. And for any topic at all.
I suggest you teach assuming the students are there to learn. Students were cheating long before LLMs existed. If practical, discourage students from cheating. But don't let the cheaters interfere with teaching the non-cheaters.
Give them an extra-credit research project that will earn them an A in the class, but on which an LLM is unlikely to get far.
Exam weightings have crept up and are now 80-84% of the final letter grade. If students GPT all the homework, they find themselves unprepared for the exams, during which they have no internet access. I find this a better strategy than trying to catch everyone who GPTs the homework like they did in high school.
ReplyDeleteStudents are getting both better and worse, its bimodal and growing apart. There are a lot more 100s, but also a lot more 10s and 20s, when the material hasn't changed much.
Regarding (7): Toronto already has a big problem with companies that offer cheating services, giving students a hidden camera and earpiece and feeding them the exam answers. For example, https://www.reddit.com/r/UofT/comments/1jkv6vz/do_not_cheat_especially_with_spy_tech_at_your/
and
https://hive.utsc.utoronto.ca/public/dean/news%20&%20initiatives/Mitigating_Coordainted_Cheating_Exams_May2023.pdf
One on-topic and one tangential comment; in both cases, the overarching idea is to embrace and integrate AI as much as possible in teaching.
(1) Rethink homework: The instructor writes detailed prompts that configure an AI to act as an interactive homework tutor: brainstorming key definitions, concepts, clever ideas, and so on. Students solve the problems the AI offers, typically starting with the easiest ones and moving on to harder ones, using the AI as a thinking companion and interactive verifier. This kind of interactive validation of work is impossible with human TAs and instructors, and can help students get past super-basic misunderstandings of definitions and concepts (all too common in CS theory, but not exclusive to it). "Base prompts" can be widely shared across various slices (all curricula, all CS curricula, all TCS curricula, all operating systems curricula, etc.) and adjusted by individuals as needed; a sketch of one appears below. The goal is to help students learn. Whether they learn or not is entirely up to them. Universities can and should get out of the evaluation and credentialing business and leave it to downstream entities (employers, grad-school admissions officers, and so on).
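To make the "base prompt" idea concrete, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and course details are illustrative assumptions on my part, not anything prescribed in the comment above.

```python
# Minimal sketch of a shareable "base prompt" for an AI homework tutor.
# Assumptions: the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment; the model name, prompt text,
# and course are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# A base prompt an instructor could share broadly and adjust per course.
BASE_PROMPT = """You are an interactive homework tutor for an undergraduate
discrete math course. Offer practice problems one at a time, easiest first.
Never reveal a full solution; instead, check the student's reasoning step
by step, flag misused definitions, and give graduated hints on request."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model the course standardizes on
    messages=[
        {"role": "system", "content": BASE_PROMPT},
        {"role": "user", "content": "Give me a practice problem on induction."},
    ],
)
print(response.choices[0].message.content)
```

The design point is that only the system prompt varies across courses; the surrounding scaffolding can be shared at whatever slice (all curricula, all CS, all TCS) makes sense.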
(2) The fact that (at least for now) different "levels of AI" exist (driven primarily by "thinking budgets") and are accessible at different subscription tiers risks leading to severe inequality in learning outcomes. Universities should negotiate commitments from the foundation-model companies to donate their best models, with significant thinking budgets, free (or at low negotiated cost) to all students. (Hopefully we don't end up with rich universities negotiating these deals for their students while poorer public universities lag behind; that would be another form of inequality.) National governments should get involved very seriously, or else the kind of AI accessible to students in Nigeria and Norway might differ dramatically. It is a great moment for us to start eliminating global inequality in access to knowledge, perhaps our best bet for future human prosperity. (I did say I was going a bit tangential here, but I think this is really important.)