Thursday, September 26, 2024

LeetCode and AI

I often do LeetCode problems for fun. The site mainly provides short coding problems that students and others use to train for the kinds of questions that come up in technical job interviews. I use it to keep my programming skills sharp, and the problems often require clever algorithms and data structures. The Daily Question is like a Wordle for recreational programmers. Try this problem, which asks you to create a data structure for sets with insert, delete, and get-random-element operations in expected constant amortized time.
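
(Spoiler ahead if you'd rather solve it yourself first.) The standard trick is to pair a dynamic array with a hash map: the array gives constant-time random access, the map gives constant-time lookup of each value's position, and deletion swaps the doomed element with the last one so the array never develops holes. Here's a minimal Python sketch; the class and method names are a plausible interface of my own choosing, not necessarily the problem's exact specification.

    import random

    class RandomizedSet:
        """Insert, remove, and get-random-element in expected O(1) amortized time."""

        def __init__(self):
            self.vals = []   # values in arbitrary order
            self.pos = {}    # value -> its index in self.vals

        def insert(self, val):
            if val in self.pos:
                return False
            self.pos[val] = len(self.vals)
            self.vals.append(val)      # amortized O(1) due to occasional resizes
            return True

        def remove(self, val):
            if val not in self.pos:
                return False
            i, last = self.pos[val], self.vals[-1]
            # Overwrite the removed slot with the last element, then shrink the list.
            self.vals[i] = last
            self.pos[last] = i
            self.vals.pop()
            del self.pos[val]
            return True

        def get_random(self):
            return random.choice(self.vals)  # expected O(1): picks a uniform index

Insert is amortized constant because list appends only occasionally trigger a resize, and get_random is constant since it just indexes into the list.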

I have to turn off GitHub Copilot; otherwise it gives me the solution before I even finish typing the function name. There are so many solutions to these problems out there, and they inevitably end up in the training sets of LLMs.

A student asked me last week why he should do LeetCode problems if AI can solve them all. I responded that doing the problems (and, more importantly, CS homework) gives you the skills to understand code and algorithms. In your future jobs you'll encounter problems that AI may not solve fully, correctly, or efficiently, and those skills will let you tackle the problems AI alone can't.

But is this the right answer as AI continues to improve? Ideally we want to educate students who transcend AI instead of being replaced by it. For that they need to fully understand programming and computing, and to be smart enough to know when and when not to outsource those skills to AI. That's the challenge for CS departments: teaching students how to use AI to make themselves better computer scientists without becoming dependent on it. It's a hard balance, especially when we can't predict AI's capabilities at the time these students graduate.

2 comments:

  1. We teach kids arithmetic even though calculators and computers can do it very well. We teach them the basics of science even though most won't actually use them professionally. In a sense, we need to know the vocabulary and the basics ourselves to help guide the machines and understand what they produce. If AGI happens and we no longer need to understand the machines at all, then that's a different stage in the human-machine interface and all bets are off.

  2. Critical thinking.

    We have to be able to tell when a result is wrong, or an algorithm is flawed. To do so requires knowing how the sausage is made, even if we’re not very good at the making.

    To do otherwise is to hand our decision making over to some other entity, never questioning the methods or results, and to live with the consequences. AI then becomes political, with little in the way of facts or analysis, and each of us voting for our favourite flavour.
