Thursday, September 26, 2024

LeetCode and AI

I often do LeetCode problems for fun. This site mainly provides short coding problems for students and others training for the kinds of questions that come up in technical job interviews. I use the site to keep my programming skills sharp, and the problems often require clever algorithms and data structures. The Daily Question is like a Wordle for recreational programmers. Try this problem, which asks you to create a data structure for sets with insert, delete, and get-random-element operations in expected constant amortized time.

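For the curious, here's a minimal Python sketch of the standard approach to that problem: keep the elements in a dynamic array so getting a random element is a uniform array lookup, plus a hash map from value to array index so delete can swap-and-pop in constant time. The class and method names below are my own choice, not dictated by the problem.

```python
import random

class RandomizedSet:
    """Set supporting insert, delete, and get-random-element,
    each in expected constant amortized time."""

    def __init__(self):
        self.vals = []   # elements, in arbitrary order
        self.pos = {}    # value -> its index in self.vals

    def insert(self, val):
        if val in self.pos:
            return False
        self.pos[val] = len(self.vals)
        self.vals.append(val)
        return True

    def remove(self, val):
        if val not in self.pos:
            return False
        # Overwrite val's slot with the last element, then pop the tail.
        i, last = self.pos[val], self.vals[-1]
        self.vals[i] = last
        self.pos[last] = i
        self.vals.pop()
        del self.pos[val]
        return True

    def get_random(self):
        # Uniform over the current elements (assumes the set is non-empty).
        return random.choice(self.vals)
```

The array is what makes uniform sampling possible, since a hash map alone can't pick a random element in constant time; "amortized" comes from the array's append/pop, and "expected" from the hash-table operations.
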
I have to turn off GitHub Copilot; otherwise it will give me the solution before I even finish typing the function name. There are so many solutions to these problems out there, and they're in the training sets for LLMs.

A student asked me last week why he should do LeetCode problems if AI can solve them all. I responded that doing the problems (and, more importantly, CS homework) gives you the skills to understand code and algorithms. In your future jobs you'll encounter problems that AI may not solve fully, correctly, or efficiently, and having those skills will let you solve the kinds of problems AI alone can't tackle.

But is this the right answer as AI continues to improve? Ideally we want to produce students who transcend AI instead of being replaced by it. For that they need to fully understand programming and computing, and be smart enough to know when and when not to outsource those skills to AI. That's the challenge for CS departments: teaching students how to use AI to make themselves better computer scientists without becoming dependent on it. It's a hard balance with a technology whose capabilities we can't predict at the time these students graduate.

7 comments:

  1. We teach kids arithmetic even though calculators and computers can do it very well. We teach them the basics of science even though most won't use them professionally. In a sense, we need to know the vocabulary and the basics ourselves to help guide the machines and understand what they produce. If AGI happens and we don't even need to understand the machines, then that is a different stage in the human-machine interface and all bets are off.

  2. Critical thinking.

    We have to be able to tell when a result is wrong, or an algorithm is flawed. To do so requires knowing how the sausage is made, even if we’re not very good at the making.

    To do otherwise is to give our decision making over to some other entity, never questioning the methods or results, and live with the consequences. AI then becomes political, with little in the way of facts or analysis, and each of us voting for our favourite flavour.

  3. While you are talking about your spare time ... with Comiskey Park so close to your office, are the White Sox still your team, as they were 20 years ago? Any thoughts on their recent record-breaking form?

  4. Strictly speaking, the recommended problem does not ask for "expected constant amortized time" but "average O(1) complexity" per operation. It may not be easy to guess what the average is supposed to be taken over. This makes the problem challenging.

  5. "There are so many solutions to these problems out there and in the training sets for LLMs."

    I think that gets to the key point: to what extent are the solutions out there actually solutions to real upcoming programming problems, and not just interview test problems?

    I have a couple of programming projects in mind, and it's inconceivable that an LLM could be of use: stuff for advanced study of Japanese and Japanese literature.

    Gary Marcus claims that while a lot of programmers like using an LLM when programming, the academic studies are showing a less rosy picture: LLMs are more useful to new programmers, and use of LLMs is associated with more bugs and security problems.

    https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer

    Replies
    1. This is Gary Marcus being Gary Marcus, always looking for the negative. How would he explain this study of productivity improvements of programmers using LLMs?
