Monday, April 10, 2023

Complexity and Explainable AI

About six years ago, I posted on why it was important to understand machine learning, mentioning trust, fairness, security and causality. But then I brought in complexity.

What if P = NP? Would that help? Actually, it would make things worse. If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for, say, matching or traveling salesman, but you would have no clue why that circuit works.
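To make that concrete, here is a toy sketch of what "find the smallest possible circuit" means. A brute-force search stands in for the hypothetical fast NP algorithm (it is exponential, of course); the XOR target, the NOT/AND/OR gate basis, and all the names are my own illustrative choices. The search returns a minimal formula but offers no insight into why it works.

```python
from itertools import product

# Toy truth table for a function we "learned": XOR on two bits.
# (The target and the NOT/AND/OR gate basis are illustrative choices.)
TARGET = {(a, b): a ^ b for a, b in product([0, 1], repeat=2)}

def exprs(size):
    """Yield (function, description) for every formula with exactly
    `size` gates over the inputs a, b, in the basis NOT, AND, OR."""
    if size == 0:
        yield (lambda a, b: a, "a")
        yield (lambda a, b: b, "b")
        return
    # A NOT gate on top of any formula with one fewer gate.
    for f, fd in exprs(size - 1):
        yield (lambda a, b, f=f: 1 - f(a, b), f"(NOT {fd})")
    # A binary gate on top, splitting the remaining gates between the sides.
    for left in range(size):
        rights = list(exprs(size - 1 - left))
        for f, fd in exprs(left):
            for g, gd in rights:
                yield (lambda a, b, f=f, g=g: f(a, b) & g(a, b), f"({fd} AND {gd})")
                yield (lambda a, b, f=f, g=g: f(a, b) | g(a, b), f"({fd} OR {gd})")

def smallest_circuit(target):
    """Try formulas in order of size and return the first one that matches
    the whole truth table. The answer is minimal in gate count, but nothing
    in the search tells you why it works."""
    size = 0
    while True:
        for fn, desc in exprs(size):
            if all(fn(a, b) == v for (a, b), v in target.items()):
                return fn, desc
        size += 1

fn, desc = smallest_circuit(TARGET)
print(desc)
```

The point of the exercise: the output is provably the smallest formula in this basis, yet the procedure that found it gives no explanation beyond "it matched the table."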

Ryan Williams countered:

If P=NP you could also find the shortest proof in your favorite formal system that the smallest possible circuit does what you wanted it to do, as well as any other claim you are wondering that may be true about the circuit. That proof might not be comprehensible to you, but it could be written in a format where proof assistant software such as HOL or Coq could parse it and convince you it is correct. So if P=NP (with feasible low constants) I think that would definitely help.

So if P = NP, maybe you couldn't understand how the circuit works, but you could answer any question about how it works.
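Ryan's point, that a formal proof about the circuit need not be enlightening to be checkable, can be illustrated in miniature. Here is a sketch in Lean 4 (rather than the HOL or Coq he mentions), proving that a toy formula of my own choosing computes XOR; the proof is pure case-checking and convinces without explaining:

```lean
-- Claim: the formula (a OR b) AND NOT (a AND b) computes XOR on booleans.
-- The proof just checks all four cases; a proof assistant verifies it,
-- but reading it teaches you nothing about why the formula works.
example (a b : Bool) : ((a || b) && !(a && b)) = (a != b) := by
  cases a <;> cases b <;> rfl
```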

While we don't live in a world where P = NP, we do live in a world where we have very strong learning algorithms that produce programs we can't understand. They are now powerful enough that we can apply Ryan's idea and use the algorithms themselves to explain the decisions they make.

For example, I tried using Google Translate on a Hungarian obituary of Vera Sós. Hungarian does not use gendered pronouns, and the translation incorrectly produced male pronouns. I then fed the translation into ChatGPT, which translated it with the correct female pronouns.

You can speculate about why Google Translate uses male pronouns, but you can't ask it why. You can, however, ask ChatGPT.

I used female pronouns in the translation because Vera T. Sós is a female name. Vera is a feminine given name, and since the context of the original text is about her life and accomplishments, it is appropriate to use female pronouns in the English translation to maintain clarity and accurately convey information about her.

So if you want to understand how ML works, perhaps we should seek stronger algorithms, not weaker ones: algorithms that can explain themselves. As they say, a dull knife is more dangerous than a sharp one.

5 comments:

  1. So, stronger should imply a guarantee of being explainable?

    ReplyDelete
    Replies
    1. Guarantee is a strong word, but yes, with the capability to explain its actions.

      Delete
  2. A commemoration will be held on Thursday: https://www.renyi.hu/hu/esemenyek-v1/megemlekezes-sos-verarol

    ReplyDelete
  3. How do you know that ChatGPT's "explanation" is really that, rather than merely more statistical language parroting?

    ReplyDelete
    Replies
    1. There is some circular reasoning here, and the best you could hope for is an overly simplified view of its reasoning. Still, when I have asked ChatGPT to explain its actions, it does seem to make a good effort at it.

      Delete