Monday, April 17, 2017

Understanding Machine Learning

Today Georgia Tech held the launch event for our new Machine Learning Center. A panel discussion covered challenges in machine learning across the whole university, but one common theme emerged: many machine learning algorithms seem to work very well, but we don't know why. If you look at a neural net (basically a weighted circuit of threshold gates) trained for, say, voice recognition, it's very hard to understand why it makes the choices it makes. Obfuscation at its finest.
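
To make that opacity concrete, here's a minimal sketch of the kind of circuit described above: two layers of threshold gates. The weights here are made up for illustration (a trained network would have thousands or millions of them), but the point survives: the network computes something, and staring at the numbers gives little insight into why.

```python
# A tiny two-layer circuit of threshold gates.
# The weights are invented for illustration; a trained network
# would have vastly more of them, learned rather than chosen.

def threshold_gate(weights, bias, inputs):
    """Fire (output 1) if the weighted sum of inputs exceeds the bias."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > bias else 0

def network(x):
    # Hidden layer: three threshold gates over the three inputs.
    hidden = [
        threshold_gate([0.7, -1.2, 0.4], 0.1, x),
        threshold_gate([-0.3, 0.9, 1.1], 0.5, x),
        threshold_gate([1.5, 0.2, -0.8], -0.2, x),
    ]
    # Output layer: one threshold gate over the hidden outputs.
    return threshold_gate([1.0, -0.6, 0.8], 0.4, hidden)

print(network([1, 0, 1]))  # Outputs 0 or 1 -- but *why* is opaque.
```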

Why should we care? A few reasons:

  • Trust: How do we know that the neural net is acting correctly? Beyond checking input/output pairs, we can't do much other analysis. Different applications require different levels of trust: it's okay if Netflix makes a bad movie recommendation, but if a self-driving car makes a mistake...
  • Fairness: Examples abound of algorithms trained on data that learn the intended or unintended biases in that data. If you don't understand the program, how do you figure out what biases it has picked up?
  • Security: If you use machine learning to monitor systems for security, you won't know what exploits still might exist, especially if your adversary is adaptive. If you could understand the code, you could spot and fix security holes. Of course, if the adversary had the code, they might find exploits of their own.
  • Cause and Effect: Right now, at best, you can check that a machine learning algorithm's output correlates with the kind of outcome you desire. Understanding the code might help us understand the causality in the data, leading to better science and medicine.
What if P = NP? Would that help? Actually it would make things worse. If you had a quick algorithm for NP-complete problems, you could use it to find the smallest possible circuit for, say, matching or traveling salesman, but you would have no clue why that circuit works. A toy version of that search appears below.
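
Here is a toy sketch (my own illustration) of what that search looks like: enumerate formulas over AND, OR and NOT and return a small one that matches a target truth table on every input. The brute-force version below takes exponential time; a fast algorithm for NP-complete problems would, roughly, let you run such a search quickly even at scale. Either way, the circuit you get back arrives with no explanation attached.

```python
from itertools import product

# Toy circuit minimization: search for a small formula over AND, OR, NOT
# agreeing with a target truth table. Brute force here is exponential;
# with P = NP the search could be made fast, but the winning formula
# would still come with no account of why it works.
# (The greedy merge finds a small formula, not necessarily the smallest.)

def small_formula(target, n):
    """target: tuple of 0/1 outputs, one per assignment to n inputs
    (assignments enumerated in itertools.product order)."""
    assignments = list(product([0, 1], repeat=n))
    # Map each truth table found so far to (formula text, formula size).
    known = {tuple(a[i] for a in assignments): (f"x{i}", 1) for i in range(n)}
    while target not in known:
        found = {}
        for t1, (f1, s1) in known.items():
            found.setdefault(tuple(1 - b for b in t1), (f"~{f1}", s1 + 1))
            for t2, (f2, s2) in known.items():
                found.setdefault(tuple(a & b for a, b in zip(t1, t2)),
                                 (f"({f1} & {f2})", s1 + s2 + 1))
                found.setdefault(tuple(a | b for a, b in zip(t1, t2)),
                                 (f"({f1} | {f2})", s1 + s2 + 1))
        for t, (f, s) in found.items():
            if t not in known or s < known[t][1]:
                known[t] = (f, s)
    return known[target][0]

# XOR on two inputs, outputs for (0,0), (0,1), (1,0), (1,1):
print(small_formula((0, 1, 1, 0), 2))  # e.g. ((x0 & ~x1) | (~x0 & x1))
```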

Sometimes I feel we put too much pressure on the machines. When we deal with humans, for example when we hire people, we have to trust them, assume they are fair and play by the rules, all without understanding their internal thinking mechanisms. And we're a long way from figuring out cause and effect in people.

2 comments:

  1. If P=NP you could also find the shortest proof in your favorite formal system that the smallest possible circuit does what you wanted it to do, as well as a proof of any other claim about the circuit you suspect may be true. That proof might not be comprehensible to you, but it could be written in a format where proof assistant software such as HOL or Coq could parse it and convince you it is correct. So if P=NP (with feasibly low constants) I think that would definitely help.

  2. To your list of reasons to care, I would add another: that the problem of understanding which learning tasks are feasible for these methods is a fascinating and important one from a purely intellectual point of view.
