Thursday, September 29, 2022

Machine Learning and Complexity

 

Schloss Dagstuhl by Monet by Dall-E

At Dagstuhl earlier this month, I hung out for a little bit with the participants of the other seminar, Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century. Knowledge graphs are what you would expect them to be: nodes are entities like "Berlin" and "Germany", connected by directed edges with labels like "capital". Think of a knowledge graph with hundreds of millions of nodes and how it could help answer queries about the world. These secondary workshops are shorter and focus on creating a new vision, in this case how to maximize the importance of knowledge graphs in an increasingly ML-focused world.
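To make that concrete, here is a minimal sketch in Python of a knowledge graph as (subject, relation, object) triples with a simple lookup-based query. The triples and the `query` helper are illustrative assumptions for this post, not any real system's API; an actual knowledge graph indexes hundreds of millions of such triples and supports far richer query languages.

```python
# A minimal sketch of a knowledge graph: entity nodes joined by
# directed, labeled edges, stored as (subject, relation, object)
# triples. Illustrative data and helper, not a real system's API.

from collections import defaultdict

triples = [
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "member_of", "European Union"),
    ("Paris", "capital_of", "France"),
]

# Index the triples by (subject, relation) for fast lookup.
index = defaultdict(list)
for subj, rel, obj in triples:
    index[(subj, rel)].append(obj)

def query(subject, relation):
    """Answer 'what does <subject> <relation>?' by edge lookup."""
    return index[(subject, relation)]

print(query("Berlin", "capital_of"))  # ['Germany']
```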

Perhaps we need such a visioning seminar for complexity. While we often get lost in the mathematical questions and techniques of our field, computational complexity is designed to understand the difficulty of solving various problems. Machine learning and advances in optimization should be changing that conversation. If you imagine a world where P = NP (and I did exactly that in chapter 2 of my 2013 book), much of what you'd imagine is starting to happen anyway. ML does fail to break cryptography, but then again, isn't that the best of all possible worlds?

Look at what Scott Aaronson said back in 2006:

If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. 

If I can be a Monet, can Mozart be far behind? ML trading at some hedge funds is beating Warren Buffett, but remember: if everyone trades perfectly, no one beats the average. Gauss is going to be trickier, but it's coming. There's a reason Scott is spending a year at OpenAI to understand "what, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want".

5 comments:

  1. I think computational complexity and theory at large should be thinking about the implications of ML, though I am not sure the question of the implications of P=NP is the central one. You are right that "automating creativity" doesn't have the same ring to it, but there are many others. I think several of the applications in my chapter https://introtcs.org/public/lec_14_PvsNP.html are still relevant. Also, while beating mere humans might not be a big deal, imagine the ability to find a small model that is as good at text completion as GPT-3 (or GPT-4, 5), or proving that no such model exists...

    So I think the P vs NP question is here to stay, but it may well be that many other parts of the field will change. It is a bit like what the advent of computers did to math. Some questions that people were interested in (solving various specific equations) became trivial, and many new questions became important. Whole new subfields were created, in particular our own.

    Boaz Barak

  2. Dall-E must have envisioned Monet visiting Dagstuhl after some really awful flooding!

  3. Given how little we can currently control the results of ML, getting ML to do only what we want it to do, or getting ML to provide explanations for its decisions, feels a bit like getting ML to find proofs about its own behavior. In TCS/math we typically think of coming up with proofs as a creative process, and that feels like a lot of what's behind Scott's "automating creativity" phrase. So far, this is the kind of thing that seems outside the wheelhouse of ML methods.

    (BTW: systems for taking images and automatically rendering them in some artistic style showed up at SIGGRAPH in the 1990s. They didn't have the connection to text, of course.)

  4. I don't hear any of the current AI blokes (other than the critics) saying "humans build models of the world in their heads and do causal reasoning on those models. We need to figure out how to do that." Quite the contrary, the game in current AI is to _avoid_ doing that work, and simply pray that statistical computation will somehow magically create things that act as though they were doing that.

    In particular, humans really do build really good models of the world and do seriously amazing causal reasoning on those models. Another way of putting it is to point out that the singularity _already happened_ and it is us.

    But that's something no one wants to hear. Go figure. I still find human intelligence amazing, and GPT-3 and the like fundamentally boring (since it isn't even trying to actually reason, it's trying to pretend to reason). YMMV, but the field looks completely committed to being at the intellectual level of parlor tricks (i.e., appearing to do X without actually doing X; this really is high-tech parlor trickery).

  5. I don't think Scott's quote makes sense. How many people can actually recognize that Warren Buffett has a good investment strategy? Of course, "buy companies based on intrinsic value" sounds good, but understanding that Coca-Cola, for example, is a good investment is much more difficult. And there aren't that many people presenting investment strategies, so if one could actually recognize which strategies were good, one would be set. Similarly, how many people can actually follow a step-by-step math argument? You should know from teaching that students have trouble understanding proofs of theorems even when they are taught them.
