Comments on Computational Complexity: "A Complexity View of Machine Learning?" (blog by Lance Fortnow)

Rahul Mehta (2021-11-10 22:41):
Have you taken a look at any of the work on the relative expressive efficiency of shallow vs. deep ConvNets (e.g., https://arxiv.org/abs/1705.02302)? Something about your post reminded me of those results, but I'm not sure about the broader structural insights here.

DJL (2021-11-07 08:20):
EG: "the same thing can be said about how to use ML algorithms to perform symbolic integration."

So one question for complexity theory would be to define the class of problems for which ML makes no sense. As an off-hand first guess, perhaps it is the class of problems for which domain causality is important.

As a jaundiced observer, I'd say something along the lines of "all problems for which you care that the answer is correct"...

Of course, that latter definition would put ML outside the domain of computational complexity.

EG (2021-11-05 21:08):
@Lance:

The analogy here is interesting...

"machine learning fails miserably if you try to use it to break cryptography."

Fundamentally this seems like a flawed approach, unless there's a niche area within the process where ML could be useful.

That said, the same thing can be said about using ML algorithms to perform symbolic integration. Having worked in this area while developing programming languages, I've seen some fanfare about how well some ML algorithms work... Perhaps that is a story to be told another time.

Hsin-Yuan Huang (https://hsinyuan-huang.github.io/, 2021-11-04 14:57):
This is a very interesting discussion! My collaborators at Google Quantum AI and I had the same idea a year ago when we were studying quantum advantage in machine learning. We tried to formalize some of these concepts in https://arxiv.org/abs/2011.01938 by defining a complexity class for problems that can be solved by "algorithms that can learn from data". The definition is probably not the best one, but it let us study some basic implications.

We then explored applications to quantum many-body problems in https://arxiv.org/abs/2106.12627, showing that classical learning algorithms using experimental data obtained from nature can solve some interesting and challenging quantum physics problems.

Anonymous (2021-11-03 20:26):
Re: Anon #1's comment:

ML is not just "nonadaptive" self-reducibility; it needs to be oblivious self-reducibility: the points on which you're given the right label should be independent of the input itself (though they may depend on the input length).

Thus a worst-case-to-average-case reduction wouldn't qualify as "ML", because the points on which the (worst-case) algorithm queries the (average-case) oracle could (and usually do) depend on the input being decided.

Anonymous (2021-11-03 13:02):
A reasonable way to view "ML oracles" is through the lens of non-uniform computation.

We may think of an ML model as "advice" with the additional requirement that it can be computed from (z, L(z)) pairs, where L is the language we care about and the z's are drawn from some sampleable distribution (with additional constraints on |z|, the number of z's, and the time needed to compute the model from the "training set" {(z, L(z))}).

In some sense, notions like autoreducibility, random self-reducibility, etc. already address the question of what it means for a language to be "learnable". We can think of ML as providing a nonadaptive self-reducibility of sorts.
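[Editor's note: the "advice computed from labeled samples" framing in the last comment can be made concrete with a toy sketch. Everything specific below is an illustrative assumption, not from the comment: the language L is MAJORITY, the distribution is uniform over {0,1}^n, and the "model" is a simple learned threshold on the number of 1s.]

```python
# Sketch: an ML model as non-uniform "advice" computed only from
# labeled pairs {(z, L(z))}, per the comment's framing.
# Toy choices (assumptions): L = MAJORITY, uniform samples, threshold learner.
import random

n = 15  # input length; the advice may depend on n, but not on the test input


def L(z):
    # The language we care about: L(z) = 1 iff z has more 1s than 0s.
    return int(z.count("1") > len(z) // 2)


def sample_z():
    # A sampleable distribution over inputs: uniform over {0,1}^n.
    return "".join(random.choice("01") for _ in range(n))


def train(training_set):
    # "Compute the model from the training set": learn a threshold on the
    # number of 1s, placed between the two classes' boundary counts.
    ones = [z.count("1") for z, y in training_set if y == 1]
    zeros = [z.count("1") for z, y in training_set if y == 0]
    t = (min(ones) + max(zeros)) / 2
    return lambda z: int(z.count("1") > t)


random.seed(0)
training_set = [(z, L(z)) for z in (sample_z() for _ in range(500))]
model = train(training_set)  # the non-uniform "advice" for length n

# The model now decides fresh inputs with no further access to L.
tests = [sample_z() for _ in range(200)]
accuracy = sum(model(z) == L(z) for z in tests) / len(tests)
print(accuracy)
```

The point of the sketch is the restriction the comment emphasizes: the advice is not arbitrary, it must be computable from polynomially many labeled samples drawn independently of the input being decided.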