Comments on Computational Complexity: Why is Multiplication Hard? -- Lance Fortnow

<b>Austin Lee</b> (2017-04-15):
Lance, how do we know Google isn't using a multiplication function as part of its AI implementation for spreadsheets -- "cheating"? For instance, an AI that can select from known operators such as addition, multiplication, and division is almost what you would expect from an AI algorithm intended for practical use on spreadsheets.

What do you think, Lance?

<b>Lance Fortnow</b> (2014-11-01):
I <a href="https://docs.google.com/spreadsheets/d/1DaTr1HDLYTbXrjB1l8nbkq_ZWoHI6pM-QpmuA4hOLnE/edit?usp=sharing" rel="nofollow">was wrong</a> about machine learning multiplication.
<b>Frank Vega</b> (2014-02-14):
Dear Professor,

I would like to share a comment on the search for one-way functions. I hope you like it!

https://plus.google.com/112389322263940335368/posts/cz5yba1obXk

<b>Anonymous</b> (2013-08-07):
Does genetic programming (or, more generally, symbolic regression) not count as an ML technique?

<b>John Sidles</b> (2013-08-05):
<b>Burton</b> requests: "It would be nice if [JAS] could summarize his points more succinctly."

Thanks! That's fun!

<b>Proposition</b> Naturality & concision: <b><a href="http://math.stackexchange.com/a/24447" rel="nofollow">choose one</a>.</b>

<b>Corollary 1</b> Connectomes <b><a href="http://www.ncbi.nlm.nih.gov/pubmed/23874168" rel="nofollow">can't be simple</a></b>.

<b>Corollary 2</b> Top-down AI must fail.

<b>Lex Spoon</b> (2013-08-05):
Yes, machine learning is computers gradually learning to do things people can do, in particular things that involve pattern recognition and categorization.

Remember, machine learning used to be called "artificial intelligence."
<b>Burton</b> (2013-08-04):
Wow, I admire John's long ... long-winded responses. While they are detailed, it would be nice if he could summarize his points more succinctly. It's really amazing how long these responses are on average.

<b>John Sidles</b> (2013-08-04):
<b>Lance</b> observes: "When you look at what machine-learning seems to do moderately well […] these are things that humans with a reasonable amount of training, can do very well. […] Is this some philosophical argument that our brain works like machine-learning algorithms?"

Perhaps a more accurate phrase than "philosophical argument" would be "heuristic evidence," in light of Ed Wilson's thesis (which gravely angered many academic philosophers):

<a href="http://www.naturalism.org/OffSite_Stored_Pages/WQ-WILSON.htm" rel="nofollow"><b>Wilson's Thesis</b></a>: "Much of the history of philosophy can be fairly said to consist of failed models of the brain."

Is it conversely true that transformational advances in philosophy (or mathematics, or computer science) commonly arise from new models of cognition?

Not very many philosophers (or mathematicians, or computer scientists) would embrace <i>that</i> thesis! And yet it is remarkably easy to conceive arguments in its favor. E.g., modern category theory and algebraic geometry (etc.) can be appreciated as arising from a cognitive reconsolidation that was initiated in the 1950s by Eilenberg, Mac Lane, Grothendieck, Serre, etc.

Perhaps one broad theme of Lance's post (as I read it) is that 21st-century computer science is no different. That would be fun!
(and good news for young researchers too!) :)

<b>Anonymous</b> (2013-08-04):
Actually, you can learn it: http://jmlr.org/proceedings/papers/v28/livni13.pdf (you can apply the same technique to regression).

<b>Martin Thoma</b> (2013-08-02):
Actually, you can produce emails with the ML techniques (Bayes nets) that are used for filtering spam. Those emails will look, to a Bayes filter, as if they were not spam, but they will still not make sense.

<b>Anonymous</b> (2013-08-02):
I'll echo the others: why should factoring be learnable if multiplication is?

Besides, I'm not convinced that multiplication is not learnable. Represent things as bits, and break the problem down into a series of problems, each learning the ith bit of the output. I would expect learning to predict the low-order bit of the output to be trivial. Moreover, I would expect the same to hold for arbitrary bits of the output (at least in the PAC sense) given enough samples.

<b>Paul</b> (2013-08-02):
As long as you express numbers as their prime factors, multiplication and factorization are both extremely easy.
[2, 3, 3, 3, 3, 3] x [7, 7, 7] = [2, 3, 3, 3, 3, 3, 7, 7, 7]

But now addition is very hard.

<b>Sergey Nikolenko</b> (2013-08-02):
I believe a computer can learn how to multiply if you program it to look at the data the way humans do. That is, a human looking at these triples of numbers wouldn't just "train a neural net" on them; he would apply a finite list of techniques he's familiar with. Even if those techniques, for the sake of argument, don't include multiplication explicitly, they really should include taking logarithms, because so many important dependencies linearize after that, and discovering and training a linear dependency is easy. Interestingly, this dependency also linearizes. :) And computers would still be better than humans at taking logs of large numbers and then discovering the linear dependency, of course.

Factoring fails this test -- there is no simple technique, or at least we don't know one.

<b>Anonymous</b> (2013-08-02):
In ML the goal is to learn an efficient algorithm (it is not modelled like inductive inference, where usually any recursive function is acceptable; in addition, the learning algorithm itself also has to be efficient). Suppose the class of learnable algorithms is a subset of P. Now if factoring is hard, it cannot be learned by ML. However, it seems perfectly likely that a learned algorithm can outperform humans, and even if a human could simulate any polynomial algorithm in polynomial time, running it on a computer can of course be much faster, so that it really makes a difference in practice (consider, e.g.,
reactive systems with a limited response time).

<b>Anonymous</b> (2013-08-02):
The graphs of these problems are close and in TC0. But I don't understand how learning the graph of factoring would help with factoring. The right problem to learn is the existence of a nontrivial factor below a given value. That would help with factoring. But then it is not clear why learning this problem should be easy if learning multiplication is easy.

There are spelling errors in the post.

<b>Anonymous</b> (2013-08-01):
"Unless you ... was some sort of savant you it would take you" -- fix grammar.

Also, the last two digits of 230589 * 481621 are flipped.

<b>John Sidles</b> (2013-08-01):
<b>Lance</b> asks: "Is this some philosophical argument that our brain works like machine learning algorithms?"

Modern neuroscience teaches that memory functions not as a fixed record of objective facts, but rather as a mutable learning structure. E.g., Kroes and Fernández assert in <b><i><a href="http://www.ncbi.nlm.nih.gov/pubmed/22874579" rel="nofollow">Dynamic neural systems enable adaptive, flexible memories</a></i></b>:

"Almost all studies on memory formation have implicitly put forward a rather static view on memory. However, memories are not stable but sensitive to changes over time. ...
[such that] abstract knowledge can come to guide behavior in novel situations that only share partial overlap with episodic experiences that have given rise to the formation of abstract knowledge [and] abstract knowledge can function as pre-existing schemas to the encoding of novel memories [and in consequence] apparent memory alterations and distortions are adaptive."

<b>Summary</b> Any scientific article that includes a discussion of the cognitive implications of Bill Murray's film <i>Groundhog Day</i> is bound to be interesting!

<b>Query</b> Does "true AI" await the development of machines that sleep, dream, and confabulate?
<b>Anonymous</b> (2013-08-01):
@Anonymous #2

He doesn't seem to be saying that ML isn't worth anything because it can only get good at things people can get good at. It's more that he's wondering whether there is a mathematical reason why computers and humans are bad at the same things.

<b>Anonymous</b> (2013-08-01):
I don't understand what point you are trying to make at all. I agree with those above me that your assertion that "if computers could learn to multiply, then they could learn to factor" needs an argument.

I'm also not sure what that has to do with your next assertion, that machine learning algorithms can only do things that people can also do. This second assertion doesn't seem controversial to me, nor does it seem to limit the promise of machine learning -- getting computers to be good at the same things people are good at is exactly the goal of artificial intelligence.

Could you clarify what you mean to say?

<b>Anonymous</b> (2013-08-01):
"For if it could, then we should be able to use a similar algorithm to figure out how to factor numbers."

I don't understand -- why should that be the case?

<b>Pat Morin</b> (2013-08-01):
I don't get it. For most ML algorithms, you give a training set that separates input variables from the output variable.

We don't ask spam filters to compose emails for us.
We ask them to tell us if an email is spam or not.

If we trained a good ML algorithm to multiply, it could conceivably do that. (Even very simple ML algorithms can learn to add two numbers.) Why would we expect it to be able to factor?
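Paul's suggestion above (represent a number as the multiset of its prime factors) can be sketched in a few lines. This is only an illustration of the idea, not anyone's actual implementation; all function names are mine, and the factorizer is naive trial division:

```python
# Sketch of the prime-factor-multiset representation: multiplication is
# multiset union, while addition forces a conversion back to integers.

def to_factors(n):
    """Factor n by trial division (fine for small n); returns a sorted list."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return sorted(factors)

def multiply(f, g):
    """Multiplication in this representation: concatenate and sort."""
    return sorted(f + g)

def to_int(f):
    out = 1
    for p in f:
        out *= p
    return out

def add(f, g):
    """Addition needs the integer form back -- this is the hard direction."""
    return to_factors(to_int(f) + to_int(g))

print(multiply(to_factors(486), to_factors(343)))  # [2, 3, 3, 3, 3, 3, 7, 7, 7]
print(to_int(add(to_factors(6), to_factors(10))))  # 16
```

Multiplication never touches the integers themselves, while addition routes through `to_int` and a fresh factorization, which is exactly where the hardness reappears.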
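Sergey Nikolenko's observation that multiplication linearizes under logarithms can be checked directly: fit log(c) as a linear function of log(a) and log(b) over random (a, b, a*b) triples, and the learned weights come out as 1 and 1. This is a minimal standard-library sketch under my own assumptions (least squares via 2x2 normal equations), not a claim about any particular ML system:

```python
# Learn multiplication from (a, b, a*b) triples after taking logs:
# log(a*b) = 1*log(a) + 1*log(b), so least squares recovers weights (1, 1).
import math
import random

random.seed(0)
pairs = [(random.randint(1, 10**6), random.randint(1, 10**6)) for _ in range(1000)]
X = [(math.log(a), math.log(b)) for a, b in pairs]
y = [math.log(a * b) for a, b in pairs]

# Solve the 2x2 normal equations for y ~ w1*x1 + w2*x2 (no intercept needed).
s11 = sum(x1 * x1 for x1, _ in X)
s12 = sum(x1 * x2 for x1, x2 in X)
s22 = sum(x2 * x2 for _, x2 in X)
t1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
t2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
det = s11 * s22 - s12 * s12
w1 = (s22 * t1 - s12 * t2) / det
w2 = (s11 * t2 - s12 * t1) / det

print(round(w1, 3), round(w2, 3))  # 1.0 1.0

# The fitted model multiplies unseen numbers via exp(w1*log(a) + w2*log(b)).
a, b = 12345, 67890
pred = math.exp(w1 * math.log(a) + w2 * math.log(b))
print(abs(pred - a * b) / (a * b) < 1e-6)  # True
```

The same data without the log transform would be a genuinely nonlinear regression problem, which is the commenter's point: the "learning" is easy once a human-chosen preprocessing step linearizes it.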