Thursday, August 01, 2013

Why is Multiplication Hard?

Quick. What is 879544 * 528045? Unless you use a calculator or are some sort of savant, it would take you a couple of minutes to figure out the answer. Of course a computer can calculate this very quickly.

But what a computer can't do easily is learn how to multiply. If we feed triples of numbers, (879544,528045,464438811480), (541535,711245,385164061075), (230589,481621,111056504769), ..., into any machine learning algorithm, it's doubtful the algorithm could take a new pair (666750,313009) and produce its product 208698750750. For if it could, then we should be able to use a similar algorithm to figure out how to factor numbers, which we believe is a computationally difficult task.
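
As a rough sketch of the experiment this paragraph imagines (NumPy assumed; the model choice and data ranges are illustrative, not something the post prescribes), fit a simple regression model to (a, b, a*b) triples and see what it predicts for a fresh pair. A purely linear model cannot even represent a product, and the miss is large:

    import numpy as np

    rng = np.random.default_rng(0)

    # Training triples (a, b, a*b) with six-digit factors, as in the post.
    a = rng.integers(100_000, 1_000_000, size=1000)
    b = rng.integers(100_000, 1_000_000, size=1000)
    X = np.column_stack([a, b, np.ones_like(a)]).astype(float)
    y = (a * b).astype(float)

    # Best linear fit y ~ w1*a + w2*b + w0.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Ask it about the fresh pair from the post.
    predicted = np.array([666_750.0, 313_009.0, 1.0]) @ w
    actual = 666_750 * 313_009
    print(f"{predicted:.0f} vs {actual}")  # off by tens of billions here

More flexible models can fit the training data better, but producing exact products for unseen six-digit pairs is the hard part this paragraph is pointing at.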

Look at what machine learning seems to do moderately well: spam detection, face recognition, language translation, voice-to-text and self-driving cars. These are all things that humans, with a reasonable amount of training, can do very well.

Is this some philosophical argument that our brain works like machine learning algorithms? Think of it more as an observation.

21 comments:

  1. I don't get it. For most ML algorithms, you give a training set that separates input variables from the output variable.

    We don't ask spam filters to compose emails for us. We ask them to tell us if an email is spam or not.

    If we train a good ML algorithm to multiply, it could conceivably do that. (Even very simple ML algorithms can learn to add two numbers.) Why would we expect it to be able to factor?

    1. Actually, you can produce emails with the same ML techniques (Bayes nets) that are used for filtering spam. Those emails will look to a Bayes filter as if they were not spam, but they will still not make sense.

  2. "For if it could, then we should be able to use a similar algorithm to figure out how to factor numbers"

    I don't understand - why should that be the case?

  3. I don't understand what point you are trying to make at all. I agree with those above me that your assertion that "if computers could learn to multiply, then they could learn to factor" needs an argument.

    I'm also not sure what that has to do with your next assertion, that machine learning algorithms can only do things that people can also do. This second assertion doesn't seem controversial to me, nor does it seem to limit the promise of machine learning -- getting computers to be good at the same things people are good at is exactly the goal of artificial intelligence.

    Could you clarify what you mean to say?

  4. @Anonymous #2

    He doesn't seem to be saying that ML isn't worth anything because it can only get good at things people can get good at. It's more that he's wondering if there is a mathematical reason why computers and humans are bad at the same things.

  5. Lance asks  "Is this some philosophical argument that our brain works like machine learning algorithms?"
    -----------------
    Modern neuroscience teaches that memory functions not as a fixed record of objective facts, but rather as a mutable learning structure. E.g. Kroes and Fernández assert in Dynamic neural systems enable adaptive, flexible memories:
    -----------------
    "Almost all studies on memory formation have implicitly put forward a rather static view on memory. However, memories are not stable but sensitive to changes over time. ... [such that] abstract knowledge can come to guide behavior in novel situations that only share partial overlap with episodic experiences that have given rise to the formation of abstract knowledge [and] abstract knowledge can function as pre-existing schemas to the encoding of novel memories [and in consequence] apparent memory alterations and distortions are adaptive."
    -----------------

    Summary  Any scientific article that includes a discussion of the cognitive implications of Bill Murray's film Groundhog Day is bound to be interesting!

    Query  Does "true AI" await the development of machines that sleep, dream, and confabulate?

  6. "Unless you ... was some sort of savant you it would take you" -- fix grammar

    The last two digits of 230589 * 481621 are flipped

  7. The graphs of these problems are close to each other and are in TC0. But I don't understand how learning the graph of factoring would help with factoring. The right problem to learn is the existence of a nontrivial factor below a given value. That would help with factoring. But then it is not clear why learning this problem should be easy if learning multiplication is easy.

    There are spelling errors in the post.

  8. In ML the goal is to learn an efficient algorithm (it is not modelled like inductive inference, where usually any recursive function is acceptable; in addition, the learning algorithm also has to be efficient). Suppose it is a subset of P. Now if factoring is hard it cannot be learned by ML. However, it seems perfectly likely that the learned algorithm can outperform humans, and even if a human could simulate any polynomial algorithm in polynomial time, running it on a computer can of course be much faster, so that it really makes a difference in practice (consider e.g. reactive systems with a limited response time).

  9. I believe a computer can learn how to multiply if you program it to look at the data the way humans do. That is, a human looking at these triples of numbers wouldn't just "train a neural net" on them, he would apply a finite list of techniques he's familiar with. Even if these techniques, for the sake of argument, don't include multiplication explicitly, they really should include taking logarithms because so many important dependencies linearize after that, and discovering and training a linear dependency is easy. Interestingly, this dependency also linearizes. :) And computers would still be better than humans in taking logs of large numbers and discovering the linear dependency after that, of course.

    Factoring fails this test -- there is no simple technique, at least we don't know one.
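
    A minimal sketch of the log trick described above (NumPy assumed; the data ranges are just for illustration): after taking logarithms the relationship log(a*b) = log(a) + log(b) is exactly linear, so ordinary least squares recovers it from a handful of triples and extrapolates to new pairs.

        import numpy as np

        rng = np.random.default_rng(1)
        a = rng.integers(100_000, 1_000_000, size=200).astype(float)
        b = rng.integers(100_000, 1_000_000, size=200).astype(float)

        # In log space the target is exactly linear: log(a*b) = log(a) + log(b).
        X = np.column_stack([np.log(a), np.log(b), np.ones_like(a)])
        w, *_ = np.linalg.lstsq(X, np.log(a * b), rcond=None)
        print(np.round(w, 6))  # ~ [1, 1, 0]

        # The learned rule recovers the product of a fresh pair (up to float rounding).
        pred = np.exp(np.array([np.log(666_750.0), np.log(313_009.0), 1.0]) @ w)
        print(int(round(pred)), 666_750 * 313_009)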

  10. As long as you express numbers as their prime factors, multiplication and factorization are both extremely easy. [2,3,3,3,5] x [7,7,7] = [2,3,3,3,5,7,7,7]

    But now addition is very hard.
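
    A small sketch of this representation (plain Python; the helper names are made up for illustration): a number is a sorted list of its prime factors, multiplication is just list concatenation, and addition forces you back to integers and then to re-factoring the sum, which is exactly the expensive step.

        from math import prod

        def multiply(xs, ys):
            # Multiplication is trivial: merge the two factor lists.
            return sorted(xs + ys)

        def add(xs, ys):
            # No local rule for addition: convert back, add, re-factor (trial division).
            n = prod(xs) + prod(ys)
            factors, d = [], 2
            while d * d <= n:
                while n % d == 0:
                    factors.append(d)
                    n //= d
                d += 1
            return factors + ([n] if n > 1 else [])

        print(multiply([2, 3, 3, 3, 5], [7, 7, 7]))  # [2, 3, 3, 3, 5, 7, 7, 7]
        print(add([2, 3], [5]))                       # 6 + 5 = 11 -> [11]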

  11. I'll echo the others: why should factoring be learnable if multiplication is?

    Besides, I'm not convinced that multiplication is not learnable. Represent things as bits, and break the problem down into a series of problems that involve learning the ith bit of the output. Learning to predict the low-order bit of the output should be trivial. Moreover, I would expect the same to hold for arbitrary bits of the output (at least in the PAC sense) given enough samples.
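
    A minimal sketch of the easiest case of this bit-by-bit view (NumPy and scikit-learn assumed; the ranges are illustrative): the low-order bit of a*b depends only on the low-order bits of a and b (it is their AND), so a small classifier learns it perfectly from examples. More generally, bit i of the product depends only on the i+1 lowest bits of each factor, though the higher bits are much more complex functions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(2)
        a = rng.integers(1, 1_000_000, size=5000)
        b = rng.integers(1, 1_000_000, size=5000)

        # Features: low-order bits of a and b.  Label: low-order bit of a*b.
        X = np.column_stack([a & 1, b & 1])
        y = (a * b) & 1

        clf = DecisionTreeClassifier().fit(X[:4000], y[:4000])
        print(clf.score(X[4000:], y[4000:]))  # 1.0 -- the bit is just (a & 1) & (b & 1)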

  12. Actually, you can learn it: http://jmlr.org/proceedings/papers/v28/livni13.pdf (you can apply the same technique for regression)

  13. Lance observes  "When you look at what machine-learning seems to do moderately well […] these are things that humans with a reasonable amount of training, can do very well. […] Is this some philosophical argument that our brain works like machine-learning algorithms?"
    ------------------------
    Perhaps a more accurate phrase than "philosophical argument" would be "heuristic evidence", in light of Ed Wilson's thesis (which gravely angered many academic philosophers):
    ------------------------
    Wilson's Thesis  "Much of the history of philosophy can be fairly said to consist of failed models of the brain"
    ------------------------
    Is it conversely true, that transformational advances in philosophy (or mathematics? or computer science?) commonly arise from new models of cognition?

    Not very many philosophers (or mathematicians, or computer scientists) would embrace that thesis! And yet it is remarkably easy to conceive arguments in favor of it. E.g., modern category theory and algebraic geometry (etc.) can be appreciated as arising from a cognitive reconsolidation that was initiated in the 1950s by Eilenberg, Mac Lane, Grothendieck, Serre, etc.

    Perhaps one broad theme of Lance's post (as I read it) is that 21st century computer science is no different. That would be fun! (and good news for young researchers too) :)

  14. Wow, I admire John's long ... long-winded responses. While they are detailed, it would be nice if he could summarize his points more succinctly. It's really amazing how long on average these responses are.

  15. Yes, machine learning is computers gradually learning to do things people can do, in particular things that involve pattern recognition and categorization.

    Remember, machine learning used to be called "artificial intelligence".

  16. Burton requests: "It would be nice if [JAS] could summarize his points more succinctly"
    ----------------
    Thanks! That's fun!

    Proposition  Naturality & concision: choose one.

    Corollary 1  Connectomes can't be simple.

    Corollary 2  Top-down AI must fail.

  17. Does genetic programming (or "more generally", symbolic regression) not count as an ML technique?

  18. Dear Professor,

    I would like to share a comment on the search for one-way functions. I hope you like it!

    https://plus.google.com/112389322263940335368/posts/cz5yba1obXk

  19. I was wrong about machine learning multiplication.

    1. Lance, how do we know Google isn't using a built-in multiplication function as part of their AI implementation for spreadsheets -- "cheating"? For instance, an AI that can select from known operators such as addition, multiplication, division, etc. would almost be expected of an algorithm intended for practical use on spreadsheets.

      What do you think Lance?
