Monday, June 24, 2013

Quantum Techniques/Gen Functions- don't be afraid of new techniques

Ronald de Wolf gave a GREAT talk at CCC on applying Quantum techniques to Classical problems. He made an analogy to using the Prob Method to prove non-probabilistic results. This reminded me of the following false counterarguments I've heard about new techniques:

  1. The Prob Method: Isn't that just counting?
  2. Kolmogorov complexity: Isn't that just the Prob Method?
  3. Information complexity: Isn't that just Kolmogorov complexity?
  4. Counting: Isn't that just Information complexity?
In all of the cases above the complaint is idiotic. While one COULD translate some (all?) proofs that use the Prob Method into counting arguments, it is easier to think in terms of probability; the translation would be harder than just getting used to thinking probabilistically. By coincidence I spent some of my time at CCC looking for simple examples of generating functions where it would be difficult to do the proof any other way. I found one and liked it so much that I did a writeup FOR YOU MY READERS! I suspect that it COULD be done using just algebra (or something), but you wouldn't want to. Here is the theorem and a link to my writeup:
(Schur's Theorem) Let a_1, a_2, ..., a_L be denominations of coins such that no number ≥ 2 divides all of them. Then, for large n, the number of ways to make change of n cents is
n^{L-1} / ((L-1)! a_1 a_2 ... a_L) + O(n^{L-2})
For the full proof see here. My writeup is based on the one in Wilf's book generatingfunctionology (the title page really does use a small g for the first letter).
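In case you want the shape of the argument before clicking through, here is a sketch of the key step as I understand Wilf's approach (the writeup has the actual details):

```latex
% Sketch of the key step (my paraphrase of Wilf's approach).
% The number of ways to make change for n cents is the coefficient of x^n in
F(x) \;=\; \prod_{i=1}^{L} \frac{1}{1 - x^{a_i}}.
% Since no d >= 2 divides every a_i, x = 1 is the unique pole of order L:
% a root of unity \zeta \neq 1 with \zeta^{a_i} = 1 for all i would force
% \operatorname{ord}(\zeta) \mid \gcd(a_1, \dots, a_L) = 1. Partial fractions give
F(x) \;=\; \frac{1}{a_1 a_2 \cdots a_L} \cdot \frac{1}{(1 - x)^{L}}
      \;+\; \bigl(\text{terms whose poles have order} < L\bigr),
% and extracting the coefficient of x^n,
[x^n]\, \frac{1}{(1 - x)^{L}} \;=\; \binom{n + L - 1}{L - 1}
  \;=\; \frac{n^{L-1}}{(L-1)!} + O(n^{L-2}),
% while the lower-order poles contribute only O(n^{L-2}).
```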


The above was my INTENDED POST. However, when I showed the gen-function proof of Schur's theorem to some students, one of them, Sam (a HS student), came back the next day with a purely combinatorial proof. It was not completely rigorous, but I am sure that he, and most of my readers, could make it so without too much effort. While having two proofs (gen-function and combinatorial) is MORE enlightening for me and for my readers, it does dampen my point that this is a theorem for which the gen-function proof is easier. I COULD argue that the gen-function proof did not require as much cleverness, or that once you put in the rigor the combinatorial proof is harder, but I don't really have confidence in those arguments. I include the combinatorial proof in the writeup pointed to above. Which proof is better? A matter of taste. However, I hope you enjoy both of them!
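If you want to sanity-check the theorem numerically, here is a minimal Python sketch (the denominations 3, 5, 7 are an arbitrary choice with no common divisor, not anything from the writeup). The dynamic program is exactly coefficient extraction from the product of the 1/(1-x^{a_i}):

```python
# Count the ways to make change for n cents (order ignored) by dynamic
# programming -- equivalently, extract the coefficient of x^n in
# prod_i 1/(1 - x^{a_i}) -- and compare with the leading term of
# Schur's formula, n^{L-1} / ((L-1)! * a_1 ... a_L).
from math import factorial

def ways(n, denoms):
    """Number of multisets of coin denominations summing to n."""
    w = [0] * (n + 1)
    w[0] = 1
    for a in denoms:                 # one denomination at a time
        for m in range(a, n + 1):
            w[m] += w[m - a]
    return w[n]

denoms = [3, 5, 7]                   # arbitrary, gcd 1
L = len(denoms)
prod = 1
for a in denoms:
    prod *= a
for n in (100, 1000, 10000):
    exact = ways(n, denoms)
    approx = n ** (L - 1) / (factorial(L - 1) * prod)
    print(n, exact, round(approx, 1), round(exact / approx, 4))
```

The ratio in the last column creeps toward 1, which is all the theorem promises.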

9 comments:

  1. I think this is the usual conflict between relatively mechanical proofs that are easier to derive (generating functions) and "magic" proofs that are easier to understand (combinatorial). The former are better for authors; the latter are better for readers.

    And readers are more important than authors.

  2. @Jeff Erickson: even though a proof can be argued to be better when the reader's job is easier, I also have the feeling that "magic" proofs should not always be preferred. Having them is nice, but for the sake of research a "mechanical" one is often very useful to the reader as well: it gives them tools and ideas for deriving other results.
    While an ad hoc, very clever, seemingly magic proof is beautiful and very nice, it too often leaves the reader with the impression not only that taking the same approach for another problem is impossible, but that you need to be a god-like genius to prove anything at all.

  3. Clement: Agreed. I feel "reader" should be changed to "reviewer" in Jeff's comment :)

  4. Here is perhaps a counterexample. For some time it has been possible to perform "non-standard" real analysis using genuine infinitesimals. This is certainly more intuitive than standard analysis. But, for the most part, people have been content to work with standard analysis and translate all their infinitesimal intuition into the "standard way." Dealing with the new apparatus is just too cumbersome.

  5. I've frequently complained to Bill that most proofs that used Kolmogorov complexity could be rephrased as probabilistic arguments.

    Does it make a difference? To me it does, because I find probabilistic arguments easier to understand and more intuitive. But that is subjective, I guess. Slightly more objective arguments:
    (1) I find that the probabilistic argument tends to be shorter than the Kolm. proof. (In particular, proofs using Kolm. complexity sometimes have to set up a lot of notation/definitions first.)
    (2) I think one can assume the reader has the needed probability background, but not necessarily the Kolm. background.
    (3) Often, I find that it's not just one technique vs. another, but it ends up being *the same proof*, just that phrasing it in terms of Kolm. complexity obscures (at least to me) what is really going on. I wonder if sometimes people using Kolm. complexity aren't doing it to make their proof look more difficult than it really is.

    I don't claim the above always hold, but I do claim they hold most of the time Kolm. complexity is used.

  6. "I've frequently complained to Bill that most proofs that used Kolmogorov complexity could be rephrased as probabilistic arguments."

    ...and most proofs using integral calculus can be rephrased using manual limits of Riemann sums. Down with calculus!

    "I wonder if sometimes people using Kolm. complexity aren't doing it to make their proof look more difficult than it really is."

    My experience with K.C. is the complete opposite. Often a convoluted counting/probabilistic argument that makes the result look impressive devolves into a one-liner: "the trace of the computation can be used to reconstruct the input, and hence on a Kolmogorov-random input the time taken is Omega(n)". No setup, no notation, no definitions.

    If the techniques above were merely restatements of each other, no new problems would have been solved by them. Yet we know that the exact opposite is the case. The new shorthands introduced by each of the methods Bill lists above allowed us to solve important open problems.

  7. I wonder what you think about Euler's recurrence for the partition numbers p(n) (= the number of ways to write n as a sum of nonincreasing positive integers). There is a purely algebraic generating-function proof; there is a very beautiful bijective proof by Franklin of a key lemma, the Pentagonal Number Theorem, which gives the recurrence via a very simple generating-function manipulation; and there is an explicit bijection using partition rank and Dyson maps. I think the mix of Franklin's bijection and the little bit of generating functions is the best proof: the entirely bijective proof maybe gives the best insight but is a bit complicated, while Franklin's bijection gives a very clear idea of why the Pentagonal Number Theorem is true. Here is a nice survey of results about integer partitions: http://www.math.ucla.edu/~pak/papers/psurvey.pdf
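    For readers who want to play with p(n): Euler's recurrence says p(n) = sum over k >= 1 of (-1)^(k-1) * ( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ), with p(0) = 1 and p(m) = 0 for m < 0. Here is a minimal Python sketch of it (a quick sanity-check implementation, not anything from the survey):

    ```python
    # Euler's pentagonal-number recurrence for the partition numbers p(n):
    #   p(n) = sum_{k>=1} (-1)^(k-1) * ( p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ),
    # with p(0) = 1 and p(m) = 0 for m < 0.
    def partitions(N):
        p = [0] * (N + 1)
        p[0] = 1
        for n in range(1, N + 1):
            total, k = 0, 1
            while k * (3 * k - 1) // 2 <= n:
                sign = 1 if k % 2 == 1 else -1
                total += sign * p[n - k * (3 * k - 1) // 2]
                if k * (3 * k + 1) // 2 <= n:   # second pentagonal number for this k
                    total += sign * p[n - k * (3 * k + 1) // 2]
                k += 1
            p[n] = total
        return p

    print(partitions(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
    ```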

    Replies
    1. My first thought: I looked at the survey. Alas, so many papers to read, so little time.

      My second thought: You raise the interesting question of
      `what are the criteria for the best proof?', which my post also raised.
      Uses the most elementary techniques?
      Most insightful?
      Shortest (not my criterion, but others use it)?
      Does not require cleverness?
      Is constructive (informally)?

      While `most insightful' seems like the obvious answer, it may not be if the proof runs long before you get to the insight.

  8. The problem with Sam's proof is that it doesn't use the necessary condition involving the greatest common divisor, nor even indicate why it comes up. It can, however, easily be extended to rigorously show that the *average* number of solutions for nearby n is as stated in the theorem.

    I graded a course last fall where this problem was assigned. The students came up with many inventive proofs, including the "intended" generating function proof. My favorite used the observation that the number of ways of making change for n and for n+a is "of the same order" if a is one of the coin denominations. (The cleanest proof seems to be by induction on the number of denominations so that you can justify claims about orders of growth.) By using Bezout's identity, it follows that the number of ways of making change for n and n+1 is "of the same order". (This solves the main deficiency in Sam's proof.)

    The proof may be finished in any number of ways, now that we know the number of solutions doesn't depend on any "local" properties of n. For example, we may compute the total number of ways of making change for all m up to n, whether combinatorially or as the volume of a certain simplex in L-dimensional space. Differentiating this function, whether literally or using a combinatorial averaging argument, finishes the proof.
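    To see that last step numerically, here is a minimal Python sketch under the same setup (reusing the arbitrary denominations 3, 5, 7 with gcd 1 from the earlier sketch): it compares the cumulative count W(n) = w(0) + ... + w(n) against the volume n^L/(L! a_1...a_L) of the simplex { x >= 0 : a_1 x_1 + ... + a_L x_L <= n }:

    ```python
    # Compare W(n) = sum_{m<=n} w(m), the number of lattice points in the
    # simplex { x >= 0 : a_1 x_1 + ... + a_L x_L <= n }, with its volume
    # n^L / (L! * a_1 ... a_L). Differentiating this leading term recovers
    # Schur's n^{L-1} / ((L-1)! * a_1 ... a_L).
    from math import factorial

    def ways_table(N, denoms):
        """w[m] = number of ways to make change for m cents."""
        w = [0] * (N + 1)
        w[0] = 1
        for a in denoms:
            for m in range(a, N + 1):
                w[m] += w[m - a]
        return w

    denoms = [3, 5, 7]        # arbitrary denominations with gcd 1
    L = len(denoms)
    prod = 1
    for a in denoms:
        prod *= a
    N = 10000
    W = sum(ways_table(N, denoms))         # cumulative count up to N
    vol = N ** L / (factorial(L) * prod)   # simplex volume
    print(W, round(vol), W / vol)          # ratio tends to 1 as N grows
    ```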
