Thursday, August 22, 2013

P = NP and the Weather

In "The Beautiful World," the science fiction chapter of The Golden Ticket where P = NP in a strong way, I predicted that we could predict the weather accurately enough to know whether it will rain about a year into the future. Besides putting Novosibirsk on the wrong side of Moscow, my prediction about weather prediction has drawn the most ire from my readers.

Here was my thinking: weather forecasting comes down to modeling. Find a good model, feed in the current initial conditions, and simulate the model. P = NP can help dramatically here by making what should be the hardest part, finding the right model, easy. P = NP would help create much better models and should lead to far more accurate and longer-range forecasts than before. A year-ahead prediction of the weather didn't seem out of the realm of possibility.
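To make "finding the right model" concrete, here is a minimal toy sketch (mine, not from the book, using a made-up linear model class) of model-finding as guess-and-verify: checking a candidate against past observations is the fast part, while the search over candidates is the NP-style step that P = NP would, in principle, make cheap.

```python
# Toy sketch: model-finding as guess-and-verify.  Verifying a candidate
# model against past observations is the easy, polynomial-time part; the
# exhaustive search below stands in for the hard part that P = NP would,
# in principle, collapse.

from itertools import product

def verify(params, history):
    """Fast check: does the candidate linear model reproduce past data?"""
    a, b = params
    return all(a * x + b == y for x, y in history)

def find_model(history, candidates):
    """Brute-force search over candidate parameters (the NP-style step)."""
    for params in product(candidates, repeat=2):
        if verify(params, history):
            return params
    return None

history = [(0, 1), (1, 3), (2, 5)]             # toy past "observations" (x, y)
print(find_model(history, range(-10, 11)))     # -> (2, 1), i.e. y = 2x + 1
```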

As my readers point out, one cannot put in all of the initial conditions, which would involve too much data even if we could collect it, and small random events, the so-called butterfly effect, can dramatically change the weather over even a short period of time. Dean Foster, a Penn statistician, wrote me a short piece giving an analogy to how a game of pool is, over time, altered by the gravity generated by a single proton.
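To see the butterfly effect rather than take it on faith, here is a minimal sketch (my own illustration, not Foster's pool analogy) that integrates the classic Lorenz-63 system twice with initial conditions differing by one part in a billion; the two trajectories track each other for a while and then diverge completely.

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system
# (sigma = 10, rho = 28, beta = 8/3), integrated with a simple Euler step.
# Two runs start 1e-9 apart; their separation grows roughly exponentially
# until it is as large as the attractor itself.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)            # perturb one coordinate by 1e-9

for step in range(1, 40001):          # 40 model time units at dt = 0.001
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step / 1000:4.0f}   separation ~ {gap:.2e}")
```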

So how far out can you predict the weather if P = NP? A month? Of course we'll probably never find out, since I doubt P and NP are the same. In retrospect I shouldn't have put in such an aggressive weather-forecasting claim, because it detracts from the other great things that would happen if P = NP, such as curing cancer.

10 comments:

  1. The link to Dean Foster's piece appears to be broken: It leads to a page that just says "Dropbox", with no other content. Permissions problem, perhaps?

    ReplyDelete
    Replies
    1. I'm not having trouble even if I log out of Dropbox, but here is a link to the same file on Google Drive.

      Delete
  2. The weather example exposes several more marginal cases:

    Case 1  Future weather can be predicted by a machine that runs in polynomial time (as an oracle correctly assures us), but whose runtime is not provably polynomial. Is weather prediction effectively in P?

    Case 2  It turns out that future weather is random, but sampling the weather distribution is NP-hard. Bob's start-up company has a PTIME weather-simulation algorithm that samples from a distribution that requires exponential resources (in space and/or time) to distinguish from the "true" NP-hard weather distribution. Is weather simulation effectively in P?

    Case 2B  Bob's start-up company has a PTIME quantum-simulation algorithm that (as with the weather-simulation algorithm) produces simulated measurements that are sampled from a distribution that requires exponential resources (in space and/or time) to distinguish from the "true" distribution of measurements. Is quantum simulation effectively in P?

    Case 3  A proof is found that P vs NP is undecidable in homotopy type theory (HoTT), but the proof does not extend to a stronger axiom system (ZFC, for example). Should the Clay Institute rescind its P vs NP Millennium Prize, on the grounds that the problem is uninteresting? More broadly, how strong does an axiom system have to be for a proof of P vs NP undecidability in that system to cause us to conclude that P vs NP is "really" undecidable, such that we abandon the search for a "meaningful" proof?

    The point is that these marginal questions can be settled only by (1) strong proofs and (2) refined definitions and (3) (possibly) adjusted axioms.

    Summary  P vs NP is a triple-threat problem, with plenty of real-world implications.

    ReplyDelete
    Replies
    1. As a followup to Case 2B, a recent preprint "Boson-Sampling in the light of sample complexity" (arXiv:1306.3995), by Christian Gogolin, Martin Kliesch, Leandro Aolita, and Jens Eisert, discusses John Preskill's notion of "quantum supremacy" in light of the same class of questions that Lance's weather example raises.

      The overall point (perhaps) is that 20th century researchers envisioned a utopian future of quantum supremacy in which quantum computers solved problems that provably were not in P. Now, in the 21st century, we are coming to concretely appreciate the reasons why a future in which none of the envisioned elements of quantum supremacy are achievable (even in principle!) might be both more utopian and more realistic than the traditional quantum vision … and therefore more hopeful and interesting.

      Delete
  3. It may be that the optimal weather model is itself computationally intractable (beyond P or NP). Quite plausibly, one would need to model an exponentially large number of elements to obtain optimal estimates. The real question is: what is the information-theoretically optimal weather prediction?

    ReplyDelete
  4. In 2008, Lance Fortnow's reviewer wrote to me: "A Turing machine cannot diagonalize against itself as the author claims he can do without proof. I urge the author to write the program for this machine to realize the impossibility of this task." The paper contained a Prolog meta-interpreter where every line of a program is a Turing machine that diagonalizes against itself (semantically), using Fuzzy Logic Programming.

    Since Fuzzy Logic Programming is a foreign area for complexity theorists, in 2009/2010 I modified my proof using the lambda-calculus Kleene-Rosser paradox, which Laszlo Babai (ToC) rejected because the Kleene-Rosser paradox recognition problem is undecidable. In 2011, Luca Trevisan (JACM) rejected a similar paper on the grounds that the Kleene-Rosser paradox is undefined.

    In 2013, I proved (in a JACM submission) that

    1. KRP is defined iff KRP is undefined.
    2. KRP is decidable iff KRP is undecidable, over both the lambda calculus and Turing machines.

    ==> All mathematical theories are inconsistent.

    It is (philosophically) trivial to realize that, in order to effectively model an infinite set of numbers, one needs a mathematical language whose alphabet MUST be infinite.

    See http://kamouna.wordpress.com for discussions with the FOM (Foundations of Mathematics) mailing list.

    Best,

    Rafee Kamouna.

    ReplyDelete
  5. Apparently a simple example can lead to very deep (weird?) thoughts, and Lance was right that he shouldn't have used a difficult problem, like the weather, as an example, but rather an easy one, like a cure for cancer. It seems to me that there is indeed a question of whether there exists a model of reasonable size that can be used to predict the weather (looks a bit like an NP problem, doesn't it?). If it does, and P = NP, then we can find it. (Guess one; use it; don't toss it as long as it predicts correctly; that seems a perfect poly-time guess-and-verify strategy, which must work provided such a model exists; a sketch of the idea follows below.)
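    A hedged sketch of that "guess one; use it; don't toss it" strategy, in the spirit of mistake-bounded online learning; the model class and data below are made up purely for illustration.

    ```python
    # Toy sketch of "guess one; use it; don't toss it as long as it predicts
    # correctly": a mistake-driven online strategy over a finite class of
    # candidate models.  If the true model is in the class, the number of
    # model switches is bounded by the size of the class.

    def predict_online(candidates, observations):
        """Yield predictions, switching candidates only after a mistake."""
        pool = list(candidates)
        current = pool.pop(0)
        for x, y_true in observations:
            yield current(x)
            if current(x) != y_true:                   # model failed: toss it
                pool = [m for m in pool if m(x) == y_true]
                current = pool.pop(0) if pool else current

    # A tiny made-up model class: y = a*x + b for small integer a, b.
    models = [lambda x, a=a, b=b: a * x + b for a in range(4) for b in range(4)]
    data = [(x, 2 * x + 1) for x in range(6)]          # truth is y = 2x + 1

    mistakes = sum(p != y for p, (_, y) in zip(predict_online(models, data), data))
    print("mistakes made while learning:", mistakes)
    ```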

    The "Golden Ticket" is a nice, easy going, introductory, popular science book(let) about one of the most important questions of our time. As such, it is an unparallelled success. People shouldn't however take matters (and themselves) too seriously and dig deep into the text and dig up all sorts of ... let me just stop there.

    ReplyDelete
  6. --------------------
    Leen's Great Truth  "People shouldn't however take [complexity theoretic] matters too seriously and dig deep into the text [of complexity-theoretic narratives such as The Golden Ticket]"
    --------------------

    Leen, you have expressed a Great Truth whose dual Great Truth is worth contemplating:

    --------------------
    Leen's Dual  "People should take complexity theoretic matters very seriously and should dig deep into the texts of complexity-theoretic narratives [such as The Golden Ticket]"
    --------------------

    Dick Lipton and Ken Regan's recent Gödel's Lost Letter essay, titled "Move the Cheese", addresses some of the reasons why engineers in particular should embrace Leen's dual. And an even finer Leen-dual essay (as it seems to me) is Wendell Berry's Thomas Jefferson Lecture in the Humanities for 2012, titled "It All Turns On Affection".

    Berry's essay (as I read it) naturalizes and generalizes the foundational themes of Bill Thurston's well-beloved (and much-cited) essay "On Proof and Progress in Mathematics" (arXiv:math/9404236), and applies Thurston's themes broadly and wisely to some of the 21st century's urgent challenges.

    Summary  The STEM community's "cheese" has moved; far too many young STEM "mice" are starving for lack of family-supporting jobs; essayists like Wendell Berry and William Thurston speak to the nourishment that all mice (of all ages and all occupations) require; in particular the traditional STEM cheeses of "quantum supremacy and the complexity zoo" are destined to be supplanted by new, more abundant cheeses; if we are diligent, imaginative, daring, and lucky, perhaps these new STEM cheeses may even prove to be more tasty and nourishing than the traditional STEM cheeses.

    Conclusion  Seek new cheese bravely!

    ReplyDelete
  7. An NWP (numerical weather prediction) model solves a set of partial differential equations for which nobody knows how to compute an analytical solution. Almost all approximation algorithms (explicit and implicit time integrators) are polynomial in nature, but the time required to run them is large because:

    - the timestep is limited by the CFL criterion for explicit methods. Considering that the fluid equations can support acoustic and gravity waves (fast-moving waves), the constraint can be very restrictive. Even if an implicit method is used, accuracy (precipitation and turbulence events can be short-lived) imposes a limit on the time step.

    - the grid required to run a simulation over the whole planet at high resolution (say, less than 1 km in the horizontal and a few meters in the vertical) is very large. The problem gets worse when a lat-lon grid is used because there are singularities at the poles.

    - if you google data assimilation methods like the extended Kalman filter or 4D-Var, you will see that producing initial conditions is probably a place where P = NP would imply an improvement in the models, but P = NP would change nothing about the main problem: the full covariance matrix is quite big and problematic to fit in the memory of a real computer. (Rough numbers are sketched below.)
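    To put rough numbers on those three points, here is a back-of-the-envelope sketch; every figure below (grid spacing, wave speed, level count, variable count) is an illustrative assumption, not a value from any operational model.

    ```python
    # Back-of-the-envelope estimates (illustrative assumptions only) for the
    # constraints above: CFL-limited timestep, global grid size, and the
    # naive storage cost of a full error-covariance matrix.

    dx = 1_000.0                  # assumed horizontal grid spacing: 1 km
    c = 340.0                     # fastest explicit wave: ~sound speed (m/s)
    dt_max = dx / c               # CFL condition for an explicit scheme: dt <= dx / c
    print(f"CFL-limited timestep: about {dt_max:.1f} s")

    earth_surface = 5.1e14        # Earth's surface area in m^2
    levels = 100                  # assumed number of vertical levels
    points = earth_surface / dx**2 * levels
    print(f"grid points: about {points:.1e}")

    variables = 5                 # e.g. u, v, w, temperature, pressure
    n = points * variables        # length of the model state vector
    cov_bytes = n * n * 8         # dense covariance matrix of 8-byte floats
    print(f"full covariance matrix: about {cov_bytes:.1e} bytes")
    ```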

    ReplyDelete
  8. I think the fundamental problem with using the example of weather prediction is that finding the right model in practice is not a problem for which complexity theory is very useful. In practice, finding the right model, and mathematical modeling in general, are not decision problems as we think of them in complexity theory. (Also, how would you verify a priori that you had the "right model" anyway?) Mathematical modeling often requires very specific domain knowledge (e.g., weather, biology) as well as a wise choice of the variables that capture that knowledge and their nature (e.g., discrete, stochastic, etc.). Where complexity theory is really useful in modeling is in helping determine how quickly and efficiently a solution can be found, if at all, with known algorithms, or whether new algorithmic work is necessary, and how many resources, such as memory, CPUs, and time, would be needed to find a suitable solution for the model. This last point on resources is extremely nontrivial because it could be the difference between having a model you can use meaningfully and not having one at all (i.e., it's a beautiful model, but you'll need to wait a few years to get any results).

    ReplyDelete