Friday, March 02, 2012

Turing's Titanic Machine!

In the March CACM, Barry Cooper writes
We quote Peter J. Denning introducing the ACM Ubiquity Symposium on "What is Computation?" as saying: "Researchers in biology and physics have claimed the discovery of natural computational processes that have nothing to do with computers."
With Lance Fortnow distinctly underwhelmed: "Some people outside of computer science might think that there is a serious debate about the nature of computation. There isn't."
As often happens when experts disagree, the truth lies somewhere in between.
No, it doesn't. My extreme point of view: a strong belief in the Church-Turing thesis, that Turing machines capture the true notion of computation now and forever.

What's next? Casting doubt on 1+1=2? Sure no one has yet proved 1+1=3 but that doesn't mean it won't happen someday.

21 comments:

  1. It is by no means obvious to me that solutions to the Einstein equations are going to be computable.

    Do you know of a general proof?

    My guess is that problems in proving this will arise due to the possible formation of singularities. It suggests an amusing strategy for attempting to find candidates for non-computable processes: try to show that the problem of determining whether a singularity forms is non-computable (maybe we can encode the halting problem somehow?)
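
    To make the shape of that strategy explicit, here is a minimal Python sketch; both helper functions are hypothetical placeholders of my own, not anything known to exist.

    ```python
    # Sketch of the proposed reduction: IF a Turing machine could be encoded into
    # initial data whose evolution forms a singularity exactly when the machine
    # halts, AND singularity formation were decidable, THEN the halting problem
    # would be decidable, contradicting Turing's theorem.
    # Both helpers below are hypothetical placeholders.

    def encode_tm_as_initial_data(tm_description: str):
        """Hypothetical encoding of a Turing machine into initial data."""
        raise NotImplementedError("no such encoding is known")

    def singularity_forms(initial_data) -> bool:
        """Hypothetical decider for singularity formation."""
        raise NotImplementedError("assumed only for the sake of argument")

    def halts(tm_description: str) -> bool:
        # If both hypothetical pieces existed, this would decide halting.
        return singularity_forms(encode_tm_as_initial_data(tm_description))
    ```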

  2. I understand that you may believe that all physical processes can be simulated on a Turing machine, but can you claim that this is an analytic proposition akin to arithmetic?

  3. "Sure no one has yet proved 1+1=3 but that doesn't mean it won't happen someday."
    I know many such "proofs", but all of them divide by 0 at some point.

    If you're speaking of correct proofs and you accept the Peano axioms as given, how should it be possible to prove 1 + 1 = 3? (With the common meaning of the symbols, of course)
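
    For what it's worth, the two facts being joked about can be checked mechanically; here is a minimal Lean 4 sketch (plain Lean 4, no extra libraries assumed): over the natural numbers, 1 + 1 = 2 holds by computation and 1 + 1 = 3 is refutable.

    ```lean
    -- Over the natural numbers: 1 + 1 = 2 by computation, and 1 + 1 = 3 is refutable.
    example : 1 + 1 = 2 := rfl
    example : 1 + 1 ≠ 3 := by decide
    ```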

  4. @Martin Thoma: It's called sarcasm.

  5. Yes, and the universe is a set of nested spheres, with the sphere of fixed stars around 80 million miles away.

    Replies
    1. all science is right until it's wrong

  6. @ Martin Thoma: It has been proven already that all mathematical systems are inconsistent. It seems you are not a regular follower of this blog. Check: http://kamouna.wordpress.com.

    Best,
    Rafee Kamouna.

    Replies
    1. Please don't go to that kook site and its one post.

  7. Well, there are all sorts of models of computation that are fit for specific purposes, for example those which model concurrency, etc. Theoretically a lot of things could be made equivalent, but some are much easier to work with for practical reasons.

    That said, I don't know who Denning is, but from what he proposes he sounds like a big crank to me. However, I'm not too surprised: "Ubiquity" is certainly not the only "new" area invented to create "more jobs". This is what everyone does nowadays; what is the big deal?

  8. "Researchers in biology and physics have claimed the discovery of natural computational processes that have nothing to do with computers"

    My guess is that by this they really mean "nothing to do with digital computers consisting of a CPU and motherboard and computer chips and keyboard and..."

    A computer is what computes: that is tautological. If Denning is saying that physicists/biologists have found things that contradict the Church-Turing thesis, then that'd be a huge result, far bigger than Wiles proving FLT. I highly doubt that's what he meant.

  9. Oh boy! A chance to apply Forder's Principle:

    -------------
    "The virtue of a logical proof is not that it compels belief but that it suggests doubts. The proof tells us where to concentrate our doubts."

          — Henry George Forder (1927)
    -------------

    If we assign central relevance to Lance's Postulate, that "Turing machines capture the true notion of computation now and forever", perhaps we can constructively adjoin to it Forder's Corollary: "Computation does not capture the true notion of cognition, now or ever."

    Forder's Corollary helps us appreciate how it is that we have conceived abundant, strong, rigorous insights regarding Turing Machines, and yet our conceptions regarding Strong AI remain sparse, weak, and heuristic.

    Replies
    1. Well, we know from TCS that it's hard to reconstruct a circuit simply by feeding it inputs and watching its outputs. So in that sense, "brains \subset Turing machines" is 100% compatible with the Church-Turing-Deutsch thesis.
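
      A much cruder, purely counting version of that point (the TCS statement is a stronger, computational one) can be seen in a small Python sketch: to pin down an arbitrary Boolean function on n inputs from input/output behaviour alone, you must see all 2^n rows of its truth table, out of 2^(2^n) candidates.

      ```python
      # Counting version of the black-box point: identifying an arbitrary Boolean
      # function on n inputs from queries alone requires the whole truth table.
      for n in (2, 3, 4, 5):
          rows = 2 ** n             # queries needed to fix the function exactly
          candidates = 2 ** rows    # number of distinct Boolean functions on n bits
          print(f"n={n}: {rows} truth-table rows, {candidates} candidate functions")
      ```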

    2. Aram, you know I love to triangulate tough problems (that's why Abraham Lincoln, Dwight Eisenhower, and Bill Clinton are three of my favorite politicians). So let's consider three TM models of human cognition:

      TM1: The TM1 machine stores a set of rules, and executes a logic engine for making deductions from those rules. The main problem with TM1 models is that they don't (yet) exhibit anything like human cognition.

      TM2: The TM2 machine stores a set of Hamiltonian potentials (classical or quantum), and an engine to integrate those potentials into trajectories. The problem with TM2 models is that they're brute-force cellular-scale biophysics: they don't tell us how brains work in any integrative sense. As Hamming said, "The purpose of computing is insight, not numbers."

      TM3: ??? Profit! :)

      Seriously, in Strong AI (as in many STEM fields) new paths forward would be very welcome. Perhaps this is one of those instances in which, as Tony Zee says, "a formerly cherished concept has to be jettisoned."

    3. Hmmm … maybe I should add, that we can draw upon the CT work of Juris Hartmanis, and combine it with elements of Terry Tao's analysis of the Puzzle of the Blue-Eyed Islanders, to construct TM3's via a game called The Balanced Advantage Newcomb Game … a game that is inspired by (what else?) Newcomb's Paradox.

      The implication for strong AI of The Balanced Advantage Newcomb Game is a pun upon a saying of Wittgenstein:

      --------------
      "If a TM3 can speak, then we cannot explain how it can speak."
      --------------

    4. I find these comments to be bizarre. After navigating through the misleadingly discursive mishmash, the principal objection seems to be that even if one finds a materialist model for cognition, it would not give us any "insight". This is tangential if not irrelevant to the question of whether brains \subset Turing Machines in the sense of Turing-Church-Deutsch.

      My personal view on this question is that we still know too little about the brain to jettison our "formerly cherished" materialist approach, and evidence from neurobiology strengthens this position every day.

    5. I'm not sure whether the following point of view qualifies as "bizarre." Juris Hartmanis instead uses the word "radical", in arguing for a complexity-theoretic formalism for which any engineer has natural sympathy:

      -------------------
      "Results about complexity of algorithms change quite radically if we consider only properties of computations which can be proven formally. … Results about optimality of all programs computing the same function as a given program will differ from the optimality results about all programs which can be formally proven to be equivalent to the given program."
      -------------------

      Here Hartmanis' starting conception (which he discusses at length) is that even so basic a concept as "membership in P" in general cannot be proven formally; hence oracles are essential to the definition of the class.

      If we embrace Juris' idea, which greatly restricts the role of oracles in complexity-theoretic proofs, the end result is a reclassification of the entire Complexity Zoo. The hope is that in the resulting zoo, separations among its inhabitants can be formally proved, thus bringing closure to complexity theory's long Groundhog Day.

      The Balanced Advantage Newcomb Game was conceived and constructed with a view toward allowing us a glimpse of one small corner of a Hartmanis-style alternative Complexity Zoo — in which there is no Newcomb Paradox.

      Elevator Summary: When the standard complexity classes are restricted to include only algorithms whose membership is provable, the problem of proving separations alters radically — and it may be that these alterations are enabling of proofs. And yet the Balanced Advantage Newcomb Game shows us that there exist deterministic TMs, whose computational output we regard as "human", that reside outside these classes.

    6. LOL … to express the same point in one sentence, the chief tenet of Fortnow-ism — “Every Complexity Zoo species is a TM” — is entirely consistent with the chief tenet of Hartmanis-ism — “The Zoo's natural classes are oracle-independent.”

      Moreover, it's excellent news for younger STEM researchers that in the celebrated phrase of Wolfgang Pauli: Only technical details are missing!  :)

  10. I look at discussions on hypercomputation and I find the following relevant:

    http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf

    I like the article because it brings to mind such impossible tasks as squaring the circle. In that case it is possible to create a computable approximation, as mentioned in the Wikipedia article:

    "Approximate squaring to any given non-perfect accuracy, in contrast, is possible in a finite number of steps, since there are rational numbers arbitrarily close to pi" http://en.wikipedia.org/wiki/Squaring_the_circle

    I think this points to the practical possibility that if one reduces the precision of a particular logic, then it is possible to come to answers to a problem that may or may not be true, but can be quickly verified. Self-verifying theories are possible when arithmetic weaker than Peano arithmetic is allowed (http://en.wikipedia.org/wiki/Self-verifying_theories).

    This may point to a suitable model of the human mind as one that requires a certain level of imprecision, suggesting that biological systems are not in any way superior calculators, but just more resourceful, in the sense that it can be more efficient to quickly approximate and verify than to come to an exact solution directly.
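
    To make the "finitely many steps" point concrete, here is a minimal Python sketch using the standard fractions module (note that math.pi is itself only a double-precision float, so this illustrates the idea only up to that precision):

    ```python
    # Rational numbers arbitrarily close to pi can be produced in finitely many steps.
    from fractions import Fraction
    import math

    for digits in (2, 4, 8):
        approx = Fraction(math.pi).limit_denominator(10 ** digits)
        print(f"{approx}  (error ~ {abs(float(approx) - math.pi):.1e})")
    ```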

  11. Before you heard about quantum computers, how strongly did you believe (if at all) the strong Church-Turing thesis? (i.e. that randomized Turing machines can compute everything with not much more effort than nature uses.)

  12. This is a very different question: the STRONG Church-Turing thesis relates to polynomial-time equivalence.
    It originally did NOT involve randomization.

    I strongly recommend that folks read both Lance's and Davis' papers. They make their points clearly, deal with the nonsense almost charitably, and do a much better job of explaining why it is nonsense than I will below.

    The Church-Turing thesis is a postulate. It is not logically impossible to suppose that there is a plausible alternative. Unfortunately, all alternatives suggested so far are plain wrong.

    One can feed noncomputable inputs to perfectly nice systems and -- surprise! -- the outputs may be noncomputable. The same effect can be obtained by building noncomputable parts into the system. Again, hardly a great discovery. All of this can be obfuscated by the fact that every finite initial segment of a noncomputable number is finite, and hence computable.
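
    A trivial Python sketch of that last point: any fixed finite prefix of a noncomputable sequence is computable, because a lookup table is a perfectly good program for it. The bits below are made-up placeholders, not the prefix of any particular noncomputable number.

    ```python
    # Any fixed finite prefix is computable: hard-code it.
    PREFIX = [0, 1, 1, 0, 1, 0, 0, 1]  # placeholder bits, standing in for the
                                       # first bits of some noncomputable sequence

    def prefix_bit(n: int) -> int:
        """Return the n-th bit of the fixed finite prefix by table lookup."""
        return PREFIX[n]

    print([prefix_bit(i) for i in range(len(PREFIX))])
    ```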

    Another set of objections is that computers do things that are not easily described as computations: operating systems, face recognition, automatic driving. Still, the underlying computational process can be described by a Turing machine. Analogy: we know that proteins fold somehow, yet we do not have a good, efficient computational model for it. Biologists have learned from the past mistakes of "Natural History," when "phlogiston" was invoked to "explain" fire, and "spontaneous generation" to "explain" life. Biologists do not invoke mysterious entities to explain protein folding. Why would we, CS people, want to introduce some form of "post-Turing" model to explain processes that can, in principle, be modeled by Turing machines?

    A mathematical relative of the "phlogiston computing" models discussed is the use of precise models (for example, communicating asynchronous finite automata) whose behavior is noncomputable. The argument then is that since such systems can be built, they are a model that is "more powerful" than Turing machines. This seems plausible, until one realizes that
    1. Any finite initial segment of such a computational process -- in other words, anything we can actually compute with the device -- is also computable in the Church-Turing sense.
    2. Turing machines have noncomputable behavior in this sense: the Halting Problem is "computable" in the same sense as the behavior of these devices.

  13. For my part, quantum computers are attempting to bring complex numbers into computation in a natural way. This raises the question of our ability to handle infinities. Abstractly I can think of a Turing machine as a unit projective sphere (Riemann sphere) where a vector indicates the current machine state and its head position (e.g. a complex number). The action is simply an operation on the complex number. Since Turing thought that operations were inseparable from the state information, one can see that a Turing machine is an algebraic structure.

    One encounters the same problem with infinities that we see in quantum mechanics, which requires partitioning the complex plane. In the Turing machine, that partitioning is implicit, since we typically use countable sets of cells and symbols.

    One could possibly propose that we somehow build a Turing machine with a head and tape of infinite precision, but there would always be an uncountable set of inputs and outputs that would not be representable in a nice reduced form like pi and e, and would thus take an infinite amount of time to represent. This gets to the point of non-computable functions.

    If we enumerate the partitions of the complex plane, and then try to use those enumerated elements to describe every number on the plane, we fail. There simply are not enough elements to index every number (à la Cantor). Those numbers are simply not computable. If one decides to incorporate a new number, then one is effectively repartitioning.
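
    Stated a little more carefully, the counting argument behind that paragraph is just the following (a short LaTeX note; amssymb assumed for \mathbb):

    ```latex
    % Programs are finite strings over a finite alphabet, so whatever we can
    % index by programs is countable, while the complex plane is not.
    \[
      |\{\text{programs}\}| = |\{\text{finite strings}\}| = \aleph_0
      \;<\; 2^{\aleph_0} = |\mathbb{C}| .
    \]
    % Hence the indexable (computable) points form a countable subset of an
    % uncountable plane; almost every point has no finite description.
    ```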

    So here lies the rub. Any practical realization of hypercomputation requires some type of partitioning which effectively reduces the hypercomputer to a normal computer. This has nothing to do with a lack of imagination but simple physical realities.

    For the quantum computer, while the computer itself might be in a superposition of states, the ultimate input and output are definite finite states, so in principle, given enough time, there is some set of operations that a non-quantum computer can perform that would reach the output state based on any input state.
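
    As a toy illustration of that last paragraph, here is a minimal Python/NumPy sketch that classically tracks the full state vector of a one-qubit circuit (a single Hadamard gate) and recovers the measurement statistics; the cost of this brute-force approach simply grows exponentially with the number of qubits.

    ```python
    # Brute-force classical simulation of a tiny quantum circuit:
    # track the full state vector, then apply the Born rule.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.array([1.0, 0.0])                   # start in |0>
    state = H @ state                              # apply the gate classically
    print(np.abs(state) ** 2)                      # measurement probabilities: [0.5 0.5]
    ```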

    Those are just my thoughts at the moment.
