Bohr: Algebra's like sheet music, the important thing isn't can you read music, it's can you hear it. Can you hear the music, Robert?
Oppenheimer: Yes, I can.
I can't hear the algebra but I feel the machine.
A Turing machine has a formal mathematical definition but that's not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. I don't really visualize or hear the machine, but it sings to me, I feel it humming along, updating variables, looping, branching, searching until it arrives at its final destination and gives us an answer. I know computers don't physically work exactly this way, but that doesn't stop my metaphorical machine from its manipulations.
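That step-by-step hum has a small concrete counterpart. Here is a minimal one-tape Turing machine interpreter in Python, a sketch only: the state names and the example transition table are illustrative, not anything from the formal definition.

```python
# A tiny one-tape Turing machine interpreter (illustrative sketch).
def run_tm(delta, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        state, write, move = delta[(state, sym)]  # branch
        cells[head] = write                       # update
        head += 1 if move == "R" else -1          # move the head
    return "".join(cells[i] for i in sorted(cells)).rstrip(blank)

# Example machine: flip every bit, halt at the first blank.
delta = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
assert run_tm(delta, "0110") == "1001"
```

The loop is exactly the hum: read, branch, write, move, repeat until the machine arrives at its answer.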
I have felt this way since I first started programming in Basic in the 1970s, or maybe even before that, on calculators, using the phone, or sending a letter. I could feel the process moving along.
So when I took my first CS theory class, I could feel those finite automata moving along their states and the PDAs pushing and popping. I even felt the languages pumping. After graduate complexity, I knew I had found my calling.
And that's why I prefer Kolmogorov complexity arguments to information theory. Information doesn't sing to me, but I can see the compression.
As computational complexity has evolved, focusing more on combinatorics, algebra, and analysis, and often on models that have little to do with computation, I sometimes find myself lost amid the static math. But give me a Turing machine and I can compute anything!
"... Information doesn't sing do me but I can see the compression."
sing to me or do me?
Is this a typo or am I missing something? It's in the (n-1)th paragraph.
great read!
I meant "to me". Fixed.
'Feeling the machine' is awesome. I believe this 'operational' way of doing mathematics on a completed, infinitely large object/tape/matrix (that is, a procedural way of thinking, yet with a static ontology) is due to Turing et al., and especially Cantor.
In my understanding, Cantor's famous diagonal argument contains a procedural component that explains how to construct the anti-diagonal. It pertains to a completed matrix, which represents a full-fledged bijection between finite strings and bit sequences. This, essentially, presents a Platonic backdrop.
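The procedural component the comment describes is easy to make concrete. A minimal Python sketch, assuming we only ever look at the first n rows of the (infinite) matrix:

```python
def anti_diagonal(rows):
    # Flip the i-th bit of the i-th row: the result differs from
    # every listed row in at least one position, so no enumeration
    # of bit sequences can contain its own anti-diagonal.
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 0, 0],
        [1, 1, 1],
        [0, 1, 0]]
d = anti_diagonal(rows)
assert all(d[i] != rows[i][i] for i in range(len(rows)))
```

The construction is the procedural part; the claim that the completed matrix lists all sequences is the Platonic backdrop it plays against.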
So, arguably, we see Cantor -- and, later, Turing even more so -- apply causal reasoning (= feeling the machine) in a non-causal (Platonic) landscape.
Fusing the causality of a (mathematical) machine with the non-causality of (mathematical) Platonism is something Cantor was permitted to do, since he had idealistic leanings -- according to at least one Cantor scholar (i.e., Jane).
We need the static backdrop to diagonalize out of a completed class of, say, linear bounded automata. We employ the procedural approach to construct a mathematical machine situated beyond the full-fledged class.
This post wonderfully captures the essence of abstract mathematics in comp. science.
"Cantor's famous diagonal argument contains a procedural component"
All mathematicians are somehow "cheating": they use procedural arguments while pretending that maths ontology is timeless.
This is especially clear with Russell's paradox:
Okaaay, here is the set of sets which don't contain themselves (which sounds like a pretty natural concept). Oh! Dang! Where do I stash it now?
CHEATING with time!!!
Without such "cheating" you could say goodbye to all diagonalizations, including Gödel incompleteness and the halting problem. :-D
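The halting-problem diagonalization mentioned here can be sketched in a few lines. This is purely illustrative: the names are hypothetical, and halting is modeled as a boolean rather than an actual infinite loop.

```python
# Sketch: against ANY claimed halting-decider, build a program that
# does the opposite of the decider's prediction about itself.
def claimed_halts(prog, arg):
    # stand-in for a supposed decider; this one just guesses "halts"
    return True

def diag(prog):
    # halt exactly when claimed_halts predicts non-halting
    return not claimed_halts(prog, prog)  # True means "would halt"

prediction = claimed_halts(diag, diag)  # the decider says: halts
behavior = diag(diag)                   # diag does the opposite
assert prediction != behavior           # the decider fails on the diagonal
```

Same shape as Cantor's argument: the decider's answers form the matrix, and `diag` is the anti-diagonal it can never get right.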
I don't disagree at all. I eschew the word 'cheating,' as I don't want to choose sides when studying the history of maths.
I believe Brouwer would have agreed with the "cheating" connotation; he went for purely procedural maths only (to put it very crudely). Wittgenstein and Poincaré are other examples.
As a comp. scientist, however, I can appreciate the many diagonal arguments in the literature.
Diagonalization is a constructive process. Going from double negative to positive is the nonconstructive part.