Monday, December 05, 2011


On Saturday, Terrence Fine gave a talk on probability at a workshop at Northwestern. Before the talk he asked who thought probability was subjective (an individual's degree of belief in the chance of an event) and who was a frequentist (a probability represents what happens when an experiment is repeated many times). Someone noticed I didn't raise my hand either time, so I said that I had a computational point of view of probability, since I have a computational point of view of everything.

I didn't mean computation as in Turing machines but as a process: a process that creates events according to some distribution. How does this process work? I don't care. That's the beauty of computational thinking: we abstract out the notion of probability and just make use of it. I've written papers on quantum computation with no idea of the physical processes that create entanglement. I study nondeterministic computation, which has no physical counterpart at all. Probability works the same way, at least for me.

My contribution to the workshop was to explain Kolmogorov complexity, the universal distribution and its relationship to inductive learning to the mostly economics crowd. Perhaps I could have explained things a bit more clearly, as one econ student said to me afterwards, "You lost me at prefix free."
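For anyone else lost at "prefix free": a code is prefix-free if no codeword is a prefix of another, so a stream of codewords can be decoded greedily with no separators, which is what makes such codes the natural setting for Kolmogorov complexity and the universal distribution. A toy illustration (the particular code and helper functions here are my own, not from the talk):

```python
# A prefix-free code: no codeword is a prefix of any other.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def is_prefix_free(codewords):
    """Check the prefix-free property by sorting: if any codeword is a
    prefix of another, the pair must appear adjacent in sorted order."""
    ws = sorted(codewords)
    return all(not y.startswith(x) for x, y in zip(ws, ws[1:]))

def decode(bits, code):
    """Greedy decoding: works unambiguously exactly because the code
    is prefix-free."""
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)

print(is_prefix_free(code.values()))  # True
print(decode("010110111", code))      # abcd
```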


  1. What an odd question. And everyone else picked one or the other? I wouldn't have picked either option either.

  2. You are free to define probability in any way you like, but the important thing is to show that the concept that you use is robust enough to support a statistical methodology (Savage had it right when he called his book "The Foundations of Statistics" rather than "The Foundations of Probability"). Both the subjectivists and frequentists have methodologies that are derived from their concept of probability, which allow you to draw inferences from data, e.g. what can you say about the bias of a coin after observing several coin flips? Preferably, you should be able to infer, based on your definition of probability, that the statistical methods that we actually use in science lead to at least approximately correct conclusions. Much of the debate in the foundations of probability is concerned with exactly this question.

    Without a statistical methodology, an interpretation of probability is useless. You might as well say that the moon is made of various kinds of cheese and that the probability of any event, whether or not it involves the moon, is a ratio of the volume of one of the kinds of cheese to the total volume of the moon. That satisfies the Kolmogorov axioms as much as any other definition, but it is useless because it does not allow me to make inferences based on data (and because the moon is not actually made of cheese).

    Therefore, I would like to ask you how we can understand statistics from your computational point of view?

    BTW, Terrence Fine's 1973 book on the foundations of probability is a classic, well worth reading.
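The coin-bias question in the comment above can be made concrete. A sketch of how the two methodologies answer it (the particular numbers, the uniform prior, and the grid approximation are my choices for illustration):

```python
from math import comb

# Binomial likelihood of seeing `heads` heads in `flips` flips
# of a coin with bias p.
def likelihood(p, heads, flips):
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

heads, flips = 7, 10

# Frequentist answer: the maximum-likelihood estimate of the bias.
mle = heads / flips  # 0.7

# Subjectivist answer: a posterior over a grid of candidate biases,
# starting from a uniform prior.
grid = [i / 100 for i in range(101)]
weights = [likelihood(p, heads, flips) for p in grid]
posterior_mean = sum(p * w for p, w in zip(grid, weights)) / sum(weights)
# With a uniform prior the exact posterior mean is (heads+1)/(flips+2) = 2/3.
```

The point of the comment stands: both answers are usable inference procedures, and each is justified by its own reading of what the probability means.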

  3. Isn't your computational point of view just a formalization of the frequentist position? I.e., if you have a process $P$ that maps $n$ "random coins" to some space of outcomes, then the probability of outcome $o$ is exactly $|P^{-1}(o)|/2^n$.
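The counting formula in this comment can be computed directly for small $n$ by enumerating all coin-flip strings (the `majority` process is my toy example):

```python
from itertools import product

def prob(process, n, outcome):
    """P(outcome) on the counting view: the fraction of the 2^n
    coin-flip strings that `process` maps to `outcome`."""
    hits = sum(1 for coins in product([0, 1], repeat=n)
               if process(coins) == outcome)
    return hits / 2 ** n

# Toy process: three fair coins, outcome is the majority bit.
def majority(coins):
    return int(sum(coins) >= 2)

print(prob(majority, 3, 1))  # 0.5
```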

  4. Fine's question lies within the context of philosophical theories of probability (Donald Gillies has a book on the subject). Mathematicians (and computer scientists) are not concerned with these questions, because their starting point is a "given" probability space, which consists of a set, a σ-algebra over the set, and a "given" probability measure P. The mathematical methods are then used to find, for example, the probability P(A) of some event A, and with that the task of the mathematician is done.

    After that, P(A) should be interpreted using the same theory that is used to interpret P.
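A minimal sketch of such a "given" probability space in code, assuming a finite sample set so the σ-algebra can implicitly be the full power set (the fair-die measure is my example):

```python
from fractions import Fraction

# A "given" finite probability space: sample set omega and a measure
# on its points; the sigma-algebra is implicitly the full power set.
omega = range(1, 7)                           # a fair six-sided die
P_point = {w: Fraction(1, 6) for w in omega}  # the "given" measure

def P(event):
    """Probability of an event, i.e. a subset of omega."""
    return sum(P_point[w] for w in event)

print(P({2, 4, 6}))  # 1/2
```

As the comment says, the computation of P(A) is where the mathematics ends; what the number 1/2 *means* is left to whatever theory interprets P.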

  5. Another way to describe the ideology of belief in probability is to repeat Suppes' argument in 1956 that, "Because of the many controversies concerning the nature of probability and its measurement, those most concerned with the general foundations of decision theory have abstained from using any unanalyzed numerical probabilities, and have insisted that quantitative probabilities be inferred from a pattern of qualitative decisions."

    Arnold Koslow adapted this paraphrase to Newton, who presupposed concepts of magnitude, and of relation in respect to magnitude, understanding by number the abstract relation between any given magnitude and another magnitude of the same kind taken as a unity.

    "Because of the many controversies concerning the nature of length (mass) and its measurement those most concerned with the general foundations of geometry (mechanics) have abstained from using any unanalyzed numerical lengths (masses), and have insisted that quantitative lengths (masses) be inferred from a pattern of qualitative information."

    The understanding of mass then is like that of probability now: no one really understands the nature of probability.