Friday, January 09, 2026

Computational Depth

I'm posting from Oxford University where I will be spending the "Hilary Term" (through late March) as a visiting fellow at Magdalen College. If you are relatively local, reach out if you'd like to connect.

I plan to get back into research after thirteen years of administration, working primarily with Rahul Santhanam and his group. I haven't had a significant sabbatical or leave since Amsterdam 30 years ago, which is what comes from changing jobs too often.

Today I'd like to talk about a 2006 paper on a topic I first thought about in Amsterdam, one that will likely play a role in this visit: Computational Depth: Concept and Applications by Luis Antunes, Dieter van Melkebeek, Vinod Variyam and myself.

In Amsterdam, I was hosted by Paul Vitányi and Harry Buhrman at CWI, and naturally worked on Kolmogorov complexity, the algorithmic study of randomness, as well as various problems in computational complexity.

Very simple strings don't have much information. Completely random strings have maximal Kolmogorov complexity but aren't particularly useful either, as we can create our own random strings by flipping coins. Is there some way to measure useful information?
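To put rough numbers on those two extremes (standard Kolmogorov complexity facts, not notation from our paper):

K(0^n) ≤ log n + O(1), since a program only needs to specify n,

while most strings x of length n have K(x) ≥ n - O(1), since there are too few short programs to describe them all.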

In particular, I had this question: If NP reduces to a sparse set then it has polynomial-size circuits. If NP reduces to a random set then it has polynomial-size circuits. Is this just a coincidence or is there a broader principle here?

Charlie Bennett developed a notion of logical depth with a similar philosophy, but it's a difficult definition to work with. Luis Antunes, a student from Porto, came to work with me at NEC Research in New Jersey when I was there around the turn of the century. We developed the simpler notion of computational depth as the difference of two Kolmogorov measures, such as time-bounded Kolmogorov complexity minus traditional Kolmogorov complexity. This difference is small both for very easy strings and for fully random strings.

With then NEC postdoc (and now Nebraska professor) Vinod Variyam, we found a connection to finding SAT witnesses, and with DIMACS and IAS postdoc Dieter van Melkebeek (now a Wisconsin professor), we came up with the notion of shallow sets, which generalize both random and sparse sets; as hoped, if NP reduces to a shallow set then it has polynomial-size circuits. Luis would title his PhD thesis "Useless Information".
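In symbols, and glossing over the variants in the paper, the basic form is

depth^t(x) = K^t(x) - K(x),

where K^t(x) is the length of the shortest program that prints x within t(|x|) steps and K(x) is the unbounded Kolmogorov complexity. For a very easy string both terms are small; for a random string both are close to |x|; either way the difference stays small. Large depth signals a string with a short description that takes a long time to unpack, which is one way to capture "useful" structure.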

Luis and I would later find a nice connection between computational depth and average-case complexity.

Computational Depth had a resurgence of popularity with the rise of meta-complexity, with 50 of the original paper's 90 citations coming since 2020.

So I hope to find new applications of computational depth working with Rahul, who's at the center of meta-complexity. Maybe computational complexity can help us understand machine learning better, along the lines of my frozen concentrate analogy, where the learning process removes the randomness, leaving structured information behind.
