The following is a conversation between Clyde Kruskal and Bill Gasarch.

CLYDE: Bill, a student, Ian Roberts, asked me if there are any non-primitive recursive functions that people actually want to compute. (See here for a definition of Primitive Recursive. Also see here for Ackermann's function which is computable but not primitive recursive.)

BILL: Off hand I would say no, but non-prim rec functions DO come up in natural ways.

CLYDE: That's not what he asked.

BILL: Even so, that's what I will blog about. OH, one more thing, why does he want to know?

CLYDE: Ask him directly. (BILL then emailed Ian)

BILL: Good question about NATURAL problems that are not prim-rec, but why do you want to know?

IAN: Meyer and Ritchie proved (see here) that if you limit the control flow to IF statements and FOR loops with finite iterators, then the class of functions you can implement is exactly the primitive recursive functions. So I was wondering if I could avoid ever using WHILE loops, since they are harder to reason about.

BILL: YES you COULD avoid ever using WHILE LOOPS; however, there are times when using them is the best way to go.

CLYDE: Bill, when was the last time you wrote a program? And LaTeX does not count.

BILL: Good point. Rather than take my word for it, let's ASK my readers. I'll add that to my list of RANDOM THOUGHTS ABOUT non-prim rec functions.

SO, random thoughts on non-prim rec functions

0) Are there problems for which writing a WHILE loop is the way to go even though they are not needed?

1) HALT is not computable, hence not prim rec, and we want to compute it. Oh well. All future examples will be computable.

2) QUESTION: Are there simple programming languages so that HALT restricted to them is decidable but not primitive recursive? I suspect one could contrive such a language so I ask for both natural and contrived examples.

3) The Paris-Harrington Numbers from Ramsey Theory are computable and grow MUCH faster than any prim rec function. Indeed, they grow much faster than Ackermann's function. See the Wikipedia Entry.

4) The numbers from the Kanamori-McAloon Theorem in Ramsey theory are computable and grow MUCH faster than any prim rec function. Indeed, they grow much faster than Ackermann's function. See the Wikipedia Entry. They are not as well known as the Paris-Harrington numbers. Hopefully this blog post will help with that.

5) Goodstein's Theorem yields numbers that are computable and grow MUCH faster than prim rec. Indeed, they grow much faster than Ackermann's function. See Wikipedia Entry and/or my blog post on them.

6) QUESTION: Of PH, KM, GOOD, which grows fastest? Second fastest? Third fastest? Perhaps some are tied.

7) QUESTION: We know that GO and CHESS have very high complexity, but are still prim recursive. We know that there are some math games (e.g., the Hydra game) that are not prim recursive. Are there any FUN games whose complexity is NOT prim recursive?

8) Tarjan's UNION-FIND data structure has amortized complexity roughly O(n alpha(n)), where alpha(n) is the inverse of Ackermann's function. This is also a lower bound. See the Wikipedia entry on the disjoint-set data structure.

QUESTION: Is Tarjan's UNION-FIND data structure actually used? It can be used to speed up Kruskal's MST algorithm, but that just pushes the question back one step: Is MST a problem people really want to solve? I asked Lance and he asked chatty. For the results of that see here. The answer seems to be YES, though I wonder if the speedup that UNION-FIND gives is important. Union-Find is also used in the Hoshen-Kopelman Algorithm for (to quote Wikipedia) *labeling clusters on a grid, where the grid is a regular network of cells, with the cells being either occupied or unoccupied*.

Other issues: (a) Is UNION-FIND hard to code up? Lance tells me that it is easy to code up. (b) Is the constant reasonable?
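For the UNION-FIND question: here is a minimal sketch of the data structure (union by rank plus path halving). This is an illustrative implementation, not Tarjan's code, and certainly supports Lance's claim that it is easy to code up.

```python
class UnionFind:
    """Disjoint-set sketch: m operations on n elements take O(m * alpha(n))
    amortized time, where alpha is the inverse Ackermann function."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False            # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx         # attach the shorter tree to the taller
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(0) == uf.find(4))  # False
```

The constant is indeed tiny: two short arrays and a few comparisons per operation.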

9) Is the Ackerman Security Company called that because they claim that breaking their security is as hard as computing Ackermann's function? Unlikely: they spell it with only one n at the end. Even so, my class believed me when I told them that.

10) The finite version of Kruskal's Tree Theorem YADA YADA YADA is not prim rec. See the Wikipedia Entry.

CLYDE: You can't YADA YADA YADA my Uncle Joe!

BILL: It's my party and you'll cry if you want to, cry if you want to, cry if you want to. (See here for Lesley Gore's song It's My Party (and I'll cry if I want to), which is not about non-primitive recursive functions. Also see her sequel Judy's Turn to Cry. A much better song with a better message for teenagers is her You Don't Own Me.)

CLYDE: Oh well. However, I'll make sure to tell that example to my class.

(ADDED LATER: Quanta had a nice article on non-primrec functions recently here. The problem they are talking about is discussed in the comments.)

The reachability problem for chemical reaction networks (equivalently, Petri nets or vector addition systems or "commutative semigroups") is decidable but not in any primitive recursive time bound (shown independently in two FOCS 2022 papers: https://arxiv.org/abs/2104.13866, https://arxiv.org/abs/2104.12695). It's the reachability problem on directed graphs defined by chemical reaction networks, where nodes are configurations and edges are determined by reactions.

For instance, starting with configuration {1X} (1 copy of X), with possible reactions (1) X --> Y+Y+Y and (2) Y --> A+Z+Z, we can reach {3A,6Z} by doing reactions 1,2,2,2, but we cannot reach {6Z}.
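That example can be checked mechanically with a naive breadth-first enumeration of configurations. This is only a sketch for toy instances: it terminates here because this particular network's reachable set is finite, and it is nothing like the general decision procedure from the papers above (the `max_size` cap is an arbitrary safeguard I added).

```python
from collections import deque

def reachable(start, reactions, max_size=50):
    """BFS over configurations (dicts of species -> count), returning the set
    of reachable configurations as frozen sorted tuples. Only suitable for
    toy networks whose reachable set is small."""
    def freeze(cfg):
        return tuple(sorted((s, n) for s, n in cfg.items() if n > 0))

    seen = {freeze(start)}
    queue = deque([start])
    while queue:
        cfg = queue.popleft()
        for consumed, produced in reactions:
            # a reaction is applicable if every consumed species is present
            if all(cfg.get(s, 0) >= n for s, n in consumed.items()):
                nxt = dict(cfg)
                for s, n in consumed.items():
                    nxt[s] -= n
                for s, n in produced.items():
                    nxt[s] = nxt.get(s, 0) + n
                key = freeze(nxt)
                if key not in seen and sum(nxt.values()) <= max_size:
                    seen.add(key)
                    queue.append(nxt)
    return seen

rxns = [({'X': 1}, {'Y': 3}),          # X --> Y+Y+Y
        ({'Y': 1}, {'A': 1, 'Z': 2})]  # Y --> A+Z+Z
R = reachable({'X': 1}, rxns)
print((('A', 3), ('Z', 6)) in R)  # True: fire reactions 1,2,2,2
print((('Z', 6),) in R)           # False: every Z comes with an A
```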

I find this to be a very natural problem, and it comes up repeatedly in studying these systems. (e.g., in verification of distributed systems: https://arxiv.org/abs/1703.04367)

For such natural sounding problems, the results seem to be very recent. Is there a reason for that?

Yeah, because they are really difficult. :)

It's fairly straightforward (once you've wrapped your head around the model and how it can simulate register machines, which can in turn simulate Turing machines) to show that the problem is PSPACE-hard, by directly simulating register machines with register counts bounded by an exponential function of the number of reactions, and register machines with exponential counts can simulate polynomial-space Turing machines.**

Lipton in the 1970's (https://www.cs.yale.edu/publications/techreports/tr63.pdf) showed a more complex simulation where the registers can have *doubly* exponential counts, which can simulate exponential-space Turing machines. For many decades hardness results were stuck there, but in the past few years there were some breakthroughs culminating in the two FOCS papers I mentioned.

It's perhaps even more difficult to show that the problem is decidable. That result has a contentious history, with many people claiming credit. Jerome Leroux won the Best Paper Award at the Turing Centenary Conference in 2012 for giving a simpler proof of this result. (https://easychair.org/publications/paper/Blr) He also discusses the history of the various proofs of this result in the intro of that paper.

**Here's an argument for PSPACE-hardness if you're curious.

You simulate a transition "increment register R and go from state A to state B" by the reaction A --> B+R, where exactly one molecule represents the state, and n copies of R represent that register R has value n. The difficulty is simulating the decrement transition "decrement register R and go from state A to state B, UNLESS R is already 0, in which case go to state C instead". One is tempted to do this via the pair of reactions A+R --> B (in case R has positive count) and A --> C (in case R has count 0), but the fundamental difficulty is preventing the second reaction from occurring when R has positive count. The hard problem for reaction networks is detecting that some molecule is absent and enabling certain reactions only in that case, an ability not built into the model.

A way around this, which works only for register machines with counts bounded by 2^k (using O(k) total reactions), is the following. First, represent register R by a pair of species R and R'. When you start, before incrementing R, raise R' to 2^k via the reactions R1 --> R2+R2, R2 --> R3+R3, ..., R{k-1} --> R'+R'. Increments and decrements now convert between R and R' rather than simply creating and destroying R, preserving the invariant R+R' = 2^k at all times; e.g., to increment as above, use the reaction A+R' --> B+R.

Then, to test whether R is 0, instead test whether R'=2^k via reactions

R'+R' <--> T{k-1}

T{k-1}+T{k-1} <--> T{k-2}

...

T2+T2 <--> T1

T1+A --> T1+C

which can produce T1 (which converts A into C) if and only if R' is at least 2^k (which means it is in fact equal to 2^k, since it can't be larger). Notice the reactions are reversible in case R' has a smaller count, so that we don't get stuck in the test with R' unavailable for increment operations; if you wait long enough, eventually you'll get all the R' back through the reverse reactions.

Pardon my ignorance, but doesn't the chemical reaction reachability problem look very much like "Is string X in language L?" found in automata and regex?

Superficially, these look like the reduction rules for a context-free grammar, and the register machine simulation like a linear bounded Turing machine.

Both problems involve the notion of reachability or attainability. In chemical reactions, it's about reaching a certain state or configuration of molecules, while in automata, it's about reaching an accepting state when processing a string.

Both problems can be formally represented and analyzed within the frameworks of their respective models (chemical reaction networks and automata).

But there are differences. Chemical reaction reachability involves the dynamics of chemical reactions and the evolution of molecular configurations over time.

Language membership (automata/regex) involves recognizing whether a given string belongs to a formal language defined by a set of rules (an automaton or regular expression).

correct me please.

Another difference: for chemical reaction reachability, decidability is a deep result (Leroux's proof is mentioned above), while language membership is easily decidable for regular languages, with decidability in general depending on the class of languages (regular, context-free, etc.).

https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington_theorem

Paris-Harington --> Paris-Harrington

(This is Bill) Fixed, thanks. I used to call them `The Large Ramsey Numbers' but a professor who had Harrington as his advisor told me NO, call them the Paris-Harrington Numbers. So now I do and usually spell it correctly.

The union-find algorithm has an application to decoding the surface code, an important quantum error correcting code. https://arxiv.org/abs/1709.06218

Ian, while loops are exceedingly useful for expressing workloads where doing a workstep can create more work. Define a to-do queue; while it's not empty, process an element, and if that element has follow-up work, add it to the queue.

The "classics" that use this are breadth-first search and Dijkstra. You almost never have a problem that needs it, but every once in a while you do.
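The to-do-queue pattern, instantiated as breadth-first search (the toy graph here is purely for illustration):

```python
from collections import deque

def bfs(graph, start):
    """Worklist pattern: process an item, and processing may discover
    more work. Returns shortest hop-distances from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:                      # the while loop doing the real work
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:         # follow-up work discovered
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(g, 'a'))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

Note that the loop has no obvious iteration bound visible at the top; you have to argue separately that each node is enqueued at most once.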

A limited form is rules engines. I like Slither Link and Nurikabe and other logic puzzles like Sudoku but they often involve a large amount of “grunt work” at the large sizes where you get interesting logical structures that you actually have to think deeply about. I don't want to check the entire board for patterns because that makes the checks hard to write, rather I want my input to cause the computer to check for some simple local patterns that I define, like in Sudoku “does this square have only one place for a 3, then fill in a 3 there.” But then those need to “cascade” if and only if the computer fills in a value, to the locale of the new cell the computer used. But this should not interrupt checking the other contexts of this particular move that I made. So the best way I have found to express this sort of problem is to be able to define a sum-type of contexts (Sudoku: row 3 vs column 3 vs square 3 vs maybe the value 3, since Sudoku is abstractly a 9x9x9 cube of bits), rules that look for a certain context and then inspect it to see if they can simplify it, and when you make a move you push the contexts onto a queue. While the queue is nonempty, apply rules to that context, if the rule substitutes into a cell, then push that cell’s contexts onto the queue.

One other experience that I have had: sometimes the level of nesting of for-loops is itself a parameter, so if I want to generate all permutations of n values for example, the ideal imperative structure does not exist without hardcoding in n: I can give you all 4-permutations with 3 nested for-loops! So then you have to do like a “recursive plus for loop” mix that I don't like the look of... Or you can just create another queue! So for permutations this would be a queue of (Prefix, rest) tuples, while queue has items, pull out an item, if there's only one last item emit prefix ++ rest, else for each of the rest, jam it onto a copy of the prefix and insert those “todos” into the queue.

Point is from the moment you notice “argh I need to nest the for loops dynamically” you can almost always reach for this tool instead, essentially maintaining your own lightweight call stack.
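The (prefix, rest) queue described above can be sketched as follows (function and variable names are my own, purely illustrative):

```python
from collections import deque

def permutations(items):
    """All permutations via an explicit work queue of (prefix, rest) pairs,
    replacing dynamically nested for loops with a lightweight call stack.
    Assumes a nonempty input."""
    out = []
    queue = deque([((), tuple(items))])
    while queue:
        prefix, rest = queue.popleft()
        if len(rest) == 1:
            out.append(prefix + rest)       # only one item left: emit
        else:
            for i in range(len(rest)):      # jam each choice onto the prefix
                queue.append((prefix + (rest[i],), rest[:i] + rest[i + 1:]))
    return out

print(sorted(permutations([1, 2, 3])))
# [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
```

Using a deque gives breadth-first order; swapping `popleft` for `pop` would give depth-first order with the same results.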

For (2): the lower bounds mentioned by @Dave Doty are established through rather simple counter programs. They have a HALT instruction, whether they halt is decidable, but the complexity is Ackermann.

For (6): IIRC, they're tied (all independent of Peano arithmetic, but not more)

For (7): Union-find is also used in some unification algorithms, which are themselves used in first-order resolution and in type inference. I once gave a student assignment implementing type inference for an ML-like language with polymorphic types; union-find is rather easy to code and really efficient.

The Goodstein function and friends are easily proved total in the theory PA+CON(PA), where PA is Peano Arithmetic, so not that far up in theory space. In fact the Kirby-Paris theorem (that Goodstein's theorem is not provable in PA) follows directly from Goedel incompleteness and the observation that Goodstein's theorem implies CON(PA).

For programming languages, I am pretty sure the answer is yes. I don't understand the proof, but I believe there is a theorem that the functions expressible in Girard's System F (a useful version of polymorphic lambda calculus, something like a generalized ML without unbounded recursion) are exactly those provably total in second-order arithmetic. So that is much worse than PA (in fact worse than the finite versions of Kruskal's tree theorem or the Robertson-Seymour graph minor theorem) and presumably means much faster growth rates.

There is an unbelievably nerdy passage in the sometimes reviled fanfic "Harry Potter and the Methods of Rationality" (hpmor.com) that goes:

"Meself," Hagrid continued, "I think we might 'ave a Parisian hydra on our 'ands. They're no threat to a wizard, yeh've just got to keep holdin' 'em off long enough, and there's no way yeh can lose. I mean literally no way yeh can lose so long's yeh keep fightin'. Trouble is, against a Parisian hydra, most creatures give up long before. Takes a while to cut down all the heads, yeh see."

"Bah," said the foreign boy. "In Durmstrang we learn to fight Buchholz hydra. Unimaginably more tedious to fight! I mean literally, cannot imagine. First-years not believe us when we tell them winning is possible! Instructor must give second order, iterate until they comprehend."

The Buchholz hydra is described in the Googology wiki which you might like to look at:

https://googology.wikia.org/wiki/Buchholz_hydra

Finally the Busy Beaver function has been getting attention lately ;-). It is total, but grows faster than any recursive function.

Try writing a non-recursive version of quicksort with for loops (and don't cheat when using the for loops).

With fixed-depth for loops you cannot do many simple algorithms.

> try writing non-recursive version of quicksort with for loops

If there are n elements in the list, then replace recursion with a while loop and a stack, and any while loop like

while (condition) {

...

}

can be replaced with

for (int i=0; i<n; i++) {

if (!condition) break;

...

}

It's not that the number of iterations of each for loop is a constant, it's that an upper bound (in this case n) on the number of iterations can be computed prior to entering the loop. But that number can certainly depend on the program input.
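Putting the two transformations together for the quicksort challenge: an explicit stack replaces the recursion, and the outer loop is a for loop whose trip count is bounded before entry. I use 2n+1 here, a safe precomputable bound on the number of subarrays ever popped; this is a sketch of the transformation, not a tuned sort.

```python
def quicksort(a):
    """Quicksort with no recursion and no while loop: an explicit stack of
    (lo, hi) subarray bounds, driven by a for loop whose iteration bound
    (2n+1 pops) is computed before the loop is entered."""
    n = len(a)
    stack = [(0, n - 1)]
    for _ in range(2 * n + 1):     # precomputed upper bound on loop trips
        if not stack:              # the "if (!condition) break" pattern
            break
        lo, hi = stack.pop()
        if lo >= hi:
            continue               # subarray of size <= 1: already sorted
        pivot, i = a[hi], lo       # Lomuto partition around the last element
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # pivot lands at its final position i
        stack.append((lo, i - 1))  # "recurse" on both sides via the stack
        stack.append((i + 1, hi))
    return a

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The bound holds because each iteration that actually partitions places one pivot permanently, so at most n partitions occur, and each pushes two subarrays (plus the initial one).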

What's interesting is that the Ackermann function, defined by the very simple recursion A(m+1, n+1) := A(m, A(m+1, n)) (together with the base cases A(0, n) := n+1 and A(m+1, 0) := A(m, 1)), can be implemented with while loops but provably cannot be computed with for loops. The Ackermann function itself is essentially the upper bound on the number of iterations the for loops would need, but you can't calculate that bound without while loops.

The m'th row of the Ackermann function (i.e., the one-argument function you get by fixing the first argument m) IS primitive recursive, requiring m nested for loops: the first row is something like successor S(n)=n+1, the next row is something like addition, which is repeated successor, multiplication is repeated addition, exponentiation is repeated multiplication, etc. Each row you can use an additional nested for loop to iterate the computation done by the previous row. In fact one definition of primitive recursive is "computable in time upper-bounded by some row of the Ackermann function".
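That row-by-row construction can be sketched in Python, using the standard two-argument convention A(0,n) = n+1, A(m+1,0) = A(m,1), A(m+1,n+1) = A(m, A(m+1,n)). The Python recursion below only builds the row functions; any fixed row could be unrolled into m nested for loops.

```python
def ackermann_row(m):
    """Return the one-argument function n -> A(m, n): row 0 is successor,
    and each later row iterates the previous row (one more nested for
    loop per row, so every fixed row is primitive recursive)."""
    if m == 0:
        return lambda n: n + 1
    prev = ackermann_row(m - 1)
    def row(n):
        result = prev(1)           # A(m, 0) = A(m-1, 1)
        for _ in range(n):         # A(m, i+1) = A(m-1, A(m, i))
            result = prev(result)
        return result
    return row

print(ackermann_row(1)(5))  # 7   (row 1 is n + 2)
print(ackermann_row(2)(4))  # 11  (row 2 is 2n + 3)
print(ackermann_row(3)(2))  # 29  (row 3 is 2^(n+3) - 3)
```

Only the diagonal A(n, n), which grows through every row at once, escapes the primitive recursive hierarchy.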

No, the issue is not the loop size but the nesting depth of for loops you need.

If you cheat in a way that lets any while loop be turned into a for loop, the original question becomes meaningless.

Do people want to solve non-NP problems? Yes. Heuristic and fixed-parameter algorithms are examples. Practically limited versions of software verification, SMT, ... are problems people want to solve.
