Carl Smith claimed that COLT was made possible by THREE strands of learning theory coming together to form a workshop and later a conference: (1) Inductive Inference (Computability-Theoretic Learning), (2) PAC Learning, (3) Query Learning. This seems right to me, but even early on Inductive Inference had far fewer papers than the other areas. The conference has evolved since then, and there are types of learning that were not known back then. Inductive Inference seems mostly gone. There is one paper on it (the Kotzing-Case paper Strongly Non-U-Shaped Learning Results by General Techniques). Also, it looks like there is nobody on the Program Committee in that area.
Part of the reason Ind. Inf. is gone is that Algorithmic Learning Theory (ALT) has taken up some of the slack: there were 4 papers in Ind. Inf. and 2 people in the area on the Program Committee. ALT is actually a pretty big learning theory conference, since it's actually ALT/DS, two learning theory conferences held at the same time and place. See here for info on DS. But looking at ALT as the reason may confuse cause and effect. Was ALT founded partially to take those papers that COLT no longer took?
It has been noted before that CCC does not take papers on computability-theoretic Complexity. Similarly, COLT does not take papers on computability-theoretic Learning. Such papers in Learning Theory still have an outlet: ALT. Do such papers in Complexity Theory still have an outlet? Perhaps Computability in Europe (CiE).
Why do certain subfields of a field die out while others live on? I abbreviate Inductive Inference by Ind. Inf. and Computability-Theoretic Complexity Theory by CTCT. I define CTCT rather narrowly: arguments similar to those in Computability Theory, such as reductions and constructions of weird sets.
- If a subfield does not really connect up to the main field then it may die. True for Ind. Inf., even at the beginning of COLT. For CTCT there are some nice connections to Complexity Theory, such as the very definitions of reductions and also Ladner's theorem. The results on Sparse Sets might be considered CTCT, but that's pushing it, and that was a long time ago.
- If the field moves in a more practical direction, the more theoretical subfields may be left behind. True for Ind. Inf. For CTCT this is somewhat true, though here practical means lower bounds on approximation algorithms, an area on which CTCT has had no impact.
- If a subfield runs out of questions of interest then it may die. True for Ind. Inf. Less true for CTCT. People in both fields might say that there are still questions of interest; however, if they are only of interest to people in that subfield, that might not count. Though I agree that of interest is a slippery notion.
- The field finds other things more worth studying. True for Ind. Inf., as Learning Theorists got into other things. Somewhat true for CTCT, as concrete models, PCP, and quantum crowd the field out.