A few weeks ago Lance reported on the events from a complexity workshop he was attending at the Banff International Research Station, an institute similar to the International Space Station except that it's in Canada rather than outer space.
What Lance didn't mention was that at the exact same time, some important computer science conferences were taking place just down the street at the Banff Park Lodge. I was participating in the 17th annual Conference on Learning Theory (COLT) which was co-located with the International Conference on Machine Learning (ICML) and the Conference on Uncertainty in Artificial Intelligence (UAI).
Back when COLT stood for Computational Learning Theory (circa 1988-2002), the conference was known for focusing on the computational complexity of machine learning. Nowadays the conference still operates from a theoretical perspective, but as the Conference on Learning Theory its program covers everything from game theory and economics to kernel methods, in addition to traditional PAC-style results.
A PAC-style result that appeared in COLT 2004 which may be of interest to the readers of this web log is Polynomial-Time Prediction Strategy with Almost Optimal Mistake Probability by Nader Bshouty. His paper solves a problem in the online learning setting. In this setting, an unknown function f is chosen from some fixed concept class, and at each time t a learner is presented with an input x chosen according to some distribution D.
The goal of the learner is to run in time polynomial in log t (and all of the other relevant parameters) and predict the value of f(x) with mistake probability O(1/t) (Haussler, Littlestone, and Warmuth proved that this mistake probability is optimal). Bshouty shows that if the concept class is PAC learnable, then there exists an online learning algorithm which runs in time polynomial in log t and achieves mistake probability O(log t / t). His algorithm is an exponential improvement in running time, as previous solutions ran in time polynomial in t. The algorithm works by iteratively building a branching program out of hypotheses considered at previous time steps; a boosting-type procedure dictates which branch to take on any new input.
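To make the online prediction setting concrete, here is a minimal Python sketch under simplifying assumptions: a toy concept class of threshold functions on a finite domain, a uniform distribution D, and a simple halving-style learner that predicts by majority vote over the hypotheses still consistent with past examples. It only illustrates the protocol and the notion of accumulated mistakes; it is not Bshouty's branching-program construction, and its per-round running time grows with the size of the version space rather than with log t.

```python
import random

# Toy illustration of the online prediction setting described above.
# Concept class: threshold functions f_k(x) = 1 iff x >= k, over the
# domain {0, ..., N-1}. At each step the learner sees x drawn from D,
# predicts f(x), then learns the true label. This halving-style learner
# (majority vote over still-consistent thresholds) is NOT Bshouty's
# algorithm; it just makes the protocol concrete.

N = 100                                    # domain size (assumption)
TRUE_K = random.randrange(N + 1)           # the unknown target threshold

def f(x):
    """The hidden concept the learner is trying to predict."""
    return int(x >= TRUE_K)

consistent = set(range(N + 1))             # version space of thresholds
mistakes = 0
T = 10_000

for t in range(1, T + 1):
    x = random.randrange(N)                # x drawn from D (uniform here)
    # Predict by majority vote over hypotheses consistent with history.
    votes_for_one = sum(1 for k in consistent if x >= k)
    prediction = int(2 * votes_for_one >= len(consistent))
    label = f(x)
    if prediction != label:
        mistakes += 1
    # Discard hypotheses that disagree with the observed label.
    consistent = {k for k in consistent if int(x >= k) == label}

print(f"{mistakes} mistakes in {T} rounds "
      f"(empirical mistake rate {mistakes / T:.4f})")
```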
In the spirit of being controversial (Lance asked me to be controversial), I could discuss the pros and cons of the change from Computational Learning Theory to Conference on Learning Theory. As a concrete example of differences in the community, about half the participants at the COLT business meeting wanted to see COLT co-located with STOC in 2006 (University of Washington folks, make your move), while the others wanted to co-locate with ICML. I'll leave that, however, for my faithful readers to debate. I will point out that it's hard to argue with the increase in attendance at COLT over the last few years.