Valiant developed the concept of PAC (Probably Approximately Correct) learning: roughly, a learner sees a small number of labelled examples drawn from a distribution and, with high confidence, produces a hypothesis that with high probability will correctly label new instances drawn from the same distribution.
A strong learner can achieve accuracy arbitrarily close to 100%; a weak learner achieves accuracy only slightly better than the 50% of random guessing. Schapire, using a technique called boosting, showed how to convert any weak learner into a strong one. This was a wonderful theoretical result, but the algorithm had problems that made it difficult to use in practice.
In their Gödel Prize-winning paper, "A decision-theoretic generalization of on-line learning and an application to boosting," Freund and Schapire developed the AdaBoost algorithm, which solves many of these issues and has become a staple of both the theoretical and the practical machine learning communities.
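For concreteness, here is a minimal sketch of the AdaBoost loop (my notation, not the paper's), assuming binary labels in {-1, +1} and a depth-1 decision tree ("stump") from scikit-learn as the weak learner; the function names adaboost and predict are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """Minimal AdaBoost sketch; assumes y is in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # distribution over training examples
    hyps, alphas = [], []
    for _ in range(T):
        # Weak learner: a decision stump fit to the current weighting.
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.clip(np.dot(w, pred != y), 1e-10, None)  # weighted error
        if err >= 0.5:                         # weak learner must beat random guessing
            break
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this hypothesis in the vote
        w *= np.exp(-alpha * y * pred)         # upweight the misclassified examples
        w /= w.sum()                           # renormalize to a distribution
        hyps.append(h)
        alphas.append(alpha)
    return hyps, alphas

def predict(hyps, alphas, X):
    """The strong hypothesis is the sign of the weighted majority vote."""
    return np.sign(sum(a * h.predict(X) for a, h in zip(hyps, alphas)))
```

The key idea is visible in the weight update: each round, examples the current weak hypothesis gets wrong become more important, so the next weak hypothesis is forced to attend to them.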
Boosting has its own website, where you can find much more information about the algorithms and their applications.