In this paper, we present a new active learning strategy designed to adapt to an unknown (or changing) learning scenario. We introduce an ensemble-of-learners approach and model the selection among learners as a multi-armed bandit problem. Applying simple exploration-exploitation trade-off algorithms from the UCB and EXP3 families yields an improvement over the classical strategies. An evaluation on data from the UCI repository compares three different selection algorithms. In our tests, the proposed method shows promising results.
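As a minimal sketch of the bandit-based selection idea mentioned above, the snippet below runs the standard UCB1 index rule over a toy set of "arms". The arms, their reward probabilities, and the reward definition are hypothetical illustrations, not the paper's actual strategies or data; in the paper's setting each arm would correspond to one active learning strategy and the reward to the observed benefit of its query.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Return the arm maximizing the UCB1 index:
    mean reward + sqrt(2 * ln(t) / n_i).
    Any arm not yet played is tried first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i]
        + math.sqrt(2.0 * math.log(t) / counts[i]),
    )

# Toy simulation: three hypothetical query strategies with
# different (unknown to the learner) success probabilities.
random.seed(0)
true_gain = [0.2, 0.5, 0.35]          # assumed Bernoulli reward rates
counts = [0, 0, 0]                    # times each arm was played
rewards = [0.0, 0.0, 0.0]             # cumulative reward per arm

for t in range(1, 501):
    arm = ucb1_select(counts, rewards, t)
    reward = 1.0 if random.random() < true_gain[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward

best = max(range(3), key=lambda i: counts[i])
```

After 500 rounds the index rule concentrates its plays on the arm with the highest expected reward, which is the behavior the ensemble approach relies on when the best strategy for the current scenario is not known in advance.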