With rising expectations for the flexibility and adaptability of machine learning systems, automatic model updates and performance stability in the face of various types of concept drift have received significant interest. In this study, we explore how feature selection techniques, largely neglected in this effort, can improve the drift compensation process without a priori assumptions about the type of drift. To this end, we (A) evaluate several feature selection techniques by their potential to minimize the effect of drift while still capturing its essence (predicting its near-term course), (B) analyze the factors contributing to the success of our proposed method, and (C) provide empirical drift adaptation results via active learning on an extensive data set of real-life political indicators. The results demonstrate that using L1 regularization in the context of our new sample-driven drift-modeling approach improves performance compared to alternative feature selection techniques. The reduced model also requires fewer additional samples to recover from drift, even with existing active-sampling strategies.
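To illustrate the kind of L1-regularized feature selection the abstract refers to, the following is a minimal sketch (not the paper's actual pipeline): a plain coordinate-descent Lasso that drives the weights of uninformative features to exactly zero, so a reduced model can be built from the surviving features. The synthetic data, the `alpha` value, and the selection threshold are all illustrative assumptions.

```python
import numpy as np

def lasso_coordinate_descent(X, y, alpha, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic
    coordinate descent with soft-thresholding (standard Lasso)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            # partial residual excluding feature j's current contribution
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            # soft-threshold: small correlations are zeroed out entirely
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w

# Illustrative setup: only the first 3 of 10 features carry signal,
# mimicking a case where drift-irrelevant features should be dropped.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.normal(size=200)

w = lasso_coordinate_descent(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-3)
print(selected)  # indices of features the L1 penalty retained
```

The key property exploited here is that the L1 penalty produces exact zeros, unlike L2, so the retained feature set is an explicit, compact model that can be retrained on fewer post-drift samples.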