Adversarial benchmark construction, where harder instances challenge new generations of AI systems, is becoming the norm. While this approach may lead to better machine learning models (on average, and for the new benchmark), it is unclear how these models behave on the original distribution. Two opposing effects are intertwined here. On the one hand, the adversarial benchmark has a higher proportion of difficult instances, with lower expected performance. On the other hand, models trained on the adversarial benchmark may improve on these difficult instances (but may also neglect some easy ones). To disentangle these two effects, we control for difficulty and show that performance on the original distribution can be recovered, provided the harder instances were drawn from that distribution in the first place. We show that this difficulty-aware rectification works in practice, through a series of experiments with several benchmark construction schemes and the use of a population-based difficulty metric. As a take-away message, we recommend using difficulty-conditioned characteristic curves, rather than distributional averages, when evaluating models built with adversarial benchmarks.
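To make the rectification idea concrete, the following is a minimal sketch (not the paper's implementation) of difficulty conditioning via post-stratification: scores measured on the adversarial benchmark are grouped into difficulty bins, and each bin's mean score is reweighted by that bin's prevalence in the original distribution. The function name, the equal-width binning, and the 0/1 correctness scoring are illustrative assumptions; the paper's own difficulty metric and estimator may differ.

```python
import numpy as np

def rectified_performance(scores_adv, difficulty_adv, difficulty_orig, bins=10):
    """Estimate expected performance on the original distribution from
    results measured on an adversarial benchmark, by conditioning on a
    per-instance difficulty score and reweighting each difficulty bin
    by its prevalence in the original distribution (post-stratification).

    scores_adv      : per-instance correctness (0/1) on the adversarial set
    difficulty_adv  : difficulty score of each adversarial instance
    difficulty_orig : difficulty scores of a sample from the original distribution
    """
    scores_adv = np.asarray(scores_adv, dtype=float)
    difficulty_adv = np.asarray(difficulty_adv, dtype=float)
    difficulty_orig = np.asarray(difficulty_orig, dtype=float)

    # Shared bin edges so both samples are stratified identically.
    edges = np.histogram_bin_edges(
        np.concatenate([difficulty_adv, difficulty_orig]), bins=bins)
    adv_bin = np.digitize(difficulty_adv, edges[1:-1])
    orig_bin = np.digitize(difficulty_orig, edges[1:-1])

    estimate = 0.0
    for b in range(bins):
        weight = np.mean(orig_bin == b)   # P_orig(difficulty bin b)
        mask = adv_bin == b
        # Assumes every difficulty bin with original-distribution mass
        # is represented in the adversarial set; empty bins are skipped.
        if weight > 0 and mask.any():
            estimate += weight * scores_adv[mask].mean()  # E[score | bin b]
    return estimate

# Toy demonstration with synthetic data (all values hypothetical).
rng = np.random.default_rng(0)
d_orig = rng.beta(2, 5, size=5000)               # original pool skews easy
d_adv = rng.beta(5, 2, size=2000)                # adversarial pool skews hard
acc = (rng.random(2000) > d_adv).astype(float)   # harder -> less often correct
print(rectified_performance(acc, d_adv, d_orig))
```

Note that the skipped-empty-bin step mirrors the proviso in the abstract: the reweighting can only recover the original-distribution performance when the harder instances were drawn from that distribution, so that every difficulty stratum with positive original mass is observable in the adversarial benchmark.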