We investigate the teaching of infinite concept classes through the effect of the learning prior (which is used by the learner to derive posteriors that prefer some concepts over others, and by the teacher to devise the teaching examples) and the sampling prior (which determines how the concepts are sampled from the class). We analyse two important classes: Turing machines and finite-state machines. We derive bounds for the teaching dimension when the learning prior is derived from a complexity measure (Kolmogorov complexity and minimal number of states, respectively) and analyse the sampling distributions that lead to finite expected teaching dimensions. The learning prior goes beyond a complexity or preference choice when we use it to increase the confidence of identification, expressed as a posterior, which increases as more examples are given. We highlight the trade-off between three elements: the bound on teaching dimension, the representativeness of the sample and the certainty of identification. This has implications for the understanding of what teaching from rich concept classes to machines (and humans) entails.
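The interplay the abstract describes between a complexity-based learning prior, teaching examples and posterior confidence can be illustrated with a toy sketch. This is not the paper's construction: the threshold concept class, the `complexity` function and the example set below are hypothetical stand-ins for the Turing-machine and finite-state-machine classes, with description length playing the role of Kolmogorov complexity or state count.

```python
# Illustrative sketch (not the paper's method): a Bayesian learner over a
# tiny hypothesis class, with a prior that penalises complexity.

# Hypothetical concept class: threshold functions h_t(x) = (x >= t) on {0..7}.
concepts = {t: (lambda x, t=t: x >= t) for t in range(8)}

def complexity(t):
    # Crude description length of t, standing in for Kolmogorov complexity
    # (TMs) or minimal number of states (FSMs).
    return max(1, t.bit_length())

# Learning prior: P(h_t) proportional to 2^(-complexity(t)).
prior = {t: 2.0 ** -complexity(t) for t in concepts}
total = sum(prior.values())
prior = {t: p / total for t, p in prior.items()}

def posterior(prior, examples):
    """Condition on labelled examples; concepts inconsistent with any
    example get posterior mass zero, then renormalise."""
    post = {}
    for t, p in prior.items():
        consistent = all(concepts[t](x) == y for x, y in examples)
        post[t] = p if consistent else 0.0
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

# A teacher aiming at target t = 3 picks examples that rule out all rivals;
# each example raises the posterior on the target (the "certainty of
# identification" the abstract refers to).
examples = [(3, True), (2, False)]
post = posterior(prior, examples)
```

With only the first example `(3, True)`, every threshold t <= 3 remains consistent and the target shares posterior mass with simpler rivals; adding `(2, False)` eliminates them, so the posterior concentrates entirely on t = 3. The size of such a distinguishing example set is the quantity the teaching-dimension bounds concern.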