We propose an unsupervised learning algorithm for estimating the number of components and the parameters of a mixture model. It starts from a single mixture component covering the whole data set, thereby avoiding the ill-posed problem of component initialization and also reducing the computational burden. It then incrementally splits that component during the expectation-maximization steps, exploring the space of solutions along a binary tree structure. After each component insertion, it evaluates whether to accept or discard the new solution according to the chosen information criterion. We show that the method is faster than state-of-the-art alternatives, is insensitive to initialization (the initialization strategy is deterministic), and produces better data fits on average. This is illustrated through a series of experiments with both synthetic and real images.
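The split-based procedure described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' method: it uses 1-D Gaussian components, BIC as the (unspecified in the abstract) information criterion, a simple ±σ heuristic for the deterministic split, and a greedy search over components rather than the full binary-tree traversal. All function names and parameters are assumptions for illustration.

```python
import numpy as np

def norm_pdf(x, m, s):
    """Density of N(m, s^2) evaluated at the points x."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_gmm(x, means, stds, weights, iters=50):
    """Standard EM updates for a 1-D Gaussian mixture."""
    for _ in range(iters):
        # E-step: per-component responsibilities, shape (K, N)
        dens = np.array([w * norm_pdf(x, m, s)
                         for w, m, s in zip(weights, means, stds)])
        resp = dens / np.maximum(dens.sum(axis=0), 1e-300)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=1)
        weights = nk / len(x)
        means = (resp @ x) / nk
        stds = np.sqrt((resp * (x - means[:, None]) ** 2).sum(axis=1) / nk)
        stds = np.maximum(stds, 1e-3)  # guard against degenerate components
    return means, stds, weights

def bic(x, means, stds, weights):
    """Bayesian information criterion (lower is better)."""
    dens = sum(w * norm_pdf(x, m, s)
               for w, m, s in zip(weights, means, stds))
    loglik = np.log(np.maximum(dens, 1e-300)).sum()
    k = 3 * len(means) - 1  # K means + K stds + (K-1) free weights
    return k * np.log(len(x)) - 2 * loglik

def split_em(x, max_k=10):
    """Grow a mixture by splitting components, accepting splits that improve BIC."""
    # Deterministic start: a single component covering the whole data set.
    means, stds, weights = np.array([x.mean()]), np.array([x.std()]), np.array([1.0])
    means, stds, weights = em_gmm(x, means, stds, weights)
    best = bic(x, means, stds, weights)
    improved = True
    while improved and len(means) < max_k:
        improved = False
        for j in range(len(means)):
            # Split component j into two children offset by one std (deterministic).
            m2 = np.concatenate([np.delete(means, j),
                                 [means[j] - stds[j], means[j] + stds[j]]])
            s2 = np.concatenate([np.delete(stds, j), [stds[j], stds[j]]])
            w2 = np.concatenate([np.delete(weights, j), [weights[j] / 2] * 2])
            m2, s2, w2 = em_gmm(x, m2, s2, w2)
            cand = bic(x, m2, s2, w2)
            if cand < best:  # accept the insertion only if the criterion improves
                means, stds, weights, best = m2, s2, w2, cand
                improved = True
                break
    return means, stds, weights
```

On well-separated data the search stops as soon as no split improves the criterion, so the number of components is estimated jointly with the parameters, exactly the trade-off the abstract describes.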