Latent variables (LVs) lie at the origin of many scientific, social, and medical phenomena. While models containing only observed variables (OVs) have been well studied, learning a latent variable model (LVM) that admits both types of variables is difficult. The assumption of no LVs is therefore usually made, but ignoring LVs yields a partial or even misleading model that misses the true underlying structure. In recent years, progress has been made in learning LVMs from data, but most algorithms rest on strong assumptions that limit their scope. Moreover, LVs often change over time, adding to the challenge and complexity of learning, yet current LVM learning algorithms do not account for this. We propose learning a causal model locally in each time slice and then performing local-to-global learning across time slices, using probabilistic scoring and temporal reasoning to merge the local graphs into a latent dynamic Bayesian network whose intra- and inter-slice edges capture causal interrelationships among LVs and between LVs and OVs. Evaluated on synthetically generated data and on data from ALS and Alzheimer's patients, our algorithm demonstrates high accuracy in structure learning, classification, and imputation, with lower complexity.
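The local-to-global step described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes each time slice yields a set of directed edges, scores intra-slice edges by their frequency across slices (a stand-in for the probabilistic scoring mentioned), and adds inter-slice edges that let each latent variable influence its next-slice copy. The variable names (`L1`, `X1`, `X2`) and the persistence rule are hypothetical simplifications.

```python
from collections import Counter

def merge_local_graphs(local_graphs, threshold=0.5):
    """Merge per-time-slice causal graphs into one set of intra-slice
    edges, keeping an edge if it appears in at least a `threshold`
    fraction of slices (a simple frequency-based score)."""
    counts = Counter(edge for g in local_graphs for edge in g)
    n = len(local_graphs)
    return {edge for edge, c in counts.items() if c / n >= threshold}

def add_temporal_edges(latents):
    """Add inter-slice (t -> t+1) edges: each latent variable is
    assumed to persist, influencing its own copy in the next slice
    (a deliberate simplification of temporal reasoning)."""
    return {((lv, "t"), (lv, "t+1")) for lv in latents}

# Three hypothetical local graphs learned in three time slots;
# 'L1' is latent, 'X1' and 'X2' are observed.
slices = [
    {("L1", "X1"), ("L1", "X2")},
    {("L1", "X1"), ("X1", "X2")},
    {("L1", "X1"), ("L1", "X2")},
]
intra = merge_local_graphs(slices, threshold=2 / 3)
inter = add_temporal_edges({"L1"})
print(sorted(intra))  # [('L1', 'X1'), ('L1', 'X2')]
print(sorted(inter))  # [(('L1', 't'), ('L1', 't+1'))]
```

Here the spurious edge `('X1', 'X2')`, seen in only one of three slices, falls below the 2/3 threshold and is dropped, while the consistently recovered latent-to-observed edges survive into the global structure.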