

Autism is a neurodevelopmental disorder characterized by deficits in social interaction and communication skills. A generalized facial emotion recognition model does not scale well to the emotions of autistic children because of the domain shift between the distributions of the source (neurotypical) and target (autistic) populations. The dearth of labeled datasets in the field of autism exacerbates the problem. Domain adaptation based on a generative adversarial network (GAN) counters this disparity by adversarially aligning the features of the source and target domains. This paper builds a facial emotion classifier that can identify the idiosyncrasies of an autistic child's facial expressions by generating domain-invariant feature representations of the source and target distributions. The objective of the paper is twofold: (a) to build a discriminative classifier that accurately identifies the emotions of autistic children, and (b) to train a feature generator that produces an invariant feature representation of the source and target domains, accounting for their similar yet distinct data distributions, in the presence of unlabeled target data. Automatic recognition and classification of the facial expressions of the autistic population has not been pursued as extensively as for the neurotypical population, owing to the complexities of eliciting and interpreting data obtained from autistic children.
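
As a rough illustration of the adversarial alignment idea summarized above, the sketch below assumes a PyTorch-style setup with a feature generator G, an emotion classifier C trained on labeled source batches, and a domain discriminator D that G learns to fool so that source and target features become indistinguishable. All module names, layer sizes, the 48x48 grayscale input, and the seven-class emotion set are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of GAN-style adversarial domain adaptation (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_EMOTIONS = 256, 7

# Feature generator G, emotion classifier C, and domain discriminator D (all illustrative).
G = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 512), nn.ReLU(),
                  nn.Linear(512, FEAT_DIM), nn.ReLU())
C = nn.Linear(FEAT_DIM, NUM_EMOTIONS)
D = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_gc = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt):
    """One adversarial update. x_src/y_src: labeled source (neurotypical) images
    and emotion labels; x_tgt: unlabeled target (autistic) images."""
    # 1) Discriminator step: separate source features (label 1) from target features (label 0).
    with torch.no_grad():
        f_src, f_tgt = G(x_src), G(x_tgt)
    d_loss = bce(D(f_src), torch.ones(len(x_src), 1)) + \
             bce(D(f_tgt), torch.zeros(len(x_tgt), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator + classifier step: classify source emotions correctly while
    #    producing target features the discriminator mistakes for source features.
    f_src, f_tgt = G(x_src), G(x_tgt)
    cls_loss = F.cross_entropy(C(f_src), y_src)
    adv_loss = bce(D(f_tgt), torch.ones(len(x_tgt), 1))  # fool the discriminator
    opt_gc.zero_grad()
    (cls_loss + adv_loss).backward()
    opt_gc.step()
    return cls_loss.item(), d_loss.item()

# Illustrative usage with random tensors standing in for image batches.
x_src = torch.randn(32, 1, 48, 48)
y_src = torch.randint(0, NUM_EMOTIONS, (32,))
x_tgt = torch.randn(32, 1, 48, 48)
print(train_step(x_src, y_src, x_tgt))
```

Step 2 of this sketch mirrors the paper's twofold objective: the classification loss drives the discriminative emotion classifier on labeled source data, while the adversarial loss pushes the generator toward an invariant representation even though the target data carry no labels.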