

Domain generalization (DG), which aims to learn a model that generalizes to an unseen target domain, has recently attracted increasing research interest. A major approach is to learn domain-invariant representations, so as to avoid greedily capturing all the correlations found in the source domains under empirical risk minimization. Nevertheless, overly emphasizing domain invariance may produce overly compressed representations that confuse different classes within the same domain. To address this limitation, we introduce a novel dynamic domain-weighted contrastive loss, which maximizes the subdomain differences between different classes, especially those belonging to the same domain, while minimizing the average distance between the points of the convex hull of the aligned source domains. Building on this loss, we propose Multi-source domain-adversarial generalization via dynamic domain-weighted Contrastive transfer learning (MsCtrl), a novel domain-adversarial generalization framework that optimizes the distribution alignment of source and potential target subdomains in an adversarial manner under the "control" of the aforementioned contrastive loss. Extensive experiments on real-world datasets demonstrate significant advantages of MsCtrl over existing state-of-the-art methods.
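To make the idea of a domain-weighted contrastive loss concrete, the sketch below shows one plausible instantiation in NumPy: a supervised contrastive loss in which negative pairs drawn from the same domain (but different classes) receive a larger weight, pushing same-domain classes apart more strongly. The function name, the `same_domain_weight` parameter, and the weighting scheme are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def domain_weighted_contrastive_loss(z, y, d, temp=0.5, same_domain_weight=2.0):
    """Illustrative domain-weighted supervised contrastive loss (a sketch,
    not the paper's exact objective).

    z: (n, dim) embeddings; y: (n,) class labels; d: (n,) domain labels.
    Negatives sharing the anchor's domain are up-weighted by
    `same_domain_weight`, so classes within one domain are separated harder.
    """
    # Work on the unit hypersphere, as is standard for contrastive losses.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = np.exp(z @ z.T / temp)  # exponentiated cosine similarities

    n = len(y)
    loss, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and y[j] == y[i]]
        if not positives:
            continue
        # Weight each candidate pair: same-domain, different-class
        # negatives count more in the denominator.
        w = np.array([
            same_domain_weight if (y[j] != y[i] and d[j] == d[i]) else 1.0
            for j in range(n)
        ])
        w[i] = 0.0  # exclude self-similarity
        denom = float((w * sim[i]).sum())
        loss += -np.mean([np.log(sim[i, j] / denom) for j in positives])
        anchors += 1
    return loss / max(anchors, 1)
```

With this weighting, raising `same_domain_weight` inflates the denominator for anchors that have same-domain negatives, which increases the loss and thus the pressure to separate classes inside each domain.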