ORIGINAL RESEARCH article

Front. Big Data

Sec. Data Mining and Management

Fairness Across Domains: A Unified Fairness-aware Framework for Domain Generalization and Unsupervised Adaptation

  • 1. The University of Texas at Dallas, Richardson, United States

  • 2. Baylor University, Waco, United States

  • 3. University of Arkansas, Fayetteville, United States

  • 4. University of Florida, Gainesville, United States

The final, formatted version of the article will be published soon.

Abstract

Fairness in machine learning remains a critical challenge, particularly in the presence of domain shift. We propose a unified fairness-aware framework for both domain generalization (DG) and unsupervised domain adaptation (UDA) that jointly addresses domain shift and sensitive-attribute bias through disentangled representation learning. The framework disentangles content, style, and sensitive factors and uses them to generate augmented samples that reduce bias while preserving predictive reliability. Extensive experiments on four datasets demonstrate that the proposed method achieves state-of-the-art performance in both DG and UDA settings and strikes a better balance between classification accuracy and fairness across diverse domains and sensitive subgroups. By incorporating unlabeled target-domain data, the framework extends prior fairness-aware approaches that were limited to DG and offers new insight into fairness-aware learning under unsupervised adaptation. Overall, this work is a practical step toward scalable and robust fairness-aware learning in multi-domain environments.
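The abstract's core mechanism can be illustrated with a minimal sketch: a sample's representation is split into content, style, and sensitive factors, and an augmented sample is formed by recombining one sample's content and style with another sample's sensitive factor. This is an illustrative toy only, not the authors' implementation; the index-partition "encoder", the dimensions, and the function names are assumptions made for demonstration.

```python
import numpy as np

def disentangle(x, d_content, d_style, d_sensitive):
    """Toy 'disentanglement': partition a feature vector into
    content, style, and sensitive factors by index ranges.
    (A real model would use learned encoders instead.)"""
    c = x[:d_content]
    s = x[d_content:d_content + d_style]
    a = x[d_content + d_style:d_content + d_style + d_sensitive]
    return c, s, a

def augment(x_i, x_j, dims):
    """Form an augmented sample: content and style from x_i,
    sensitive factor swapped in from x_j, so the label-relevant
    content is preserved while the sensitive factor varies."""
    c_i, s_i, _ = disentangle(x_i, *dims)
    _, _, a_j = disentangle(x_j, *dims)
    return np.concatenate([c_i, s_i, a_j])

rng = np.random.default_rng(0)
dims = (4, 2, 2)              # content, style, sensitive dimensions (assumed)
x1 = rng.normal(size=sum(dims))
x2 = rng.normal(size=sum(dims))
x_aug = augment(x1, x2, dims)  # content/style of x1, sensitive factor of x2
```

Training a classifier on such recombined samples encourages predictions that are invariant to the sensitive factor, which is the intuition behind the bias-reducing augmentation described above.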

Keywords

domain adaptation (DA), domain generalization (DG), fairness-aware machine learning, machine learning, unsupervised domain adaptation (UDA)

Received

10 December 2025

Accepted

02 April 2026

Copyright

© 2026 Jiang, Zhao, Wang, Wu, Khan, Grant and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kai Jiang; Chen Zhao; Haoliang Wang; Xintao Wu; Latifur Khan; Christan Grant; Feng Chen

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
