Visual Analytics and Imaging Laboratory (VAI Lab)
Computer Science Department, Stony Brook University, NY

FairPlay: A Collaborative Approach to Mitigate Bias in Datasets for Improved AI Fairness

Abstract: Fairness in decision-making is a critical concern, especially because stakeholders often demand differing and mutually incompatible notions of fairness. Rather than enforcing a single standard of fairness, we adopt a strategic interaction of perspectives. We present FairPlay, a web-based software application that enables multiple stakeholders to collaboratively debias datasets. With FairPlay, users can negotiate toward a mutually acceptable outcome without a universally agreed-upon theory of fairness. Without such a tool, reaching consensus is highly challenging, as there is no systematic negotiation process and no way to modify the data and observe the effects of those changes. Our user studies demonstrate the success of FairPlay: participants reached consensus within about five rounds of gameplay, illustrating the application's potential for enhancing fairness in AI systems.
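To give a rough sense of the round-based negotiation described above, the sketch below shows one plausible way such a loop could be structured: stakeholders take turns proposing edits to causal edge weights until every stakeholder's fairness loss falls within their own tolerance. This is an illustrative assumption only, not FairPlay's actual algorithm; the class, method, and variable names (Stakeholder, loss, propose_edit, tolerance) are hypothetical.

```python
# Hypothetical sketch of a round-based negotiation loop; not FairPlay's code.
from dataclasses import dataclass


@dataclass
class Stakeholder:
    name: str
    tolerance: float  # maximum fairness loss this stakeholder will accept

    def loss(self, edge_weights: dict) -> float:
        # Placeholder scoring: in practice this would measure outcome disparity
        # for the attributes this stakeholder prioritizes.
        return sum(abs(w) for w in edge_weights.values())

    def propose_edit(self, edge_weights: dict) -> dict:
        # Placeholder proposal: dampen the strongest causal edge by 20%.
        edge, w = max(edge_weights.items(), key=lambda kv: abs(kv[1]))
        return {**edge_weights, edge: w * 0.8}


def play(edge_weights: dict, stakeholders: list, max_rounds: int = 10) -> dict:
    """Run negotiation rounds until all stakeholders accept the dataset's state."""
    for round_no in range(1, max_rounds + 1):
        for s in stakeholders:
            if s.loss(edge_weights) > s.tolerance:
                edge_weights = s.propose_edit(edge_weights)
        if all(s.loss(edge_weights) <= s.tolerance for s in stakeholders):
            print(f"Consensus reached after {round_no} round(s)")
            break
    return edge_weights


# Toy usage (edge names and tolerances are made up for illustration):
weights = {("education", "hired"): 0.9, ("gender", "hired"): 0.6}
team = [Stakeholder("HR", tolerance=1.2), Stakeholder("Auditor", tolerance=1.0)]
play(weights, team)
```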

Teaser: FairPlay Game Interface:

Teaser image

The components are (a) causal network link editor, (b) edge history chart, (c) aggregate edge history chart, (d) stakeholder total loss and gain chart, (e) active stakeholder card stack, (f) aggregate attribute disparity chart, (g) attribute outcome chart, (h) stakeholder attribute priority chart.
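As an aside, a disparity view like (f) could, for example, plot the gap in positive-outcome rates across groups of a protected attribute (the demographic parity difference). The snippet below is only an illustration of that general idea under assumed column names ("gender", "hired"); FairPlay's exact metric may differ.

```python
# Illustrative disparity measure: demographic parity difference per attribute.
import pandas as pd


def parity_gap(df: pd.DataFrame, attribute: str, outcome: str = "hired") -> float:
    """Difference between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(attribute)[outcome].mean()
    return float(rates.max() - rates.min())


# Toy dataset with hypothetical column names:
applicants = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M"],
    "hired":  [1,   0,   1,   1,   0],
})
print(parity_gap(applicants, "gender"))  # 0.6667 - 0.5 = 0.1667
```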

Video: Watch it for a quick overview of the game experience and of how consensus is playfully established:

Paper: T. Behzad, M. Singh, A. Ripa, K. Mueller, “FairPlay: A Collaborative Approach to Mitigate Bias in Datasets for Improved AI Fairness,” Proceedings of the ACM on Human-Computer Interaction (CSCW), 9(2):1-30, 2025. PDF

Funding: Partial funding provided by the SUNY System Administration under SUNY Research Seed Grant Award 23-01-RSGNSF and by NSF grant NRT-HDR 2125295.