Visualizing ML Fairness

Authors

Sid Gupta*, Tina Li*, Melissa Hu*

Abstract

A remarkable byproduct of machine learning is how it has pushed the scientific community to define the idea of fairness in terms of probability and logic. Such definitions are motivated by empirical results, but in the abstract they can be used in every field of science, making fairness more accessible, justified, and unified. In this paper, we visually interpret two algorithms that make machine learning models satisfy probabilistic fairness definitions. Our main contribution, though, is visualizing how these fairness definitions translate to differences in the weights, principal components, and latent representations of models. Our results show that, visually, these fairness definitions lead models to place less stress on minority groups, which is the desired philosophical outcome. We hope that our work can make probabilistic fairness a more digestible concept and can encourage scientists in other fields to think about fairness in terms of data, weights, principal components, and latent representations.
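
To make the probabilistic framing concrete, below is a minimal, self-contained sketch of two ingredients the abstract refers to: a probabilistic fairness definition (demographic parity is used here as one common example; the paper specifies the exact definitions we study) and a principal-component projection of latent embeddings. The helper names and synthetic data are illustrative assumptions, not this repository's actual API.

```python
import numpy as np
from sklearn.decomposition import PCA

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates across two groups:
    |P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|. A fair model drives this toward 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def project_latents(latents, n_components=2):
    """Project latent embeddings onto their top principal components so that
    fair, debiased, and unfair models can be compared in the same 2-D plane."""
    return PCA(n_components=n_components).fit_transform(latents)

# Toy demo: a synthetic "unfair" classifier whose positive rate depends on
# group membership, plus random stand-in latent codes to project and plot.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(group == 0, 0.7, 0.4)).astype(int)
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")

latents = rng.normal(size=(1000, 64))
coords = project_latents(latents)  # shape (1000, 2), ready for scatter-plotting
```

A debiasing method that enforces demographic parity would shrink the printed gap toward zero, while the PCA projection gives a common 2-D plane for side-by-side visual comparison of model representations.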

More Details

See visualizing_ml_fairness.pdf for more details.

About

We visualize the latent embeddings and weights of machine learning models trained on an unfair dataset, comparing fair generative models, debiasing models, and unfair models.
