Welcome to AI Fairness 360. We hope you will use it and contribute to it to help engender trust in AI and make the world more equitable for all.
Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias (Barocas and Selbst).
The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
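As a minimal sketch of the workflow, the following checks one group fairness metric on a built-in dataset and applies one mitigation algorithm. It assumes the raw German credit data files have already been downloaded into the package's data directory; the variable names are illustrative:

from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# 'sex' is one of the protected attributes in the German credit dataset
dataset = GermanDataset()
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# difference in favorable-outcome rates between unprivileged and privileged groups
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print(metric.mean_difference())

# Reweighing adjusts instance weights to remove bias before a model is trained
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)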
Because the toolkit offers such a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created guidance material that can be consulted.
We have developed the package with extensibility in mind. We encourage the contribution of your metrics, explainers, and debiasing algorithms. Please join the community to get started as a contributor. Get in touch with us on Slack (invitation here)!
- Flavio P. Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney, “Optimized Pre-Processing for Discrimination Prevention,” Conference on Neural Information Processing Systems, 2017.
- Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian, “Certifying and Removing Disparate Impact,” ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.
- Moritz Hardt, Eric Price, and Nathan Srebro, “Equality of Opportunity in Supervised Learning,” Conference on Neural Information Processing Systems, 2016.
- Faisal Kamiran and Toon Calders, “Data Preprocessing Techniques for Classification without Discrimination,” Knowledge and Information Systems, 2012.
- Faisal Kamiran, Asim Karim, and Xiangliang Zhang, “Decision Theory for Discrimination-Aware Classification,” IEEE International Conference on Data Mining, 2012.
- Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma, “Fairness-Aware Classifier with Prejudice Remover Regularizer,” Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2012.
- Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger, “On Fairness and Calibration,” Conference on Neural Information Processing Systems, 2017.
- Richard Zemel, Yu (Ledell) Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork, “Learning Fair Representations,” International Conference on Machine Learning, 2013.
- Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell, “Mitigating Unwanted Biases with Adversarial Learning,” AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2018.
- Comprehensive set of group fairness metrics derived from selection rates and error rates (see the example following this list)
- Comprehensive set of sample distortion metrics
- Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar, “A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices,” ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018.
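As a sketch of how these metrics are computed in practice, the ClassificationMetric class covers both selection-rate and error-rate group metrics as well as the inequality indices of Speicher et al. Here, dataset_true and dataset_pred are placeholders for BinaryLabelDataset objects holding ground-truth labels and a classifier's predictions, and the group definitions follow the earlier example:

from aif360.metrics import ClassificationMetric

cm = ClassificationMetric(dataset_true, dataset_pred,
                          unprivileged_groups=unprivileged,
                          privileged_groups=privileged)
print(cm.disparate_impact())         # ratio of selection rates between groups
print(cm.average_odds_difference())  # error-rate-based group fairness metric
print(cm.theil_index())              # inequality index (Speicher et al., 2018)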
Installation is easiest on a Unix system running Python 3. See the additional instructions for Windows and Python 2 as appropriate.
pip install aif360
This package supports both Python 2 and 3. However, for Python 2, the BlackBoxAuditing package must be installed manually.
To run the example notebooks, install the additional requirements as follows:
pip install -r requirements.txt
Clone the latest version of this repository:
git clone https://github.com/IBM/AIF360
Then, navigate to the root directory of the project and run:
pip install .
Follow the same steps above as for Linux/MacOS. Then, follow the instructions to install the appropriate build of TensorFlow, which is used by aif360.algorithms.inprocessing.AdversarialDebiasing. Note: aif360 requires TensorFlow version 1.1.0. For example,
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl
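Once TensorFlow is installed, the algorithm follows the same fit/predict pattern as the rest of the package; a minimal sketch, where dataset_train and dataset_test are placeholder BinaryLabelDataset objects and the group definitions follow the earlier example:

import tensorflow as tf
from aif360.algorithms.inprocessing import AdversarialDebiasing

sess = tf.Session()
# debias=True trains the classifier against an adversary that tries to
# predict the protected attribute from the classifier's predictions
ad = AdversarialDebiasing(unprivileged_groups=unprivileged,
                          privileged_groups=privileged,
                          scope_name='debiasing',
                          debias=True,
                          sess=sess)
ad.fit(dataset_train)
dataset_pred = ad.predict(dataset_test)
sess.close()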
To use aif360.algorithms.preprocessing.OptimPreproc, install cvxpy by following the instructions and be sure to install version 0.4.11, e.g.:
pip install cvxpy==0.4.11
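With cvxpy installed, OptimPreproc is constructed with an optimizer class and options that include a dataset-specific distortion function. The sketch below mirrors the pattern used in the package's demo notebooks; the optim_options values and get_distortion_adult come from the Adult-dataset demo and are illustrative, not the only valid configuration:

from aif360.algorithms.preprocessing import OptimPreproc
from aif360.algorithms.preprocessing.optim_preproc_helpers.opt_tools import OptTools
from aif360.algorithms.preprocessing.optim_preproc_helpers.distortion_functions import get_distortion_adult

optim_options = {
    "distortion_fun": get_distortion_adult,  # dataset-specific distortion metric
    "epsilon": 0.05,                         # discrimination control tolerance
    "clist": [0.99, 1.99, 2.99],             # distortion constraint thresholds
    "dlist": [0.1, 0.05, 0],                 # allowed probability per threshold
}
op = OptimPreproc(OptTools, optim_options,
                  unprivileged_groups=unprivileged,
                  privileged_groups=privileged)
op = op.fit(dataset)
dataset_transf = op.transform(dataset, transform_Y=True)  # transformed features and labels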
Some additional installation is required to use aif360.algorithms.preprocessing.DisparateImpactRemover with Python 2:
git clone https://github.com/algofairness/BlackBoxAuditing
In the root directory of BlackBoxAuditing, run the following to record the location of the bundled weka.jar and include that path file in the package manifest:
echo -n $PWD/BlackBoxAuditing/weka.jar > python2_source/BlackBoxAuditing/model_factories/weka.path
echo "include python2_source/BlackBoxAuditing/model_factories/weka.path" >> MANIFEST.in
pip install --no-deps .
This will produce a minimal installation which satisfies our requirements.
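Once installed, the remover follows the usual fit/transform pattern; a minimal sketch, where dataset is a placeholder for any structured aif360 dataset:

from aif360.algorithms.preprocessing import DisparateImpactRemover

# repair_level ranges from 0 (no change) to 1 (full repair of feature distributions)
di = DisparateImpactRemover(repair_level=1.0)
dataset_repaired = di.fit_transform(dataset)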
Please ask in the Slack channel.