Welcome to AI Fairness 360. We hope you will use it and contribute to it to help engender trust in AI and make the world more equitable for all.
Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias (Barocas and Selbst).
The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
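To make the workflow concrete, here is a minimal sketch of the typical pattern: wrap data in an AIF360 dataset object, compute a fairness metric, apply a mitigation algorithm, and re-check the metric. The toy DataFrame and its column names (`sex`, `score`, `label`) are hypothetical, used only for illustration:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable (1) / unfavorable (0) outcome.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.5, 0.2],
    'label': [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'])

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Statistical parity difference: P(Y=1 | unprivileged) - P(Y=1 | privileged)
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Mean difference before:', metric.mean_difference())

# Reweighing (Kamiran and Calders, 2012) reweights instances to remove the bias
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print('Mean difference after:', metric_transf.mean_difference())
```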
Because the package offers such a comprehensive set of capabilities, it can be difficult to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created guidance material that can be consulted.
We have developed the package with extensibility in mind. We encourage the contribution of your metrics, explainers, and debiasing algorithms. Please join the community to get started as a contributor. Get in touch with us on Slack (invitation here)!
Supported bias mitigation algorithms:

- Optimized Preprocessing (Calmon et al., 2017)
- Disparate Impact Remover (Feldman et al., 2015)
- Equalized Odds Postprocessing (Hardt et al., 2016)
- Reweighing (Kamiran and Calders, 2012)
- Reject Option Classification (Kamiran et al., 2012)
- Prejudice Remover Regularizer (Kamishima et al., 2012)
- Calibrated Equalized Odds Postprocessing (Pleiss et al., 2017)
- Learning Fair Representations (Zemel et al., 2013)
- Adversarial Debiasing (Zhang et al., 2018)

Supported fairness metrics:

- Comprehensive set of group fairness metrics derived from selection rates and error rates
- Comprehensive set of sample distortion metrics
- Generalized Entropy Index (Speicher et al., 2018)
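The selection-rate and error-rate metrics, as well as the Generalized Entropy Index, are exposed through `ClassificationMetric`, which compares a dataset's true labels against a classifier's predictions. A small sketch, where the toy data and the predicted labels are hypothetical:

```python
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

df = pd.DataFrame({'sex':   [1, 1, 0, 0, 1, 0],
                   'label': [1, 0, 1, 0, 1, 1]})
dataset_true = BinaryLabelDataset(df=df, label_names=['label'],
                                  protected_attribute_names=['sex'])

# Copy the dataset and swap in a (made-up) classifier's predicted labels
dataset_pred = dataset_true.copy()
dataset_pred.labels = np.array([[1], [0], [0], [0], [1], [0]])

metric = ClassificationMetric(dataset_true, dataset_pred,
                              unprivileged_groups=[{'sex': 0}],
                              privileged_groups=[{'sex': 1}])
print(metric.disparate_impact())           # selection-rate-based
print(metric.average_odds_difference())    # error-rate-based
print(metric.generalized_entropy_index())  # Speicher et al., 2018
```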
Installation is easiest on a Unix system running Python 3. See the additional instructions for Windows and Python 2 as appropriate.
```bash
pip install aif360
```

This package supports both Python 2 and 3. However, for Python 2, the BlackBoxAuditing package must be installed manually.
To run the example notebooks, install the additional requirements as follows:
```bash
pip install -r requirements.txt
```
Clone the latest version of this repository:
```bash
git clone https://github.com/IBM/AIF360
```
Then, navigate to the root directory of the project and run:
```bash
pip install .
```
On Windows, follow the same steps above as for Linux/macOS. Then, follow the instructions to install the appropriate build of TensorFlow, which is used by `aif360.algorithms.inprocessing.AdversarialDebiasing`. Note: `aif360` requires TensorFlow version 1.1.0. For example:

```bash
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl
```
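Once the TensorFlow build is installed, the class can be used like the other algorithms. A rough sketch with toy random data; the parameters shown are illustrative, and TF 1.x session semantics are assumed:

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.inprocessing import AdversarialDebiasing

# Toy random data; 'sex' is the protected attribute
df = pd.DataFrame({'sex':   np.random.randint(0, 2, 100),
                   'feat':  np.random.rand(100),
                   'label': np.random.randint(0, 2, 100)})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'])

sess = tf.Session()
ad = AdversarialDebiasing(unprivileged_groups=[{'sex': 0}],
                          privileged_groups=[{'sex': 1}],
                          scope_name='debias', sess=sess, num_epochs=10)
ad.fit(dataset)                     # jointly trains classifier and adversary
dataset_pred = ad.predict(dataset)  # dataset with debiased predictions
sess.close()
```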
To use `aif360.algorithms.preprocessing.OptimPreproc`, install `cvxpy` by following the instructions, and be sure to install version 0.4.11, e.g.:

```bash
pip install cvxpy==0.4.11
```
Some additional installation is required to use `aif360.algorithms.preprocessing.DisparateImpactRemover` with Python 2:
```bash
git clone https://github.com/algofairness/BlackBoxAuditing
```

In the root directory of `BlackBoxAuditing`, run:

```bash
echo -n $PWD/BlackBoxAuditing/weka.jar > python2_source/BlackBoxAuditing/model_factories/weka.path
echo "include python2_source/BlackBoxAuditing/model_factories/weka.path" >> MANIFEST.in
pip install --no-deps .
```
This produces a minimal installation that satisfies our requirements.
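After installation, `DisparateImpactRemover` follows the same transformer pattern as the other preprocessing algorithms. A minimal sketch with toy data (column names are illustrative):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import DisparateImpactRemover

df = pd.DataFrame({'sex':   [1, 1, 0, 0],
                   'score': [0.8, 0.6, 0.4, 0.9],
                   'label': [1, 1, 0, 1]})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'])

# repair_level=1.0 fully repairs feature distributions (Feldman et al., 2015)
di = DisparateImpactRemover(repair_level=1.0)
dataset_repaired = di.fit_transform(dataset)
```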
If you have questions, please ask in the Slack channel.