
AI Fairness 360 (AIF360 v0.1.0)

Welcome to AI Fairness 360. We hope you will use it and contribute to it to help engender trust in AI and make the world more equitable for all.

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Biases in training data, due to either prejudice in labels or under-/over-sampling, yield models with unwanted bias (Barocas and Selbst).

The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Because the toolkit offers such a comprehensive set of capabilities, it may be difficult to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.

We have developed the package with extensibility in mind. We encourage the contribution of your metrics, explainers, and debiasing algorithms. Please join the community to get started as a contributor. Get in touch with us on Slack (invitation here)!

Supported bias mitigation algorithms

Supported fairness metrics

Setup

Installation is easiest on a Unix system running Python 3. See the additional instructions for Windows and Python 2 as appropriate.

Linux and macOS

Installation with pip

pip install aif360

This package supports both Python 2 and 3. However, for Python 2, the BlackBoxAuditing package must be installed manually.

To run the example notebooks, install the additional requirements as follows:

pip install -r requirements.txt

Manual installation

Clone the latest version of this repository:

git clone https://github.com/IBM/AIF360

Then, navigate to the root directory of the project and run:

pip install .

Windows

Follow the same steps above as for Linux/macOS. Then, follow the instructions to install the appropriate build of TensorFlow, which is used by aif360.algorithms.inprocessing.AdversarialDebiasing. Note: aif360 requires TensorFlow version 1.1.0. For example,

pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl

To use aif360.algorithms.preprocessing.OptimPreproc, install cvxpy by following the instructions and be sure to install version 0.4.11, e.g.:

pip install cvxpy==0.4.11

Python 2

Some additional installation is required to use aif360.algorithms.preprocessing.DisparateImpactRemover with Python 2:

git clone https://github.com/algofairness/BlackBoxAuditing

In the root directory of BlackBoxAuditing, run:

echo -n $PWD/BlackBoxAuditing/weka.jar > python2_source/BlackBoxAuditing/model_factories/weka.path
echo "include python2_source/BlackBoxAuditing/model_factories/weka.path" >> MANIFEST.in
pip install --no-deps .

This will produce a minimal installation that satisfies our requirements.

Using AIF360

Citing AIF360

Please ask in the Slack channel.
