A curated list of GitHub repositories related to interpretable machine learning.
- https://github.com/jphall663/awesome-machine-learning-interpretability
- https://github.com/h2oai/mli-resources
- https://github.com/jphall663/jsm_2018_paper
- https://github.com/lopusz/awesome-interpretable-machine-learning
- https://github.com/pbiecek/xai_resources
- https://github.com/jphall663/xai_misconceptions
- https://github.com/pbiecek/DALEX
- https://github.com/pbiecek/breakDown
- https://github.com/ModelOriented/DALEX2
- https://github.com/MI2DataLab/live
- https://github.com/bradleyboehmke/CinDay-RUG-IML-2018
- https://github.com/compstat-lmu/imlplots
- https://github.com/thomasp85/lime
- https://github.com/jphall663/interpretable_machine_learning_with_python
- https://github.com/jphall663/diabetes_use_case
- https://github.com/navdeep-G/interpretable-ml
- https://github.com/dmesquita/explaining_predictions_with_LIME
- https://github.com/hungry-wook/ml4knowledge
- https://github.com/slundberg/shap
- https://github.com/nyuvis/explanation_explorer
- https://github.com/nyuvis/partial_dependence
- https://github.com/marcotcr/lime
- https://github.com/CSAILVision/gandissect
- https://github.com/keiserlab/plaquebox-paper
- https://github.com/atulshanbhag/Layerwise-Relevance-Propagation
- https://github.com/gudovskiy/e2x
- https://github.com/heytitle/thesis-designing-recurrent-neural-networks-for-explainability
- https://github.com/ramprs/grad-cam
- https://github.com/roberto1648/deep-explanation-using-ai-for-explaining-the-results-of-ai
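
Several of the repos above (marcotcr/lime, thomasp85/lime, dmesquita/explaining_predictions_with_LIME) center on LIME-style local surrogate explanations: perturb an input, query the black-box model, and fit a simple weighted linear model around that point. A minimal self-contained sketch of that idea in plain NumPy (the function name and parameters here are illustrative, not the API of any listed repo):

```python
import numpy as np

def local_linear_explanation(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Explain predict_fn near x by fitting a proximity-weighted linear
    surrogate to predictions on Gaussian perturbations of x (the core
    LIME idea, simplified)."""
    rng = np.random.default_rng(seed)
    # sample perturbations around the instance being explained
    X = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = predict_fn(X)
    # proximity kernel: perturbations closer to x get more weight
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # weighted least squares with an intercept column
    A = np.hstack([X, np.ones((n_samples, 1))])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coef[:-1]  # per-feature local importances (intercept dropped)

# toy black box: nonlinear in feature 0, linear in feature 1
f = lambda X: np.sin(X[:, 0]) + 2.0 * X[:, 1]
x = np.array([0.0, 0.0])
weights = local_linear_explanation(f, x)
```

Near `x = (0, 0)` the local gradient of the toy model is roughly `(cos(0), 2) = (1, 2)`, so the fitted weights should land close to those values. The real LIME implementations add interpretable feature representations and sparsity, which this sketch omits.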