Machine Learning Interpretability with LIME and SHAP

This project was done as part of the ESMA4016 Machine Learning and Data Mining course at UPRM during the Spring 2020 semester. The main focus was on studying the LIME and SHAP interpretability methods to analyze the predictions of classification and regression models. The project is provided as Google Colab notebooks, which simplifies the installation of software dependencies and the use of the code, and also enables GPU acceleration for Convolutional Neural Network (CNN) training.
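As an illustration of the kind of analysis performed in the notebooks, the sketch below uses LIME to explain a single prediction of a tabular classifier. This is a minimal example assuming the lime and scikit-learn packages; the iris dataset and random forest are illustrative stand-ins, not the course's actual models or data.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a classifier to explain (stand-in model, not the course notebooks' model).
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple local surrogate model around one instance and reports
# how each feature contributed to the prediction for that instance.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=4, labels=[0]
)
print(explanation.as_list(label=0))  # per-feature contributions toward class 0
```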

What is interpretability?


Interpretability refers to how well a model's predictions can be explained in human-understandable terms. LIME and SHAP are both methods that can be used to extract interpretable explanations from complex (and typically more accurate) models.
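For instance, a minimal SHAP sketch (assuming the shap package and a tree-based scikit-learn regressor; the diabetes dataset here is an illustrative stand-in) could look like this:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a regression model to explain (stand-in data and model).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each feature's additive contribution
# to moving a prediction away from the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The contributions for one sample sum (with the expected value) to its prediction.
print("expected value:", explainer.expected_value)
print("contributions for the first sample:", shap_values[0])
```

Because SHAP values are additive, summing a row's contributions with the expected value recovers the model's prediction for that sample, which is what makes the attributions straightforward to interpret.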
