Paper and talk from KDD 2019 XAI Workshop


kdd_2019

Materials for 2019 KDD XAI Workshop: https://xai.kdd2019.a.intuit.com/


The accepted workshop paper is also available on arXiv.

On the Art and Science of Explainable Machine Learning: Techniques, Recommendations, and Responsibilities

Abstract

This text discusses several popular explanatory methods that go beyond the error measurements and plots traditionally used to assess machine learning models. Some of the explanatory methods are accepted tools of the trade while others are rigorously derived and backed by long-standing theory. The methods vary in terms of scope, fidelity, and suitable application domain, and include decision tree surrogate models, individual conditional expectation (ICE) plots, local interpretable model-agnostic explanations (LIME), partial dependence plots, and Shapley explanations. Along with descriptions of these methods, this text presents real-world usage recommendations supported by a use case and public, in-depth software examples for reproducibility.
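As a flavor of the techniques the abstract lists, the sketch below shows one of them, a decision tree surrogate model, in a minimal form. This is an illustrative example, not code from the paper or this repository: the dataset, models, and parameter choices are assumptions, and scikit-learn is used for convenience.

```python
# Hypothetical sketch of a decision tree surrogate model (not the paper's code):
# a shallow, interpretable tree is trained to mimic the predictions of a
# "black-box" model, and its fidelity to that model is measured.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real modeling problem.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The complex model whose behavior we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: trained on the black-box model's predictions, not the true labels,
# so its splits approximate the black-box decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black-box predictions.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The depth cap is the key design choice: a shallow tree stays human-readable (its splits can be printed or plotted directly), at the cost of some fidelity to the black box.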


Keynote Talk slides

Proposed Guidelines for the Responsible Use of Explainable Machine Learning

Abstract

Explainable artificial intelligence (XAI) enables human learning from machine learning, human appeal of automated model decisions, regulatory compliance, and white-hat hacking and security audits of ML models. XAI techniques have been implemented in numerous open source and commercial packages, and XAI is also an important, mandatory, or embedded aspect of commercial predictive modeling in industries like financial services. However, like many technologies, XAI can be misused, particularly as a faulty safeguard for harmful black boxes and for other malevolent purposes like model stealing, reconstruction of sensitive training data, and “fairwashing”. This presentation discusses a few guidelines and best practices to help practitioners avoid unintentional misuse, identify intentional abuse, and generally make the most of currently available XAI techniques.

