Achieving algorithmic transparency with Shapley Additive Explanations
All content copyright © 2017 QuantumBlack, a McKinsey company
Contents:
• Trade-off between Explainability & Performance
• Shapley additive explanations
• SHAP: illustrative example
• XAI use case 1: clinical operations
• XAI use case 2: driver genome
• Future developments
Explainability vs Performance trade-off
There is a natural trade-off between these two concepts. On the performance-explainability spectrum, neural networks and random forests sit at the high-performance end; linear regression and logistic regression sit at the high-explainability end.
• Performance: provide highly accurate solutions for our clients.
• Explainability: explain why our models perform as they do. How can we interpret the contribution of drivers in black-box models? How can we get drivers for an individual prediction?
• Goal: local interpretations of black-box models.
Explainability vs Performance trade-off: integration
The same picture, now with a question mark: can local interpretations of black-box models (neural networks, random forests) integrate the two, keeping high performance while recovering the explainability of linear and logistic regression?
Methods for XAI
LIME (Local Interpretable Model-agnostic Explanations)1
[Figure 3 of the LIME paper: toy example to present intuition for LIME. The black-box model's complex decision function f (unknown to LIME) is represented by the blue/pink background, which cannot be approximated well by a linear model. The bold red cross is the instance being explained. LIME samples instances, gets predictions using f, and weighs them by the proximity to the instance being explained (represented here by size).]
Algorithm 1: Sparse Linear Explanations using LIME
Require: classifier f, number of samples N
Require: instance x and its interpretable version x′
Require: similarity kernel πx, length of explanation K
  Z ← {}
  for i ∈ {1, 2, 3, ..., N} do
    z′ᵢ ← sample_around(x′)
    Z ← Z ∪ ⟨z′ᵢ, f(zᵢ), πx(zᵢ)⟩
  end for
  w ← K-Lasso(Z, K)    ▷ with z′ᵢ as features, f(z) as target
  return w
1. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/abs/1602.04938
2. Lei et al., Rationalizing Neural Predictions, https://arxiv.org/abs/1606.04155
3. Letham et al., Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, https://arxiv.org/abs/1511.01644
4. Lundberg and Lee, A unified approach to interpreting model predictions, http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions
5. Ribeiro et al., Anchors: High-Precision Model-Agnostic Explanations, https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
6. All images taken from the respective publications/GitHub repos.
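A minimal runnable sketch of Algorithm 1 for continuous features, under simplifying assumptions that are ours rather than the paper's: the interpretable representation x′ is x itself, sampling is Gaussian noise around x, the proximity kernel is an RBF, and K-Lasso is approximated by a Lasso fit followed by keeping the K largest weights (requires a recent scikit-learn for `sample_weight`):

```python
import numpy as np
from sklearn.linear_model import Lasso

def lime_explain(f, x, N=1000, K=5, sigma=1.0, seed=0):
    """Sparse linear explanation of black-box f around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(N, x.shape[0]))        # sample around x
    preds = f(Z)                                                 # black-box predictions
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / sigma**2)   # proximity kernel pi_x
    model = Lasso(alpha=0.01).fit(Z, preds, sample_weight=weights)
    top = np.argsort(-np.abs(model.coef_))[:K]                   # keep K largest weights
    return {int(i): float(model.coef_[i]) for i in top}          # feature index -> weight
```

The returned weights form the locally faithful linear surrogate; their signs and magnitudes play the role of the bar lengths in LIME's explanation plots.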
SHAP (Shapley Additive Explanations)
3 properties
Explanation model1 (class of additive feature attribution models):

$$g(x') = \varphi_0 + \sum_{i=1}^{M} \varphi_i x'_i$$

Local accuracy: require the output of the local explanation model $g(x')$ to match the original model $f(x)$, where $x'$ is the simplified input, i.e. $f(x) = g(x')$.
Missingness: require features missing in the original input to have no attributed impact, i.e. $x'_i = 0 \Rightarrow \varphi_i = 0$.
Consistency: stipulates $\varphi_i(f', x) \ge \varphi_i(f, x)$ for any two models $f$ and $f'$ if the feature's contribution in $f'$ is greater than or equal to its contribution in $f$.
Shapley regression values2, $\varphi_i \in \mathbb{R}$, are the unified measure of additive feature attributions:

$$\varphi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(M - |S| - 1)!}{M!}\,\bigl[f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)\bigr]$$

1. Lundberg and Lee (2017), "A unified approach to interpreting model predictions", https://arxiv.org/abs/1705.07874
2. Lloyd S. Shapley (1953), "A value for n-person games", in: Contributions to the Theory of Games 2(28), pp. 307-317
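As a toy numeric illustration of local accuracy (all values invented): with base value $\varphi_0 = 0.20$ and two non-zero attributions $\varphi_1 = 0.35$ and $\varphi_2 = -0.05$, the explanation model must reproduce the model output exactly:

$$f(x) = g(x') = \varphi_0 + \varphi_1 + \varphi_2 = 0.20 + 0.35 - 0.05 = 0.50$$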
Computing SHAP values
SHAP values, a unified measure of additive feature attributions, $\varphi_i \in \mathbb{R}$:

$$\varphi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(M - |S| - 1)!}{M!}\,\bigl[f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)\bigr]$$

where
F = {all input features}
S ⊆ F \ {i} = a subset of the input features
M = |F| = number of input features
The first bracketed term is the model output with the i-th feature, the second is the output without it, and the factorial factor is the weight of each subset, giving a weighted average over all possible subsets S of F.
Computing SHAP values (a brute-force sketch follows below):
• $f_{S \cup \{i\}}$ is trained with the i-th feature present
• $f_S$ is trained without the i-th feature
• compute the difference $f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)$ for the current input
• retrain the model on all feature subsets $S \subseteq F \setminus \{i\}$
• take the weighted average of all possible differences
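A brute-force implementation of this definition (exponential in the number of features, so only feasible for small M; the `value` callable is a hypothetical stand-in for the retrained models $f_S$ evaluated on the current input):

```python
import itertools
from math import factorial

def shapley_value(i, features, value):
    """Exact Shapley value of feature i: enumerate all subsets S of F \\ {i},
    weight each by |S|!(M-|S|-1)!/M!, and sum the weighted marginal contributions."""
    M = len(features)
    others = [f for f in features if f != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in itertools.combinations(others, k):
            weight = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
            phi += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy value function standing in for f_S(x_S): additive terms plus one interaction.
def value(S):
    return 2.0 * ("a" in S) + 1.0 * ("b" in S) + 0.5 * ("a" in S and "c" in S)

print(shapley_value("a", ["a", "b", "c"], value))  # 2.25 = 2.0 + half the a-c interaction
```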
SHAP in practice
Implementations:
• Kernel SHAP1 = LIME + Shapley values, where the loss function $L$, weighting kernel $\pi$ and regularisation term $\Omega$ are chosen so that LIME meets the Shapley properties.
• Deep SHAP1 = DeepLIFT + Shapley values.
• Linear SHAP1, Low-Order SHAP1, Max SHAP1.
• Tree SHAP2: speed-up from $O(TL2^M)$ to $O(TLD^2)$.
Integration: pip install shap, plus beautiful out-of-the-box JS visualisations.
1. Lundberg and Lee (2017), "A unified approach to interpreting model predictions", https://arxiv.org/abs/1705.07874
2. Lundberg, Erion, Lee (2018), "Consistent Individualized Feature Attribution for Tree Ensembles", https://arxiv.org/abs/1802.03888
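A minimal Kernel SHAP sketch, assuming the shap package's 2018-era API; the model and data here are synthetic placeholders, not the talk's:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

# Kernel SHAP is model-agnostic: it needs only a prediction function and a
# background sample used to integrate out "missing" features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
phi = explainer.shap_values(X[:1])   # Shapley values for one instance
print(phi.shape)                     # (1, 8) in the legacy API: one phi per feature
```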
SHAP: illustrative example
Task: given demographic data, classify adult income groups
Dataset: US 1994 Census data1
Base model: sklearn Random Forest classifier
Explanation model: TreeSHAP
1. 1994 Census database, donated by B. Becker: https://archive.ics.uci.edu/ml/datasets/Adult
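A hedged sketch of this setup; the slide's exact code isn't shown, and `shap.datasets.adult` is assumed here as a convenient bundled copy of the same dataset:

```python
import shap
from sklearn.ensemble import RandomForestClassifier

X, y = shap.datasets.adult()   # 1994 US Census ("Adult") data bundled with shap
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # Tree SHAP: exact values in polynomial time
shap_values = explainer.shap_values(X)   # legacy API: one array per class for classifiers
shap.summary_plot(shap_values[1], X)     # summary plot for the ">$50k" class
```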
RF importance scores vs. SHAP's explanations
'Native' RF feature importance scores (the SHAP summary plot appears alongside in the original slide):
Marital Status    0.307
Capital Gain      0.283
Education-Num     0.138
Capital Loss      0.081
Hours per week    0.076
Age               0.065
Sex               0.022
Occupation        0.017
Workclass         0.008
Race              0.002
Country           0.002
SHAP: individualised explanations
Typical example: force plots ranging from a low to a high probability of >$50k earnings.
Legend for categorical values:
Sex: 0 = F, 1 = M | Marital status: 2 = never married, 3 = married, spouse absent
Occupation: 0 = Adm-clerical, 5 = Sales
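A sketch of how such an individualised force-plot explanation could be produced, reusing the `explainer`, `shap_values` and `X` from the earlier Adult-dataset sketch:

```python
shap.initjs()  # load shap's JS visualisation library (for notebook rendering)
i = 0          # index of the individual to explain
shap.force_plot(explainer.expected_value[1], shap_values[1][i], X.iloc[i])
```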
SHAP: individualised explanations across the cohort
[Figure: stacked individual explanations; y-axis: net Shapley value]
Use case: Clinical Operations
Task: given a new drug trial, predict which hospitals will enrol more patients (high enrolment rate).
[Figure: Hospital A vs. Hospital B]
Two questions raised frequently by our client:
How can we interpret the predictions given by your black-box model? What drives the direction (high or low) of the predicted enrolment rate?
How can we get personalised explanations? Why does hospital B enrol more patients than hospital A?
Local explanations (enrolment rate)
Hospital A: local prediction = 4.96 pt/month, black-box prediction = 4.99 pt/month. Top drivers: number of trials in city, prior PI experience, features E, F, G.
Hospital B: local prediction = 2.07 pt/month, black-box prediction = 2.19 pt/month. Top drivers: number of trials in city, features B, C, D, prior activation time.
Use case: Driving Safety
[Figure: Journey A vs. Journey B]
Two questions raised frequently by our client:
How can we interpret the predictions given by your black-box model? What drives the direction (high or low) of the predicted probability of an accident?
Drivers can use personalised explanations to improve their driving behaviour. Also, if their insurance premium changes, they have the right to know the cause.
The proposed solution uses Logistic Regression and Random Forest, plus LIME/SHAP for interpretability
• Feature engineering (~1500 features → ~100 features): creation of single and multi-dimensional features based on a hypothesis tree.
• Logistic regression: elastic-net regularisation to prevent overfitting to the training set.
• Random Forest: can discover complex non-linear relationships between features.
• LIME/SHAP: personalised interpretations of the RF results (a sketch follows below).
SOURCE: Team analysis
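A hedged sketch of this pipeline; the feature-engineering step is stubbed with synthetic data, and all names and parameter choices are illustrative assumptions rather than the project's:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in for the ~100 engineered features and the accident label.
X_eng, y = make_classification(n_samples=2000, n_features=100, random_state=0)

# Interpretable baseline: elastic-net regularised logistic regression.
lr = LogisticRegression(penalty="elasticnet", solver="saga",
                        l1_ratio=0.5, max_iter=5000).fit(X_eng, y)

# Non-linear model: the random forest picks up interactions the LR misses.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_eng, y)

# Personalised interpretations of the RF results (the deck mentions LIME/SHAP;
# Tree SHAP is shown here).
explainer = shap.TreeExplainer(rf)
driver_profile = explainer.shap_values(X_eng)[1][0]  # attributions for one driver
```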
Example of a driver profile
[Force plot; top SHAP drivers:] rest time, driving experience, duration of driving per month, and anonymised features D through J.
Future developments
Model-agnostic: LIME, Kernel SHAP <- drive development & community interest
Model-specific: DeepLIFT (NNs), TreeSHAP (XGBoost, RFs) <- drive implementation
SOURCE: Team analysis
Individualised explanations ≠ Transparency
Explaining one instance: the model takes the data and prediction (sneeze, weight, headache, no fatigue, age → flu), the explainer (LIME) turns that into an explanation (sneeze, headache, no fatigue), and a human makes the decision.
Towards transparency: add a pick step that selects a number of representative examples from the dataset and present their explanations, so the human judges the model across the dataset rather than on a single instance.
Explanation model ≠ causality
[Diagram: a network over features such as age, count of complaints, number of meetings, count of customers per month, sales meetings for product A, number of emails, customer tenure, deals closed, performance rating and revenue, plus anonymised features 1-11. Feature attributions over such a model do not establish causal relationships.]
SOURCE: Team analysis
Questions?