Abstract:
Explainability in the age of the EU GDPR is becoming an increasingly pertinent consideration for machine learning. At QuantumBlack, we address the traditional accuracy vs. interpretability trade-off by leveraging modern XAI techniques such as LIME and SHAP to enable individualised explanations, without necessarily limiting the utility and performance of the otherwise ‘black-box’ models. The talk focuses on Shapley additive explanations (Lundberg and Lee 2017), which integrate Shapley values from game theory to produce consistent and locally accurate explanations; it provides illustrative examples and touches upon the wider XAI theory.
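To make the idea concrete ahead of the talk, here is a minimal sketch of exact Shapley values computed from first principles for a single prediction. It is not the SHAP library itself, just the underlying game-theoretic formula: each feature's contribution is a weighted average of its marginal effect over all subsets of the other features, with "absent" features replaced by baseline values. The toy linear model and baseline are illustrative assumptions, not from the talk.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction of `predict` at instance `x`.

    predict  : function taking a full feature vector and returning a scalar
    x        : the instance being explained
    baseline : reference values substituted for 'absent' features
    """
    n = len(x)
    features = range(n)

    def value(subset):
        # Features in `subset` keep the instance's values; the rest
        # are set to the baseline (the coalition's "missing" players).
        z = [x[i] if i in subset else baseline[i] for i in features]
        return predict(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy linear model (hypothetical): for linear models, the Shapley value
# of feature i reduces to coefficient_i * (x_i - baseline_i).
model = lambda z: 3.0 * z[0] + 2.0 * z[1] - 1.0 * z[2]
x = [1.0, 1.0, 1.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)  # [3.0, 2.0, -1.0]
```

The "local accuracy" property the abstract alludes to falls out directly: the attributions sum to the difference between the model's prediction at `x` and at the baseline. The exact computation above is exponential in the number of features, which is why SHAP uses model-specific approximations in practice.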
Bio:
Dr Torgyn Shaikhina is a Data Scientist at QuantumBlack, a STEM Ambassador, and the founder of the Next Generation Programmers outreach initiative. Her background is in decision support systems for healthcare and biomedical engineering, with a focus on machine learning with limited information.
Contents:
• Explainability vs Performance trade-off
• Shapley additive explanations
• SHAP: illustrative example
• Use case 1: clinical operations
• Use case 2: driver genome
• Future developments