Explainable AI in Industry
AAAI 2020 Tutorial
Freddy Lecue, Krishna Gade, Sahin Cem Geyik,
Krishnaram Kenthapadi, Luke Merrick, Varun Mithal,
Ankur Taly, Riccardo Guidotti, Pasquale Minervini
https://xaitutorial2020.github.io 1
Outline
2
Agenda
● Part I: Introduction and Motivation
○ Motivation, Definitions, Properties, Evaluation
○ Challenges for Explainable AI @ Scale
● Part II: Explanation in AI (not only Machine Learning!)
○ From Machine Learning to Knowledge Representation and Reasoning and Beyond
● Part III: Explainable Machine Learning (from a Machine Learning Perspective)
● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective)
● Part V: Case Studies from Industry
○ Applications, Lessons Learned, and Research Challenges
3
Scope
4
AI Adoption: Requirements
Trustable AI, Valid AI, Responsible AI, Privacy-preserving AI, Explainable AI
• Human Interpretable AI
• Machine Interpretable AI
What is the rationale?
Introduction and Motivation
6
Explanation - From a Business Perspective
7
Business to Customer AI
Critical Systems (1)
Critical Systems (2)
COMPAS recidivism score: bias against black defendants
… but not only Critical Systems (1)
community.fico.com/s/explainable-machine-learning-challenge
https://www.ft.com/content/e07cee0c-3949-11e7-821a-6027b8a20f23
▌Finance:
• Credit scoring, loan approval
• Insurance quotes
… but not only Critical Systems (2)
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, Noemie Elhadad: Intelligible Models
for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. KDD 2015: 1721-1730
Patricia Hannon, https://med.stanford.edu/news/all-news/2018/03/researchers-say-use-of-ai-in-medicine-raises-ethical-questions.html
▌Healthcare
• Applying ML methods in medical care
is problematic.
• AI as 3rd-party actor in physician-
patient relationship
• Responsibility, confidentiality?
• Learning must be done with available
data; cannot randomize the care given to
patients!
• Must validate models before use.
… but not only Critical Systems (3)
Black-box AI creates business risk for Industry
● Business Owner: Can I trust our AI decisions?
● Internal Audit, Regulators: Are these AI system decisions fair?
● Customer Support: How do I answer this customer complaint?
● IT & Operations: How do I monitor and debug this model?
● Data Scientists: Is this the best model that can be built?
● Customer receiving a poor decision: Why am I getting this decision? How can I get a better decision?
Black-box AI creates confusion and doubt
Explanation - From a Model Perspective
16
Why Explainability: Debug (Mis-)Predictions
17
Top label: “clog”
Why did the network label
this image as “clog”?
18
Why Explainability: Improve ML Model
Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
Why Explainability: Verify the ML Model / System
19
Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
20
Why Explainability: Learn New Insights
Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
21
Why Explainability: Learn Insights in the Sciences
Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
Explanation - From a Regulatory Perspective
22
● Citizenship: Immigration Reform and Control Act
● Disability status: Rehabilitation Act of 1973; Americans with Disabilities Act of 1990
● Race: Civil Rights Act of 1964
● Age: Age Discrimination in Employment Act of 1967
● Sex: Equal Pay Act of 1963; Civil Rights Act of 1964
And more...
Why Explainability: Laws against Discrimination
23
Fairness, Privacy, Transparency, Explainability
24
GDPR Concerns Around Lack of Explainability in AI
“
Companies should commit to ensuring
systems that could fall under GDPR, including
AI, will be compliant. The threat of sizeable
fines of €20 million or 4% of global
turnover provides a sharp incentive.
Article 22 of GDPR empowers individuals with
the right to demand an explanation of how
an AI system made a decision that affects
them.
”
- VP, European Commission
27
Why Explainability: Growing Global AI Regulation
● GDPR: Article 22 empowers individuals with the right to demand an explanation of how an
automated system made a decision that affects them.
● Algorithmic Accountability Act 2019: Requires companies to provide an assessment of the risks posed by
the automated decision system to the privacy or security of consumers' personal information, and of the
risks that contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers.
● California Consumer Privacy Act: Requires companies to rethink their approach to capturing,
storing, and sharing personal data to align with the new requirements by January 1, 2020.
● Washington Bill 1655: Establishes guidelines for the use of automated decision systems to
protect consumers, improve transparency, and create more market predictability.
● Massachusetts Bill H.2701: Establishes a commission on automated decision-making,
transparency, fairness, and individual rights.
● Illinois House Bill 3415: States that predictive data analytics used in determining creditworthiness or in hiring
decisions may not include information that correlates with the applicant's race or zip code.
29
SR 11-7 and OCC regulations for Financial Institutions
Model Diagnostics
Root Cause Analytics
Performance monitoring
Fairness monitoring
Model Comparison
Cohort Analysis
Explainable Decisions
API Support
Model Launch Signoff
Model Release Mgmt
Model Evaluation
Compliance Testing
Model Debugging
Model Visualization
Explainable
AI
Train
QA
Predict
Deploy
A/B Test
Monitor
Debug
Feedback Loop
“Explainability by Design” for AI products
AI @ Scale - Challenges for Explainable AI
31
LinkedIn operates the largest professional network on the Internet
● 645M+ members
● 30M+ companies are represented on LinkedIn
● 90K+ schools listed (high school & college)
● 35K+ skills listed
● 20M+ open jobs on LinkedIn Jobs
● 280B feed updates
The AWS ML Stack
Broadest and most complete set of Machine Learning capabilities
● AI Services: Vision (Amazon Rekognition), Speech (Amazon Polly, Amazon Transcribe +Medical), Text (Amazon Comprehend +Medical, Amazon Translate, Amazon Textract), Search (Amazon Kendra), Chatbots (Amazon Lex), Personalization (Amazon Personalize), Forecasting (Amazon Forecast), Fraud (Amazon Fraud Detector), Development (Amazon CodeGuru), Contact Centers (Contact Lens for Amazon Connect)
● ML Services: Amazon SageMaker (Ground Truth, Augmented AI, SageMaker Neo, built-in algorithms, SageMaker Notebooks, SageMaker Experiments, model tuning, SageMaker Debugger, SageMaker Autopilot, model hosting, SageMaker Model Monitor, SageMaker Studio IDE)
● ML Frameworks & Infrastructure: Deep Learning AMIs & Containers, GPUs & CPUs, Elastic Inference, Inferentia, FPGA
Explanation - In a Nutshell
34
What is Explainable AI?
Data
Black-Box
AI
AI
product
Confusion with Today’s AI Black Box
● Why did you do that?
● Why did you not do that?
● When do you succeed or fail?
● How do I correct an error?
Black Box AI
Decision,
Recommendation
Clear & Transparent Predictions
● I understand why
● I understand why not
● I know why you succeed or fail
● I understand, so I trust you
Explainable AI
Data
Explainable
AI
Explainable
AI Product
Decision
Explanation
Feedback
- Humans may have follow-up questions
- Explanations cannot answer all users’ concerns
Weld, D., and Gagan Bansal. "The challenge of crafting
intelligible intelligence." Communications of the ACM (2018).
Example of an End-to-End XAI System
Learning techniques spanning the accuracy vs. explainability (interpretability) trade-off:
● Neural Nets (CNN, GAN, RNN)
● Ensemble Methods (Random Forest, XGB)
● Statistical Models (AOG, SVM)
● Graphical Models (Bayesian Belief Nets, SLR, CRF, HBN, MLN, Markov Models)
● Decision Trees, Linear Models
● Non-linear, polynomial, and quasi-linear functions
• Challenges:
• Supervised
• Unsupervised learning
• Approach:
• Representation Learning
• Stochastic selection
• Output:
• Correlation
• No causation
How to Explain? Accuracy vs. Explainability
Oxford Dictionary of
English
XAI Definitions - Explanation vs. Interpretation
Text, Tabular, Images
On the Role of Data in XAI
KDD 2019 Tutorial on Explainable AI in Industry - https://sites.google.com/view/kdd19-explainable-ai-tutorial
Evaluation (1) - Perturbation-based Approaches
Evaluation criteria for Explanations [Miller, 2017]
○ Truth & probability
○ Usefulness, relevance
○ Coherence with prior belief
○ Generalization
Cognitive chunks = basic explanation units (for different explanation needs)
○ Which basic units for explanations?
○ How many?
○ How to compose them?
○ Uncertainty & end users?
[Doshi-Velez and Kim 2017; Poursabzi-Sangdeh et al. 2018]
Evaluation (2) - Human (Role)-based Evaluation is Essential… but too often based on size!
● Comprehensibility: How much effort is needed for a correct human interpretation?
● Succinctness: How concise and compact is the explanation?
● Actionability: What can one do with the explanation, what actions does it enable?
● Reusability: Could the explanation be personalized?
● Accuracy: How accurate and precise is the explanation?
● Completeness: Is the explanation complete, partial, or restricted?
Source: Accenture Point of View. Understanding Machines: Explainable AI. Freddy Lecue, Dadong Wan
Evaluation (3) - XAI: One Objective, Many Metrics
Explanation in AI (not only Machine Learning!)
43
Machine
Learning
Computer
Vision
Search
Planning
KRR
NLP
Game
Theory
MAS
Robotics
Artificial
Intelligence
UAI
XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
How to summarize the reasons (motivation, justification, understanding) for an AI system's behavior, and explain the causes of its decisions?
● Machine Learning: Which features are responsible for the classification? (Surrogate Models, Dependency Plots, Feature Importance)
● Computer Vision: Which complex features are responsible for the classification? (Saliency Maps, Uncertainty Maps)
● Game Theory: Which combination of features is optimal? (Shapley Values)
● Search and Constraint Satisfaction: Which constraints can be relaxed? (Conflicts Resolution)
● Planning: Which actions are responsible for a plan? (Plan Refinement)
● KRR: Which axiom is responsible for an inference (e.g., classification)? How to find the right root causes? (Abduction, Diagnosis)
● MAS: Which agent strategy & plan? Which player contributes most? Why such a conversational flow? (Strategy Summarization)
● NLP: Which entity is responsible for the classification? (Machine Learning based)
● Robotics: Which decisions, or combination of multimodal decisions, lead to an action? (Narrative-based)
● UAI: Uncertainty as an alternative to explanation
Feature Importance
Partial Dependence Plot
Individual Conditional Expectation
Sensitivity Analysis
Naive Bayes model
Igor Kononenko. Machine learning for medical diagnosis:
history, state of the art and perspective. Artificial Intelligence
in Medicine, 23:89–109, 2001.
Counterfactual
What-if
Brent D. Mittelstadt, Chris
Russell, Sandra Wachter:
Explaining Explanations in AI.
FAT 2019: 279-288
Rory Mc Grath, Luca Costabello,
Chan Le Van, Paul Sweeney,
Farbod Kamiab, Zhao Shen,
Freddy Lécué: Interpretable Credit
Application Predictions With
Counterfactual Explanations.
CoRR abs/1811.05245 (2018)
Interpretable Models:
• Decision Trees, Lists and
Sets,
• GAMs,
• GLMs,
• Linear regression,
• Logistic regression,
• KNNs
Overview of Explanation in Machine Learning (1)
Auto-encoder / Prototype
Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin: Deep
Learning for Case-Based Reasoning Through Prototypes: A
Neural Network That Explains Its Predictions. AAAI 2018:
3530-3537
Surrogate Model
Mark Craven, Jude W. Shavlik: Extracting Tree-Structured
Representations of Trained Networks. NIPS 1995: 24-30
Attribution for Deep
Network (Integrated gradient-based)
Mukund Sundararajan, Ankur Taly, and Qiqi Yan.
Axiomatic attribution for deep networks. In ICML,
pp. 3319–3328, 2017.
Attention Mechanism
Avanti Shrikumar, Peyton Greenside, Anshul
Kundaje: Learning Important Features Through
Propagating Activation Differences. ICML 2017:
3145-3153
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine
translation by jointly learning to align and translate.
International Conference on Learning Representations,
2015
Edward Choi, Mohammad Taha Bahadori, Jimeng Sun,
Joshua Kulas, Andy Schuetz, Walter F. Stewart: RETAIN: An
Interpretable Predictive Model for Healthcare using
Reverse Time Attention Mechanism. NIPS 2016: 3504-
3512
Chaofan Chen, Oscar Li, Alina Barnett, Jonathan Su, Cynthia
Rudin: This looks like that: deep learning for interpretable
image recognition. CoRR abs/1806.10574 (2018)
Overview of Explanation in Machine Learning (2)
●Artificial Neural Network
Uncertainty Map
Saliency Map
Alex Kendall, Yarin Gal: What Uncertainties Do We Need in Bayesian Deep Learning for
Computer Vision? NIPS 2017: 5580-5590
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, Been Kim:
Sanity Checks for Saliency Maps. NeurIPS 2018: 9525-9536
Visual Explanation
Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele,
Trevor Darrell: Generating Visual Explanations. ECCV (4) 2016: 3-19
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba:
Network Dissection: Quantifying Interpretability of Deep Visual
Representations. CVPR 2017: 3319-3327
Interpretable Units
Overview of Explanation in Machine Learning (3)
●Computer Vision
Shapley Additive Explanation
Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-
4777
L-Shapley and C-Shapley (with graph structure)
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan: L-Shapley and C-
Shapley: Efficient Model Interpretation for Structured Data. ICLR 2019
instance-wise feature
importance (causal
influence)
Erik Štrumbelj and Igor Kononenko. An efficient
explanation of individual classifications using
game theory. Journal of Machine Learning
Research, 11:1–18, 2010.
Anupam Datta, Shayak Sen, and Yair Zick.
Algorithmic transparency via quantitative input
influence: Theory and experiments with
learning systems. In Security and Privacy (SP),
2016 IEEE Symposium on, pp. 598–617. IEEE,
2016.
Overview of Explanation in Different AI Fields (1)
●Game Theory
Conflicts resolution
Barry O'Sullivan, Alexandre Papadopoulos, Boi Faltings, Pearl Pu: Representative Explanations for
Over-Constrained Problems. AAAI 2007: 323-328
Constraints
relaxation
Ulrich Junker: QUICKXPLAIN: Preferred Explanations and
Relaxations for Over-Constrained Problems. AAAI 2004:
167-172
Robustness Computation
Hebrard, E., Hnich, B., & Walsh, T. (2004, July). Robust solutions for constraint satisfaction and
optimization. In ECAI (Vol. 16, p. 186).
If A+1, then new conflicts on X and Y
• Search and Constraint Satisfaction
Overview of Explanation in Different AI Fields (2)
Explaining Reasoning (through Justification) e.g., Subsumption
Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1)
1995: 816-821
Diagnosis Inference
Alban Grastien, Patrik Haslum, Sylvie Thiébaux: Conflict-Based
Diagnosis of Discrete Event Systems: Theory and Practice. KR
2012
Abduction Reasoning (in Bayesian
Network)
David Poole: Probabilistic Horn Abduction and Bayesian
Networks. Artif. Intell. 64(1): 81-129 (1993)
• Knowledge Representation and Reasoning
Overview of Explanation in Different AI Fields (3)
●Multi-agent Systems
Agent Strategy Summarization
Ofra Amir, Finale Doshi-Velez, David Sarne: Agent Strategy Summarization. AAMAS 2018: 1203-1207
Explanation of Agent Conflicts & Harmful
Interactions
Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A.
Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents
and Multi-Agent Systems 7(1-2): 29-48 (2003)
Explainable Agents
Joost Broekens, Maaike Harbers, Koen V. Hindriks, Karel van
den Bosch, Catholijn M. Jonker, John-Jules Ch. Meyer: Do
You Get It? User-Evaluated Explainable BDI Agents. MATES
2010: 28-39
W. Lewis Johnson: Agents that Learn to
Explain Themselves. AAAI 1994: 1257-
1263
• Multi-agent Systems
Overview of Explanation in Different AI Fields (4)
LIME for NLP
Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?":
Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144
Explainable NLP
Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative
Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018)
Fine-grained explanations are in the form of:
• texts in a real-world dataset;
• numerical scores
Hendrik Strobelt, Sebastian
Gehrmann, Michael Behrisch, Adam
Perer, Hanspeter Pfister, Alexander M.
Rush: Seq2seq-Vis: A Visual Debugging
Tool for Sequence-to-Sequence
Models. IEEE Trans. Vis. Comput.
Graph. 25(1): 353-363 (2019)
NLP Debugger
Hendrik Strobelt, Sebastian
Gehrmann, Hanspeter Pfister,
Alexander M. Rush: LSTMVis: A Tool
for Visual Analysis of Hidden State
Dynamics in Recurrent Neural
Networks. IEEE Trans. Vis. Comput.
Graph. 24(1): 667-676 (2018)
• NLP
Overview of Explanation in Different AI Fields (5)
●Planning and Scheduling
XAI Plan
Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner
Decisions. CoRR abs/1810.06338 (2018)
Human-in-the-loop Planning
Maria Fox, Derek Long, Daniele Magazzeni: Explainable Planning. CoRR
abs/1709.10256 (2017)
Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner
Decisions. CoRR abs/1810.06338 (2018)
(Manual) Plan Comparison
• Planning and Scheduling
Overview of Explanation in Different AI Fields (6)
Narration of Autonomous Robot Experience
Stephanie Rosenthal, Sai P Selvaraj, and Manuela Veloso. Verbalization: Narration of autonomous
robot experience. In IJCAI, pages 862–868. AAAI Press, 2016.
From Decision Tree to human-friendly
information
Raymond Ka-Man Sheh: "Why Did You Do That?" Explainable Intelligent
Robots. AAAI Workshops 2017
Daniel J Brooks et al. 2010. Towards State Summarization for Autonomous Robots.. In AAAI Fall
Symposium: Dialog with Robots, Vol. 61. 62.
Overview of Explanation in Different AI Fields (7)
• Robotics
Probabilistic Graphical Models
Daphne Koller, Nir Friedman: Probabilistic Graphical Models - Principles and Techniques. MIT
Press 2009, ISBN 978-0-262-01319-2, pp. I-XXXV, 1-1231
• Reasoning under Uncertainty
Overview of Explanation in Different AI Fields (8)
Explainable Machine Learning
(from a Machine Learning Perspective)
78
Achieving Explainable AI
Approach 1: Post-hoc explain a given AI model
● Individual prediction explanations in terms of input features, influential examples,
concepts, local decision rules
● Global prediction explanations in terms of entire model in terms of partial
dependence plots, global feature importance, global decision rules
Approach 2: Build an interpretable model
● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive
Models (GAMs)
79
Slide credit: https://twitter.com/chandan_singh96/status/1138811752769101825
Integrated
Gradients
Achieving Explainable AI
Approach 1: Post-hoc explain a given AI model
● Individual prediction explanations in terms of input features, influential examples,
concepts, local decision rules
● Global prediction explanations in terms of entire model in terms of partial
dependence plots, global feature importance, global decision rules
Approach 2: Build an interpretable model
● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive
Models (GAMs)
81
Top label: “clog”
Why did the network label
this image as “clog”?
82
Top label: “fireboat”
Why did the network label
this image as “fireboat”?
83
Credit Line Increase
Fair lending laws [ECOA, FCRA] require credit decisions to be explainable
Bank Credit Lending Model
Why? Why not?
How?
? Request Denied
Query AI System
Credit Lending Score = 0.3
Credit Lending in a black-box ML world
Attribute a model’s prediction on an input to features of the input
Examples:
● Attribute an object recognition network’s prediction to its pixels
● Attribute a text sentiment network’s prediction to individual words
● Attribute a lending model’s prediction to its features
A reductive formulation of “why this prediction” but surprisingly useful
The Attribution Problem
Application of Attributions
● Debugging model predictions
E.g., Attributing an image misclassification to the pixels responsible for it
● Generating an explanation for the end-user
E.g., Expose attributions for a lending prediction to the end-user
● Analyzing model robustness
E.g., Craft adversarial examples using weaknesses surfaced by attributions
● Extract rules from the model
E.g., Combine attribution to craft rules (pharmacophores) capturing prediction
logic of a drug screening network
86
Next few slides
We will cover the following attribution methods**
● Ablations
● Gradient based methods (specific to differentiable models)
● Score Backpropagation based methods (specific to NNs)
We will also discuss game theory (Shapley value) in attributions
**Not a complete list!
See Ancona et al. [ICML 2019], Guidotti et al. [arxiv 2018] for a comprehensive survey 87
Ablations
Drop each feature and attribute the change in prediction to that feature
Pros:
● Simple and intuitive to interpret
Cons:
● Unrealistic inputs
● Improper accounting of interactive features
● Can be computationally expensive
88
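A minimal sketch of ablation-based attribution for a tabular classifier, assuming a scikit-learn-style model with a predict_proba method and a simple zero baseline used to "drop" each feature; all names are illustrative and not part of the tutorial's code.

```python
import numpy as np

def ablation_attributions(model, x, baseline_value=0.0, target_class=1):
    """Attribute a prediction to each feature by replacing it with a baseline
    value and measuring the drop in the predicted probability."""
    x = np.asarray(x, dtype=float)
    original = model.predict_proba(x.reshape(1, -1))[0, target_class]
    attributions = np.zeros_like(x)
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = baseline_value          # "drop" feature i
        ablated = model.predict_proba(x_ablated.reshape(1, -1))[0, target_class]
        attributions[i] = original - ablated   # change in prediction
    return attributions
```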
Feature*Gradient
Attribution to a feature is feature value times gradient, i.e., x_i * ∂y/∂x_i
● Gradient captures sensitivity of output w.r.t. feature
● Equivalent to Feature*Coefficient for linear models
○ First-order Taylor approximation of non-linear models
● Popularized by SaliencyMaps [NIPS 2013], Baehrens et al. [JMLR 2010]
89
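A small sketch of Feature*Gradient for a differentiable model, written here with PyTorch as an assumed framework; model, x, and target_class are placeholders.

```python
import torch

def gradient_times_input(model, x, target_class):
    """Feature*Gradient attribution: x_i * dF/dx_i for a differentiable model.
    `model` maps a batch of inputs to class scores (logits)."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target_class]
    score.backward()                 # populates x.grad with dF/dx
    return (x * x.grad).detach()     # elementwise feature * gradient
```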
Gradients in the
vicinity of the input
seem like noise?
Local linear approximations can be too local
90
[Figure: prediction score ("fireboat-ness" of image) vs. scaling of the input from 0.0 to 1.0; gradients are interesting at low scaling but become uninteresting (saturated) as the score flattens near 1.0]
Score Back-Propagation based Methods
Re-distribute the prediction score through the neurons in the network
● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014]
Easy case: Output of a neuron is a linear function of previous neurons (i.e., n_i = Σ_j w_ij * n_j), e.g., the logit neuron
● Re-distribute the contribution in proportion to
the coefficients wij
91
Image credit heatmapping.org
Score Back-Propagation based Methods
Re-distribute the prediction score through the neurons in the network
● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014]
Tricky case: Output of a neuron is a non-linear
function, e.g., ReLU, Sigmoid, etc.
● Guided BackProp: Only consider ReLUs that
are on (linear regime), and which contribute
positively
● LRP: Use first-order Taylor decomposition to
linearize activation function
● DeepLift: Distribute the activation difference relative to a reference point in proportion to edge weights
Image credit heatmapping.org
Score Back-Propagation based Methods
Re-distribute the prediction score through the neurons in the network
● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014]
Pros:
● Conceptually simple
● Methods have been empirically validated to
yield sensible result
Cons:
● Hard to implement, requires instrumenting
the model
● Often breaks implementation invariance
Think: F(x, y, z) = x * y *z and
G(x, y, z) = x * (y * z)
Image credit heatmapping.org
Baselines and additivity
● When we decompose the score via backpropagation, we imply a normative
alternative called a baseline
○ “Why Pr(fireboat) = 0.91 [instead of 0.00]”
● Common choice is an informationless input for the model
○ E.g., Black image for image models
○ E.g., Empty text or zero embedding vector for text models
● Additive attributions explain F(input) - F(baseline) in terms of input features
[Figure: score vs. intensity, with interesting gradients at low intensity and saturation (uninteresting gradients) near 1.0. Bottom: baseline … scaled inputs … input, and the gradients computed at each scaled input.]
Another approach: gradients at many points
IG(input, base) ::= (input - base) * ∫₀¹ ∇F(α*input + (1-α)*base) dα
Original image Integrated Gradients
Integrated Gradients [ICML 2017]
Integrate the gradients along a straight-line path from baseline to input
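As a sketch, the path integral above can be approximated with a simple Riemann sum over scaled inputs; this assumes a differentiable PyTorch model and is not the authors' reference implementation.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate IG(x, baseline) = (x - baseline) * ∫ dF/dx along the
    straight-line path from baseline to x, using a Riemann sum."""
    total_grad = torch.zeros_like(x)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = (baseline + alpha * (x - baseline)).clone().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        score.backward()
        total_grad += point.grad
    return (x - baseline) * total_grad / steps
```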
Integrated Gradients in action
97
Original image “Clog”
Why is this image labeled as “clog”?
Original image Integrated Gradients
(for label “clog”)
“Clog”
Why is this image labeled as “clog”?
Detecting an architecture bug
● Deep network [Kearns, 2016] predicts if a molecule binds to certain DNA site
● Finding: Some atoms had identical attributions despite different connectivity
● Bug: The architecture had a bug due to which the convolved bond features
did not affect the prediction!
● Deep network predicts various diseases from chest x-rays
● Finding: Attributions fell on radiologist’s markings (rather than the pathology)
Original image
Integrated gradients
(for top label)
Detecting a data issue
Cooperative game theory in attributions
104
Classic result in game theory on distributing gain in a coalition game
● Coalition Games
○ Players collaborating to generate some gain (think: revenue)
○ Set function v(S) determining the gain for any subset S of players
● Shapley Values are a fair way to attribute the total gain to the players based on
their contributions
○ Concept: Marginal contribution of a player to a subset of other players (v(S U {i}) - v(S))
○ Shapley value for a player is a specific weighted aggregation of its marginal over all
possible subsets of other players
Shapley Value for player i = Σ_{S ⊆ N \ {i}} w(S) * (v(S ∪ {i}) - v(S))
(where w(S) = |S|! * (|N| - |S| - 1)! / |N|!)
Shapley Value [Annals of Mathematical studies, 1953]
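A brute-force sketch of the definition above: enumerate all subsets of the other players and average the weighted marginal contributions. The set function v and the player list are placeholders, and the cost is exponential in the number of players.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating subsets of the other players.
    `v` maps a frozenset of players to the gain of that coalition,
    e.g., shapley_values(['a', 'b'], lambda s: len(s) ** 2)."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                s = frozenset(subset)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (v(s | {i}) - v(s))   # weighted marginal contribution
        values[i] = phi
    return values
```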
Shapley values are unique under four simple axioms
● Dummy: If a player never contributes to the game then it must receive zero attribution
● Efficiency: Attributions must add to the total gain
● Symmetry: Symmetric players must receive equal attribution
● Linearity: Attribution for the (weighted) sum of two games must be the same as the
(weighted) sum of the attributions for each of the games
Shapley Value Justification
SHAP [NeurIPS 2017], QII [S&P 2016], Štrumbelj & Kononenko [JMLR 2010]
● Define a coalition game for each model input X
○ Players are the features in the input
○ Gain is the model prediction (output), i.e., gain = F(X)
● Feature attributions are the Shapley values of this game
Challenge: Shapley values require the gain to be defined for all subsets of players
● What is the prediction when some players (features) are absent?
i.e., what is F(x_1, <absent>, x_3, …, <absent>)?
Shapley Values for Explaining ML models
Key Idea: Take the expected prediction when the (absent) feature is sampled
from a certain distribution.
Different approaches choose different distributions
● [SHAP, NeurIPS 2017] Use conditional distribution w.r.t. the present features
● [QII, S&P 2016] Use marginal distribution
● [Štrumbelj et al., JMLR 2010] Use uniform distribution
Modeling Feature Absence
Preprint: The Explanation Game: Explaining Machine Learning Models with Cooperative Game
Theory
Exact Shapley value computation is exponential in the number of features
● Shapley values can be expressed as an expectation of marginals
φ(i) = E_{S ~ D} [marginal(S, i)]
● Sampling-based methods can be used to approximate the expectation
● See: “Computational Aspects of Cooperative Game Theory”, Chalkiadakis et al. 2011
● The method is still computationally infeasible for models with hundreds of
features, e.g., image models
Computing Shapley Values
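A sampling-based sketch in the spirit of the methods above: Shapley values estimated by averaging marginal contributions over random feature orderings, with absent features filled in from a background dataset (marginal sampling). Here predict is assumed to map a batch of rows to a 1-D array of scores, e.g., lambda X: model.predict_proba(X)[:, 1]; all names are illustrative.

```python
import numpy as np

def sampled_shapley(predict, x, background, num_samples=200, rng=None):
    """Monte Carlo Shapley attributions for one input `x`.
    Absent features are filled in from rows of `background`."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    phi = np.zeros(n)
    for _ in range(num_samples):
        order = rng.permutation(n)                             # random feature ordering
        z = background[rng.integers(len(background))].copy()   # start with all features "absent"
        prev = predict(z.reshape(1, -1))[0]
        for i in order:                                        # add features one by one
            z[i] = x[i]
            curr = predict(z.reshape(1, -1))[0]
            phi[i] += curr - prev                              # marginal contribution of feature i
            prev = curr
    return phi / num_samples
```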
● Values of Non-Atomic Games (1974): Aumann and Shapley extend their
method → players can contribute fractionally
● Aumann-Shapley values calculated by integrating along a straight-line path…
same as Integrated Gradients!
● IG through a game theory lens: continuous game, feature absence is modeled
by replacement with a baseline value
● Axiomatically justified as a result:
○ Integrated Gradients is the unique path-integral method satisfying: Sensitivity, Insensitivity,
Linearity preservation, Implementation invariance, Completeness, and Symmetry
Non-atomic Games: Aumann-Shapley Values and IG
Baselines (or Norms) are essential to explanations [Kahneman-Miller 86]
● E.g., A man suffers from indigestion. His doctor blames it on a stomach ulcer; his wife blames
it on eating turnips. Both are correct relative to their baselines.
● The baseline may also be an important analysis knob.
Attributions are contrastive, whether we think about it or not.
Lesson learned: baselines are important
Some limitations and caveats for attributions
Some things that are missing:
● Feature interactions (ignored or averaged out)
● What training examples influenced the prediction (training agnostic)
● Global properties of the model (prediction-specific)
An instance where attributions are useless:
● A model that predicts TRUE when there is an even number of black pixels and
FALSE otherwise
Attributions don’t explain everything
Attributions are for human consumption
Naive scaling of attributions
from 0 to 255
Attributions have a large
range and long tail
across pixels
After clipping attributions
at 99% to reduce range
● Humans interpret attributions and generate insights
○ Doctor maps attributions for x-rays to pathologies
● Visualization matters as much as the attribution technique
Other individual prediction explanation methods
Local Interpretable Model-agnostic Explanations
(Ribeiro et al. KDD 2016)
118
Figure credit: Anchors: High-Precision Model-Agnostic
Explanations. Ribeiro et al. AAAI 2018
Figure credit: Ribeiro et al. KDD 2016
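A simplified LIME-style sketch for tabular data, rather than the released lime package: perturb around the instance, weight samples by proximity, and fit a weighted linear surrogate whose coefficients act as the local explanation. The perturbation scale and kernel width are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_tabular(predict, x, num_samples=500, kernel_width=1.0, rng=None):
    """Fit a locally weighted linear surrogate around `x`.
    `predict` maps a batch of inputs to a 1-D array of scores."""
    rng = rng or np.random.default_rng(0)
    perturbed = x + rng.normal(scale=0.5, size=(num_samples, len(x)))
    scores = predict(perturbed)
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)    # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed, scores, sample_weight=weights)
    return surrogate.coef_    # local feature effects = explanation
```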
Anchors
119
Figure credit: Anchors: High-Precision Model-Agnostic Explanations. Ribeiro et al. AAAI 2018
Influence functions
● Trace a model’s prediction through the learning algorithm and
back to its training data
● Training points “responsible” for a given prediction
120
Figure credit: Understanding Black-box Predictions via Influence Functions. Koh and Liang. ICML 2017
Example based Explanations
121
● Prototypes: Representative of all the training data.
● Criticisms: Data instance that is not well represented by the set of prototypes.
Figure credit: Examples are not Enough, Learn to Criticize! Criticism for Interpretability. Kim, Khanna and Koyejo. NIPS 2016
Learned prototypes and criticisms from Imagenet dataset (two types of dog breeds)
Global Explanations
122
Global Explanations Methods
● Partial Dependence Plot: Shows
the marginal effect one or two
features have on the predicted
outcome of a machine learning model
123
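A minimal sketch of how a partial dependence curve can be computed for one feature: sweep that feature over a grid and average the model's predictions over the dataset (scikit-learn also ships a partial_dependence utility; this just spells out the idea). predict and X are placeholders.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid_size=20):
    """Average prediction as feature `feature` is swept over a grid,
    holding the rest of each row in X fixed (marginal effect)."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # force the feature to this grid value
        pd_values.append(predict(X_mod).mean())
    return grid, np.array(pd_values)
```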
Global Explanations Methods
● Permutations: The importance of a feature is the increase in the prediction error of the model
after we permuted the feature’s values, which breaks the relationship between the feature and the
true outcome.
124
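A sketch of permutation importance as described above, assuming a fitted classifier with a predict method and labeled data; scikit-learn's permutation_importance offers the same idea off the shelf.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, n_repeats=5, rng=None):
    """Importance of each feature = drop in accuracy after shuffling its column."""
    rng = rng or np.random.default_rng(0)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature-target relationship
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```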
Achieving Explainable AI
Approach 1: Post-hoc explain a given AI model
● Individual prediction explanations in terms of input features, influential examples,
concepts, local decision rules
● Global prediction explanations in terms of entire model in terms of partial
dependence plots, global feature importance, global decision rules
Approach 2: Build an interpretable model
● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive
Models (GAMs)
125
Decision Trees
126
Is the person fit?
Age < 30?
● Yes: Eats a lot of pizzas?
  ○ Yes: Unfit
  ○ No: Fit
● No: Exercises in the morning?
  ○ Yes: Fit
  ○ No: Unfit
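A short illustration of Approach 2 with a shallow scikit-learn decision tree whose learned rules can be printed directly; the breast-cancer dataset is used only as a stand-in.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Shallow tree: depth is capped so the extracted rules stay human-readable.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```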
Decision Set
127
Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach,
Leskovec
Decision Set
128
Decision List
129
Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach,
Leskovec
Falling Rule List
A falling rule list is an ordered list of if-then rules (falling rule lists are a type of
decision list), such that the estimated probability of success decreases
monotonically down the list. Thus, a falling rule list directly contains the decision-
making process, whereby the most at-risk observations are classified first, then
the second set, and so on.
130
Box Drawings for Rare Classes
131
Figure credit: Box Drawings for Learning with Imbalanced. Data Siong Thye Goh and Cynthia Rudin
Supersparse Linear Integer Models for Optimized
Medical Scoring Systems
Figure credit: Supersparse Linear Integer Models for Optimized Medical Scoring Systems. Berk Ustun and Cynthia Rudin
132
K- Nearest Neighbors
133
Explanation in terms of nearest training
data points responsible for the decision
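A sketch of this example-based view with scikit-learn: the k nearest training points are returned alongside the prediction as its explanation; function and variable names are illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier

def knn_explanation(X_train, y_train, x_query, k=5):
    """Explain a k-NN decision by returning the nearest training examples."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    prediction = knn.predict(x_query.reshape(1, -1))[0]
    _, idx = knn.kneighbors(x_query.reshape(1, -1))
    return prediction, X_train[idx[0]], y_train[idx[0]]   # neighbors act as evidence
```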
GLMs and GAMs
134
Intelligible Models for Classification and Regression. Lou, Caruana and Gehrke KDD 2012
Accurate Intelligible Models with Pairwise Interactions. Lou, Caruana, Gehrke and Hooker. KDD 2013
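One possible way to fit a GAM in Python is the third-party pygam package; the sketch below assumes pygam is installed and uses toy data, with each s(i) term being a smooth, individually inspectable shape function of feature i.

```python
import numpy as np
from pygam import LogisticGAM, s   # assumes the third-party `pygam` package is installed

# Toy data: the label depends non-linearly on two of three features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = ((np.sin(X[:, 0]) + X[:, 1] ** 2) > 1).astype(int)

# Each s(i) is a smooth shape function of feature i; the fitted per-feature
# curves are what make the model interpretable.
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)
gam.summary()
```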
Explainable Machine Learning
(from a Knowledge Graph Perspective)
135
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Knowledge Graph (1)
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Knowledge Graph (2)
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Knowledge Graph Construction
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
https://stats.stackexchange.com/questions/230581/decision-tree-too-large-to-interpret
Knowledge Graph in Machine Learning (1)
Augmenting (input) features
with more semantics such as
knowledge graph embeddings /
entities
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
https://stats.stackexchange.com/questions/230581/decision-tree-too-large-to-interpret
Knowledge Graph in Machine Learning (2)
Augmenting machine learning
models with more semantics
such as knowledge graphs
entities
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Training
Data
Input
(unlabeled
image)
Neurons respond
to simple shapes
Neurons respond to
more complex
structures
Neurons respond to
highly complex,
abstract concepts
1st Layer
2nd Layer
nth Layer
Low-level
features to
high-level
features
Knowledge Graph in Machine Learning (3)
Augmenting (intermediate)
features with more semantics
such as knowledge graph
embeddings / entities
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Training
Data
Input
(unlabeled
image)
Neurons respond
to simple shapes
Neurons respond to
more complex
structures
Neurons respond to
highly complex,
abstract concepts
1st Layer
2nd Layer
nth Layer
Low-level
features to
high-level
features
Knowledge Graph in Machine Learning (4)
Augmenting (input,
intermediate) features –
output relationship with more
semantics to capture causal
relationship
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Knowledge Graph in Machine Learning (5)
Description 1: This is an orange train accident
Description 2: This is a train accident between two speed
merchant trains of characteristics X43-B and Y33-C in a dry
environment
Description 3: This is a public transportation accident
Augmenting models with
semantics to support
personalized explanation
Freddy Lécué: On the role of knowledge graphs in
explainable AI. Semantic Web 11(1): 41-51 (2020)
Knowledge Graph in Machine Learning (6)
“How to explain transfer learning with appropriate knowledge representation?”
Knowledge-Based Transfer Learning Explanation. Huajun Chen (College of Computer Science, Zhejiang University, China; Alibaba-Zhejiang University Frontier Technology Research Center), Jiaoyan Chen (Department of Computer Science, University of Oxford, UK), Freddy Lecue (INRIA, France; Accenture Labs, Ireland), Jeff Z. Pan (Department of Computer Science, University of Aberdeen, UK), Ian Horrocks (Department of Computer Science, University of Oxford, UK). Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2018).
Augmenting input features and
domains with semantics to
support interpretable transfer
learning
How Does it Work in Practice?
State of the Art
Machine Learning
Applied to Critical
Systems
Object (Obstacle) Detection Task
Lumbermill - .59
Object (Obstacle) Detection Task: State-of-the-art ML Result
Lumbermill - .59
Object (Obstacle) Detection Task: State-of-the-art ML Result
Boulder - .09
Railway - .11
State of the Art
XAI
Applied to Critical
Systems
Lumbermill - .59
Object (Obstacle) Detection Task: State-of-the-art XAI Result
Unfortunately, this is of
NO use for a human
behind the system
Let’s stay back
Why this Explanation?
(meta explanation)
Object (Obstacle) Detection Task: State-of-the-art Result
Lumbermill - .59
After Human Reasoning…
Lumbermill - .59
What is missing?
Lumbermill - .59
Boulder - .09
Railway - .11
Context
matters
• Hardware: High-performance, scalable, generic (across different FPGA families) & portable CNN-dedicated programmable processor implemented on an FPGA for real-time embedded inference
• Software: Knowledge graph extension of object detection
Transitioning
This is an Obstacle: Boulder obstructing the train:
XG142-R on Rail_Track from City: Cannes to City:
Marseille at Location: Tunnel VIX due to Landslide
Tunnel - .74
Boulder - .81
Railway - .90
[Knowledge graph: Train operating on Rail Track; Boulder (an Obstacle) obstructing it; Tunnel; Landslide]
Freddy Lécué, Jiaoyan Chen, Jeff Z. Pan,
Huajun Chen: Augmenting Transfer
Learning with Semantic Reasoning. IJCAI
2019: 1779-1785
Freddy Lécué, Tanguy Pommellet: Feeding
Machine Learning with Knowledge Graphs
for Explainable Object Detection. ISWC
Satellites 2019: 277-280
Freddy Lécué, Baptiste Abeloos, Jonathan
Anctil, Manuel Bergeron, Damien Dalla-
Rosa, Simon Corbeil-Letourneau, Florian
Martet, Tanguy Pommellet, Laura Salvan,
Simon Veilleux, Maryam Ziaeefard: Thales
XAI Platform: Adaptable Explanation of
Machine Learning Systems - A Knowledge
Graphs Perspective. ISWC Satellites 2019:
315-316
Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian
Horrocks, Huajun Chen: Knowledge-Based
Transfer Learning Explanation. KR 2018:
349-358
Knowledge Graph in Machine Learning - An Implementation
XAI Case Studies in Industry:
Applications, Lessons Learned, and Research Challenges
160
Challenge: Object detection is usually performed from a
large portfolio of Artificial Neural Networks (ANNs)
architectures trained on large amount of labelled data.
Explaining object detections is rather difficult due to the
high complexity of the most accurate ANNs.
AI Technology: Integration of AI related technologies
i.e., Machine Learning (Deep Learning / CNNs), and
knowledge graphs / linked open data.
XAI Technology: Knowledge graphs and Artificial
Neural Networks
Explainable Boosted Object Detection – Industry Agnostic
Context
● Explanation in Machine Learning systems has been identified to be
the one asset to have for large scale deployment of Artificial
Intelligence (AI) in critical systems
● Explanations could be example-based (who is similar), features-based
(what is driving decision), or even counterfactual (what-if scenario) to
potentially action on an AI system; they could be represented in many
different ways e.g., textual, graphical, visual
Goal
● All representations serve different means, purpose and operators. We
designed the first-of-its-kind XAI platform for critical systems i.e., the
Thales Explainable AI Platform which aims at serving explanations
through various forms
Approach: Model-Agnostic
● [AI:ML] Grad-Cam, Shapley, Counter-factual, Knowledge graph
Thales XAI
Platform
Challenge: Designing Artificial Neural Network
architectures requires lots of experimentation
(i.e., training phases) and parameters tuning
(optimization strategy, learning rate, number of
layers…) to reach optimal and robust machine
learning models.
AI Technology: Artificial Neural Network
XAI Technology: Artificial Neural Network, 3D
Modeling and Simulation Platform For AI
Debugging Artificial Neural Networks – Industry Agnostic
Zetane.com
Challenge: Public transportation is getting more and more
self-driving vehicles. Even if trains are getting more and more
autonomous, the human stays in the loop for critical decision,
for instance in case of obstacles. In case of obstacles trains
are required to provide recommendation of action i.e., go on
or go back to station. In such a case the human is required to
validate the recommendation through an explanation exposed
by the train or machine.
AI Technology: Integration of AI related technologies i.e.,
Machine Learning (Deep Learning / CNNs), and semantic
segmentation.
XAI Technology: Deep learning and Epistemic uncertainty
Obstacle Identification Certification (Trust) - Transportation
Challenge: Predicting and explaining
aircraft engine performance
AI Technology: Artificial Neural Networks
XAI Technology: Shapley Values
Explaining Flight Performance- Transportation
Challenge: Globally 323,454 flights are delayed every year.
Airline-caused delays totaled 20.2 million minutes last year,
generating huge cost for the company. Existing in-house
technique reaches 53% accuracy for predicting flight delay,
does not provide any time estimation (in minutes as opposed
to True/False) and is unable to capture the underlying
reasons (explanation).
AI Technology: Integration of AI related technologies i.e.,
Machine Learning (Deep Learning / Recurrent neural
Network), Reasoning (through semantics-augmented case-
based reasoning) and Natural Language Processing for
building a robust model which can (1) predict flight delays in
minutes, (2) explain delays by comparing with historical
cases.
XAI Technology: Knowledge graph embedded Sequence Learning using LSTMs
Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358
Nicholas McCarthy, Mohammad Karzand, Freddy Lecue: Amsterdam to Dublin Eventually Delayed?
LSTM and Transfer Learning for Predicting Delays of Low Cost Airlines: AAAI 2019
Explainable On-Time Performance - Transportation
Challenge: Accenture is managing every year more than
80,000 opportunities and 35,000 contracts with an expected
revenue of $34.1 billion. Revenue expectation does not
meet estimation due to the complexity and risks of critical
contracts. This is, in part, due to the (1) large volume of
projects to assess and control, and (2) the existing non-
systematic assessment process.
AI Technology: Integration of AI technologies i.e., Machine
Learning, Reasoning, Natural Language Processing for
building a robust model which can (1) predict revenue loss,
(2) recommend corrective actions, and (3) explain why such
actions might have a positive impact.
XAI Technology: Knowledge graph embedded Random Forest
Jiewen Wu, Freddy Lécué, Christophe Guéret, Jer Hayes, Sara van de Moosdijk, Gemma
Gallagher, Peter McCanney, Eugene Eichelberger: Personalizing Actions in Context for Risk
Management Using Semantic Web Technologies. International Semantic Web Conference (2)
2017: 367-383
Explainable Risk Management - Finance
Challenge: Predicting and explaining abnormal employee expenses (such as high accommodation prices in 1,000+ cities).
AI Technology: Various techniques have matured over the last two decades to achieve excellent results. However, most methods address the problem from a statistical and purely data-centric angle, which in turn limits any interpretation. We built a web application running live with real data from (i) travel and expenses from Accenture, and (ii) external data from third parties such as the Google Knowledge Graph, DBpedia (the relational database version of Wikipedia) and social events from Eventful, for explaining abnormalities.
XAI Technology: Knowledge graph embedded Ensemble Learning
Freddy Lécué, Jiewen Wu: Explaining and predicting abnormal
expenses at large scale using knowledge graph based
reasoning. J. Web Sem. 44: 89-103 (2017)
Explainable Anomaly Detection – Finance (Compliance)
Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, Freddy Lécué: Interpretable Credit Application Predictions With Counterfactual Explanations.
FEAP-AI4fin workshop, NeurIPS, 2018.
Counterfactual Explanations for Credit Decisions (3) - Finance
Challenge: Explaining medical condition relapse in the
context of oncology.
AI Technology: Relational learning
XAI Technology: Knowledge graphs and Artificial
Neural Networks
Explanation of Medical Condition Relapse – Health
Knowledge graph
parts explaining
medical condition
relapse
Case Study:
Talent Platform
“Diversity Insights and Fairness-Aware Ranking”
Sahin Cem Geyik, Krishnaram
Kenthapadi
173
Guiding Principle: “Diversity by Design”
Insights to Identify Diverse Talent Pools
Representative Talent Search Results
Diversity Learning Curriculum
“Diversity by Design” in LinkedIn’s Talent Solutions
Plan for Diversity
Plan for Diversity
Identify Diverse Talent Pools
Inclusive Job Descriptions / Recruiter Outreach
Representative Ranking for Talent Search
S. C. Geyik, S. Ambler, K. Kenthapadi, Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search, KDD’19.
[Microsoft’s AI/ML conference (MLADS’18) Distinguished Contribution Award]
Building Representative Talent Search at LinkedIn (LinkedIn engineering blog)
Intuition for Measuring and Achieving Representativeness
Ideal: Top ranked results should follow a desired distribution on
gender/age/…
E.g., same distribution as the underlying talent pool
Inspired by “Equal Opportunity” definition [Hardt et al, NIPS’16]
Defined measures (skew, divergence) based on this intuition
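As a concrete illustration of this intuition, here is a minimal sketch (not the exact definition from the KDD’19 paper) of a skew-style measure comparing the observed proportion of an attribute value in the top-k results against its desired proportion:

```python
import math

def skew_at_k(ranked_attrs, attr_value, desired_prop, k):
    """Log-ratio of the observed proportion of `attr_value` in the top-k
    results to its desired proportion: 0 => representative,
    > 0 => over-represented, < 0 => under-represented."""
    top_k = ranked_attrs[:k]
    observed_prop = sum(1 for a in top_k if a == attr_value) / k
    eps = 1e-9  # guards against log(0) when a group is absent from the top-k
    return math.log((observed_prop + eps) / (desired_prop + eps))

# Hypothetical example: gender attribute of the top-10 ranked candidates.
ranking = ["M", "M", "F", "M", "M", "F", "M", "M", "M", "F"]
print(skew_at_k(ranking, "F", desired_prop=0.45, k=10))  # negative => under-represented
```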
Desired Proportions within the Attribute of Interest
Compute the proportions of the values of the attribute (e.g., gender,
gender-age combination) amongst the set of qualified candidates
● “Qualified candidates” = Set of candidates that match the
search query criteria
● Retrieved by LinkedIn’s Galene search engine
Desired proportions could also be obtained based on legal
mandate / voluntary commitment
Fairness-aware Reranking Algorithm (Simplified)
Partition the set of potential candidates into different buckets for
each attribute value
Rank the candidates in each bucket according to the scores
assigned by the machine-learned model
Merge the ranked lists, balancing the representation requirements
and the selection of highest scored candidates
Representation requirement: Desired distribution on gender/age/…
Algorithmic variants based on how we achieve this balance
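A minimal sketch of one possible greedy variant of this reranking scheme (illustrative only; the deployed algorithm and its variants are described in the KDD’19 paper):

```python
from collections import defaultdict
import math

def fairness_aware_rerank(candidates, desired_props, k):
    """Greedy re-ranking sketch. `candidates` is a list of (id, score, attr);
    `desired_props` maps each attribute value to its desired proportion in the top-k."""
    # 1. Partition candidates into per-attribute buckets, sorted by model score.
    buckets = defaultdict(list)
    for cand in candidates:
        buckets[cand[2]].append(cand)
    for attr in buckets:
        buckets[attr].sort(key=lambda c: c[1], reverse=True)

    reranked, counts = [], defaultdict(int)
    for pos in range(1, k + 1):
        # 2. Attribute values currently below their minimum target count.
        behind = [a for a, p in desired_props.items()
                  if buckets[a] and counts[a] < math.floor(p * pos)]
        # 3. Pick from a lagging bucket if any, otherwise take the best head overall.
        pool = behind if behind else [a for a in buckets if buckets[a]]
        if not pool:
            break
        best_attr = max(pool, key=lambda a: buckets[a][0][1])
        chosen = buckets[best_attr].pop(0)
        counts[best_attr] += 1
        reranked.append(chosen)
    return reranked
```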
Validating Our Approach
Gender Representativeness
● Over 95% of all searches are representative compared to the
qualified population of the search
Business Metrics
● A/B test over LinkedIn Recruiter users for two weeks
● No significant change in business metrics (e.g., # InMails sent
or accepted)
Ramped to 100% of LinkedIn Recruiter users worldwide
Lessons learned
• Post-processing approach desirable
• Model agnostic
• Scalable across different model choices
for our application
• Acts as a “fail-safe”
• Robust to application-specific business
logic
• Easier to incorporate as part of existing
systems
• Build a stand-alone service or component
for post-processing
• No significant modifications to the existing
components
• Complementary to efforts to reduce bias from
training data & during model training
• Collaboration/consensus across key stakeholders
Acknowledgements
LinkedIn Talent Solutions Diversity team, Hire & Careers AI team, Anti-abuse AI team, Data Science
Applied Research team
Special thanks to Deepak Agarwal, Parvez Ahammad, Stuart Ambler, Kinjal Basu, Jenelle Bray, Erik
Buchanan, Bee-Chung Chen, Patrick Cheung, Gil Cottle, Cyrus DiCiccio, Patrick Driscoll, Carlos Faham,
Nadia Fawaz, Priyanka Gariba, Meg Garlinghouse, Gurwinder Gulati, Rob Hallman, Sara Harrington,
Joshua Hartman, Daniel Hewlett, Nicolas Kim, Rachel Kumar, Nicole Li, Heloise Logan, Stephen Lynch,
Divyakumar Menghani, Varun Mithal, Arashpreet Singh Mor, Tanvi Motwani, Preetam Nandy, Lei Ni,
Nitin Panjwani, Igor Perisic, Hema Raghavan, Romer Rosales, Guillaume Saint-Jacques, Badrul Sarwar,
Amir Sepehri, Arun Swami, Ram Swaminathan, Grace Tang, Ketan Thakkar, Sriram Vasudevan,
Janardhanan Vembunarayanan, James Verbus, Xin Wang, Hinkmond Wong, Ya Xu, Lin Yang, Yang Yang,
Chenhui Zhai, Liang Zhang, Yani Zhang
Engineering for Fairness in AI Lifecycle
Problem
Formation
Dataset
Construction
Algorithm
Selection
Training
Process
Testing
Process
Deployment
Feedback
Is an algorithm an
ethical solution to our
problem?
Does our data include enough
minority samples?
Are there missing/biased
features?
Do we need to apply debiasing
algorithms to preprocess our
data?
Do we need to include fairness
constraints in the function?
Have we evaluated the model
using relevant fairness
metrics?
Are we deploying our model
on a population that we did
not train/test on?
Are there unequal effects
across users?
Does the model encourage
feedback loops that can
produce increasingly unfair
outcomes?
Credit: K. Browne & J. Draper
Engineering for Fairness in AI Lifecycle
S.Vasudevan, K. Kenthapadi, FairScale: A Scalable Framework for Measuring Fairness in AI Applications, 2019
FairScale System Architecture [Vasudevan & Kenthapadi, 2019]
• Flexibility of Use
(Platform agnostic)
• Ad-hoc exploratory
analyses
• Deployment in offline
workflows
• Integration with ML
Frameworks
• Scalability
• Diverse fairness
metrics
• Conventional fairness
metrics
• Benefit metrics
• Statistical tests
Fairness-aware experimentation
[Saint-Jacques and Sepehri, KDD’19 Social Impact Workshop]
Imagine LinkedIn has 10 members.
Each of them has 1 session a day.
A new product increases sessions by +1 session per member on average.
Both of these scenarios are +1 session / member on average!
One is much more unequal than the other. We want to catch that.
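To make the intuition concrete, here is a small sketch with a hypothetical even vs. concentrated split of the extra sessions, using the Gini coefficient as one example inequality measure (the workshop paper may use a different index):

```python
def gini(values):
    """Gini coefficient: 0 = perfectly equal, -> 1 = maximally unequal."""
    vals = sorted(values)
    n, total = len(vals), sum(vals)
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical scenarios for 10 members, each starting at 1 session/day.
evenly_spread = [2] * 10        # everyone gains +1 session
concentrated = [1] * 9 + [11]   # one member gains +10 sessions
# Both average +1 session/member, but inequality differs sharply.
print(gini(evenly_spread), gini(concentrated))  # 0.0 vs 0.45
```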
Case Study:
Talent Search
Varun Mithal, Girish Kathalagiri, Sahin Cem Geyik
191
LinkedIn Recruiter
● Recruiter Searches for Candidates
○ Standardized and free-text search criteria
● Retrieval and Ranking
○ Filter candidates using the criteria
○ Rank candidates in multiple levels using ML
models
192
Modeling Approaches
● Pairwise XGBoost
● GLMix
● DNNs via TensorFlow
● Optimization Criteria: inMail Accepts
○ Positive: inMail sent by recruiter, and positively responded by candidate
■ Mutual interest between the recruiter and the candidate
193
Feature Importance in XGBoost
194
How We Utilize Feature Importances for GBDT
● Understanding feature digressions (see the sketch after this slide)
○ Which features that were impactful are no longer so?
○ Should we debug feature generation?
● Introducing new features in bulk and identifying effective ones
○ Activity features for the last 3, 6, 12, and 24 hours were introduced (costly to compute)
○ Should we keep all such features?
● Separating the factors that caused an improvement
○ Did an improvement come from a new feature, a new labeling strategy, or a new data source?
○ Did the ordering between features change?
● Shortcoming: A global view, not case by case
195
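A minimal sketch of how such feature-importance digressions could be detected by comparing per-feature gain between two model versions (function and variable names are illustrative; it assumes two trained xgboost Booster objects):

```python
def importance_digressions(old_model, new_model, top_n=20):
    """Compare per-feature gain between two xgboost.Booster objects to spot
    features whose importance dropped sharply (candidates for debugging
    feature generation)."""
    old_gain = old_model.get_score(importance_type="gain")
    new_gain = new_model.get_score(importance_type="gain")
    digressions = []
    for feat, gain in old_gain.items():
        drop = gain - new_gain.get(feat, 0.0)
        if drop > 0:
            digressions.append((feat, drop))
    return sorted(digressions, key=lambda x: x[1], reverse=True)[:top_n]

# Hypothetical usage with two consecutive model pushes:
# for feat, drop in importance_digressions(booster_v1, booster_v2):
#     print(feat, drop)
```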
GLMix Models
● Generalized Linear Mixed Models
○ Global: Linear Model
○ Per-contract: Linear Model
○ Per-recruiter: Linear Model
● Lots of parameters overall
○ For a specific recruiter or contract the weights can be summed up
● Inherently explainable
○ Contribution of a feature is “weight x feature value”
○ Can be examined in a case-by-case manner as well
196
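A minimal sketch of the GLMix-style case-by-case explanation described above, where the effective weight of a feature is the sum of the global, per-contract, and per-recruiter coefficients and its contribution is weight × feature value (all names and values below are hypothetical):

```python
def glmix_contributions(features, global_w, contract_w, recruiter_w):
    """Per-feature contributions for one (recruiter, contract, candidate) scoring:
    effective weight = global + per-contract + per-recruiter coefficient,
    contribution = effective weight * feature value."""
    contributions = {}
    for name, value in features.items():
        w = (global_w.get(name, 0.0)
             + contract_w.get(name, 0.0)
             + recruiter_w.get(name, 0.0))
        contributions[name] = w * value
    # Sort by absolute contribution for a case-by-case explanation.
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical example:
feats = {"title_match": 1.0, "skill_overlap": 0.6, "years_experience": 8.0}
print(glmix_contributions(feats,
                          global_w={"title_match": 1.2, "skill_overlap": 0.8},
                          contract_w={"years_experience": 0.05},
                          recruiter_w={"skill_overlap": -0.1}))
```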
TensorFlow Models in Recruiter and Explaining Them
● We utilize the Integrated Gradients [ICML 2017] method
● How do we determine the baseline example?
○ Every query creates its own feature values for the same candidate
○ Query match features, time-based features
○ Recruiter affinity, and candidate affinity features
○ A candidate would be scored differently by each query
○ Cannot recommend a “Software Engineer” to a search for a “Forensic Chemist”
○ There is no globally neutral example for comparison!
197
Query-Specific Baseline Selection
● For each query:
○ Score examples by the TF model
○ Rank examples
○ Choose one example as the baseline
○ Compare others to the baseline example
● How to choose the baseline example
○ Last candidate
○ Kth percentile in ranking
○ A random candidate
○ Request by user (answering a question like: “Why was I presented candidate x above
candidate y?”)
198
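A minimal TensorFlow 2 sketch of Integrated Gradients with a query-specific baseline (e.g., the lowest-ranked or a user-chosen candidate for the same query); this is illustrative, not the production implementation:

```python
import tensorflow as tf

def integrated_gradients(model, x, baseline, steps=50):
    """Approximate Integrated Gradients of model(x) w.r.t. its features,
    relative to a chosen baseline example for the same query."""
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1))
    interpolated = baseline + alphas * (x - baseline)   # path from baseline to x
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)
    grads = tape.gradient(preds, interpolated)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)  # trapezoid rule
    return (x - baseline) * avg_grads   # per-feature attribution of the score difference

# Hypothetical usage: explain why candidate x outranked candidate y for a query.
# attributions = integrated_gradients(scoring_model,
#                                     x=tf.constant(candidate_x_feats, tf.float32),
#                                     baseline=tf.constant(candidate_y_feats, tf.float32))
```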
Example
199
Example - Detailed
200
Feature Description Difference (1 vs 2) Contribution
Feature………. Description………. -2.0476928 -2.144455602
Feature………. Description………. -2.3223877 1.903594618
Feature………. Description………. 0.11666667 0.2114946752
Feature………. Description………. -2.1442587 0.2060414469
Feature………. Description………. -14 0.1215354111
Feature………. Description………. 1 0.1000282466
Feature………. Description………. -92 -0.085286277
Feature………. Description………. 0.9333333 0.0568533262
Feature………. Description………. -1 -0.051796317
Feature………. Description………. -1 -0.050895940
Pros & Cons
● Explains potentially very complex models
● Case-by-case analysis
○ Why do you think candidate x is a better match for my position?
○ Why do you think I am a better fit for this job?
○ Why am I being shown this ad?
○ Great for debugging real-time problems in production
● Global view is missing
○ Aggregate Contributions can be computed
○ Could be costly to compute
201
Lessons Learned and Next Steps
● Global explanations vs. Case-by-case Explanations
○ Global gives an overview, better for making modeling decisions
○ Case-by-case could be more useful for the non-technical user, better for debugging
● Integrated gradients worked well for us
○ Complex models make it harder for developers to map improvement to effort
○ Use-case gave intuitive results, on top of completely describing score differences
● Next steps
○ Global explanations for Deep Models
202
Case Study:
Model Interpretation for Predictive Models in B2B
Sales Predictions
Jilei Yang, Wei Di, Songtao Guo
203
Problem Setting
● Predictive models in B2B sales prediction
○ E.g.: random forest, gradient boosting, deep neural network, …
○ High accuracy, low interpretability
● Global feature importance → Individual feature reasoning
204
Example
205
Revisiting LIME
● Given a target sample x_k, approximate its prediction pred(x_k) by building a
sample-specific linear model:
pred(X) ≈ β_k1 · X_1 + β_k2 · X_2 + …,  X ∈ neighbor(x_k)
● E.g., for company CompanyX:
0.76 ≈ 1.82 ∗ 0.17 + 1.61 ∗ 0.11 + …
206
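For reference, a minimal sketch of such a LIME-style local linear surrogate (Gaussian neighborhood sampling plus proximity-weighted ridge regression; the original LIME implementation differs in its sampling and weighting details):

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_surrogate(predict_fn, x_k, num_samples=5000, scale=1.0):
    """Sample a neighborhood around x_k, weight samples by proximity, and fit
    a linear surrogate whose coefficients locally explain pred(x_k).
    `predict_fn` is the black-box model and returns a numpy array of scores."""
    rng = np.random.default_rng(0)
    X = x_k + rng.normal(0.0, scale, size=(num_samples, x_k.shape[0]))
    y = predict_fn(X)
    weights = np.exp(-np.linalg.norm(X - x_k, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
    return surrogate.coef_  # local betas, one per feature
```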
xLIME
Piecewise Linear
Regression
Localized Stratified
Sampling
207
Piecewise Linear Regression
Motivation: Separate top positive feature influencers and top negative feature influencers
208
Impact of Piecewise Approach
● Target sample x_k = (x_k1, x_k2, ⋯)
● Top feature contributor
○ LIME: large magnitude of β_kj · x_kj
○ xLIME: large magnitude of β⁻_kj · x_kj
● Top positive feature influencer
○ LIME: large magnitude of β_kj
○ xLIME: large magnitude of negative β⁻_kj or positive β⁺_kj
● Top negative feature influencer
○ LIME: large magnitude of β_kj
○ xLIME: large magnitude of positive β⁻_kj or negative β⁺_kj
209
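A minimal sketch of the piecewise idea for a single feature j: perturb only that feature and fit separate slopes below and above the target value, yielding the β⁻ and β⁺ used above (illustrative; xLIME’s actual fitting procedure is more involved):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def piecewise_local_slopes(predict_fn, x_k, j, num_samples=2000, scale=1.0):
    """Fit separate local slopes for feature j below (beta_minus) and above
    (beta_plus) the target value x_k[j]. `predict_fn` returns a numpy array."""
    rng = np.random.default_rng(0)
    X = np.tile(x_k.astype(float), (num_samples, 1))
    X[:, j] += rng.normal(0.0, scale, size=num_samples)
    y = predict_fn(X)
    below, above = X[:, j] < x_k[j], X[:, j] >= x_k[j]
    beta_minus = LinearRegression().fit(X[below, j:j + 1], y[below]).coef_[0]
    beta_plus = LinearRegression().fit(X[above, j:j + 1], y[above]).coef_[0]
    return beta_minus, beta_plus  # used to rank positive / negative influencers
```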
Localized Stratified Sampling: Idea
Method: Sampling based on empirical distribution around target value at each feature level
210
Localized Stratified Sampling: Method
● Sampling based on empirical distribution around target value for each feature
● For target sample x_k = (x_k1, x_k2, ⋯), sample values of feature j according to
p_j(X_j) · N(x_kj, (α · s_j)²)
○ p_j(X_j): empirical distribution of feature j.
○ x_kj: feature value in the target sample.
○ s_j: standard deviation of feature j.
○ α: interpretable range; trade-off between interpretable coverage and local accuracy.
● In LIME, sampling is done according to N(x_j, s_j²).
211
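A minimal sketch of localized stratified sampling for one feature: resample observed values of feature j with probabilities proportional to a Gaussian kernel centered at the target value (illustrative; here the empirical distribution p_j is represented by the observed training values):

```python
import numpy as np

def localized_stratified_sample(feature_values, x_kj, alpha=0.5, n=1000, seed=0):
    """Sample feature-j values from the empirical distribution, re-weighted by
    a Gaussian kernel N(x_kj, (alpha * s_j)^2) centered at the target value."""
    rng = np.random.default_rng(seed)
    s_j = feature_values.std()
    kernel = np.exp(-(feature_values - x_kj) ** 2 / (2 * (alpha * s_j) ** 2))
    probs = kernel / kernel.sum()
    return rng.choice(feature_values, size=n, replace=True, p=probs)

# Hypothetical usage: feature_values = training-set column j (numpy array),
# x_kj = the target company's value for feature j.
```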
Summary
212
LTS LCP (LinkedIn Career Page) Upsell
● A subset of churn data
○ Total Companies: ~ 19K
○ Company features: 117
● Problem: Estimate whether there will be upsell given a set of features about
the company’s utility from the product
213
Top Feature Contributor
214
215
Top Feature Influencers
216
Key Takeaways
● Looking at the explanation as contributor vs. influencer features is useful
○ Contributor: which features led to the current outcome, case by case
○ Influencer: what needs to change to improve the likelihood, case by case
● xLIME aims to improve on LIME via:
○ Piecewise linear regression: More accurately describes local point, helps with finding correct
influencers
○ Localized stratified sampling: More realistic set of local points
● Better captures the important features
217
Case Study:
Relevance Debugging and Explaining @ LinkedIn
Daniel Qiu, Yucheng Qian
218
Debugging Relevance Models
219
Architecture
220
What Could Go Wrong?
221
Challenges
222
Solution
223
Call Graph
224
Timing
225
Features
226
Advanced Use Cases
227
Perturbation
228
Comparison
229
Holistic Comparison
230
Granular Comparison
231
Replay
232
Teams
● Search
● Feed
● Comments
● People you may know
● Jobs you may be interested in
● Notification
233
Case Study:
Building an Explainable AI Engine @ Fiddler
Luke Merrick
234
All your data
Any data warehouse
Custom Models
Fiddler Modeling Layer
Explainable AI for everyone
APIs, Dashboards, Reports, Trusted Insights
Fiddler’s Explainable AI Engine
Mission: Unlock Trust, Visibility and Insights by making AI Explainable in every enterprise
Credit Line Increase
Fair lending laws [ECOA, FCRA] require credit decisions to be explainable
Bank Credit Lending Model
Why? Why not? How?
? Request Denied
Query AI System
Credit Lending Score = 0.3
Example: Credit Lending in a black-box ML world
How Can This Help…
Customer Support
Why was a customer loan
rejected?
Bias & Fairness
How is my model doing
across demographics?
Lending LOB
What variables should they
validate with customers on
“borderline” decisions?
Explain individual predictions (using Shapley Values)
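For illustration, one way to produce such Shapley-value attributions for an individual credit decision with the open-source shap package (a sketch; not necessarily how Fiddler computes them internally, and the model and feature names are hypothetical):

```python
import shap

def explain_decision(model, applicant):
    """`model`: a trained tree-based classifier (e.g., xgboost) on tabular
    credit features; `applicant`: a single-row pandas DataFrame with the
    same feature columns."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(applicant)   # per-feature attributions
    baseline = explainer.expected_value              # average model output
    contributions = dict(zip(applicant.columns, shap_values[0]))
    # Features with the most negative contributions "explain" a denial.
    return baseline, sorted(contributions.items(), key=lambda kv: kv[1])
```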
Probe the model on counterfactuals
How Can This Help…
Customer Support
Why was a customer loan
rejected?
Why was the credit card limit
low?
Why was this transaction
marked as fraud?
Integrating explanations
How Can This Help…
Global Explanations
What are the primary feature drivers of my model across the whole dataset?
Region Explanations
How does my model
perform on a certain slice?
Where does the model not
perform well? Is my model
uniformly fair across slices?
Slice & Explain
Model Monitoring: Feature Drift
Investigate Data Drift Impacting Model Performance
Time slice
Feature distribution for
time slice relative to
training distribution
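One common way to quantify such drift between a feature's training distribution and a live time slice is the Population Stability Index; a minimal sketch follows (the monitoring product may use different metrics):

```python
import numpy as np

def population_stability_index(train_col, live_col, bins=10):
    """PSI between a feature's training distribution and a live time slice;
    values above ~0.2 are commonly treated as significant drift."""
    edges = np.histogram_bin_edges(train_col, bins=bins)
    expected, _ = np.histogram(train_col, bins=edges)
    actual, _ = np.histogram(live_col, bins=edges)
    expected = expected / expected.sum() + 1e-6
    actual = actual / actual.sum() + 1e-6
    return float(np.sum((actual - expected) * np.log(actual / expected)))
```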
How Can This Help…
Operations
Why are there outliers in
model predictions? What
caused model performance
to go awry?
Data Science
How can I improve my ML
model? Where does it not do
well?
Model Monitoring: Outliers with Explanations
Outlier
Individual Explanations
Some lessons learned at Fiddler
● Attributions are contrastive to their baselines
● Explaining explanations is important (e.g. good UI)
● In practice, we face engineering challenges as much as
theoretical challenges
244
Recap
● Part I: Introduction and Motivation
○ Motivation, Definitions & Properties
○ Evaluation Protocols & Metrics
● Part II: Explanation in AI (not only Machine Learning!)
○ From Machine Learning to Knowledge Representation and Reasoning and Beyond
● Part III: Explainable Machine Learning (from a Machine Learning Perspective)
● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective)
● Part V: XAI Tools on Applications, Lessons Learnt and Research Challenges
245
Challenges & Tradeoffs
246
Trade-offs among Transparency, User Privacy, Fairness, and Performance
● Lack of standard interface for ML models
makes pluggable explanations hard
● Explanation needs vary depending on the type of user and the problem at hand.
● The algorithm you employ for explanations
might depend on the use-case, model type,
data format, etc.
● There are trade-offs w.r.t. Explainability,
Performance, Fairness, and Privacy.
Explainability in ML: Broad Challenges
Actionable explanations
Balance between explanations & model secrecy
Robustness of explanations to failure modes (Interaction between ML
components)
Application-specific challenges
Conversational AI systems: contextual explanations
Gradation of explanations
Tools for explanations across AI lifecycle
Pre & post-deployment for ML models
Model developer vs. End user focused
Thanks! Questions?
● Feedback most welcome :-)
○ freddy.lecue@inria.fr, krishna@fiddler.ai, sgeyik@linkedin.com,
kenthk@amazon.com, vamithal@linkedin.com, ankur@fiddler.ai,
luke@fiddler.ai, p.minervini@ucl.ac.uk, riccardo.guidotti@unipi.it
● Tutorial website: https://xaitutorial2020.github.io
● To try Fiddler, please send an email to info@fiddler.ai
● To try the Thales XAI Platform, please send an email to freddy.lecue@thalesgroup.com
https://xaitutorial2020.github.io 248
Explainable AI in Industry (AAAI 2020 Tutorial)

  • 1. Explainable AI in Industry AAAI 2020 Tutorial Freddy Lecue, Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Luke Merrick, Varun Mithal, Ankur Taly, Riccardo Guidotti, Pasquale Minervini https://xaitutorial2020.github.io 1
  • 3. Agenda ● Part I: Introduction and Motivation ○ Motivation, Definitions, Properties, Evaluation ○ Challenges for Explainable AI @ Scale ● Part II: Explanation in AI (not only Machine Learning!) ○ From Machine Learning to Knowledge Representation and Reasoning and Beyond ● Part III: Explainable Machine Learning (from a Machine Learning Perspective) ● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective) ● Part V: Case Studies from Industry ○ Applications, Lessons Learned, and Research Challenges 3
  • 5. AI Adoption: Requirements Trustable AI Valid AI Responsible AI Privacy- preserving AI Explainable AI • Human Interpretable AI • Machine Interpretable AI What is the rational?
  • 7. Explanation - From a Business Perspective 7
  • 11. COMPAS recidivism black bias … but not only Critical Systems (1)
  • 13. Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, Noemie Elhadad: Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. KDD 2015: 1721-1730 Patricia Hannon ,https://med.stanford.edu/news/all-news/2018/03/researchers-say-use-of-ai-in-medicine- raises-ethical-questions.html ▌Healthcare • Applying ML methods in medical care is problematic. • AI as 3rd-party actor in physician- patient relationship • Responsibility, confidentiality? • Learning must be done with available data. Cannot randomize cares given to patients! • Must validate models before use. … but not only Critical Systems (3)
  • 14. Black-box AI creates business risk for Industry
  • 15. Internal Audit, Regulators IT & Operations Data Scientists Business Owner Can I trust our AI decisions? Are these AI system decisions fair? Customer Support How do I answer this customer complaint? How do I monitor and debug this model? Is this the best model that can be built? Black-box AI Why I am getting this decision? How can I get a better decision? Poor Decision Black-box AI creates confusion and doubt
  • 16. Explanation - From a Model Perspective 16
  • 17. Why Explainability: Debug (Mis-)Predictions 17 Top label: “clog” Why did the network label this image as “clog”?
  • 18. 18 Why Explainability: Improve ML Model Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
  • 19. Why Explainability: Verify the ML Model / System 19 Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
  • 20. 20 Why Explainability: Learn New Insights Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
  • 21. 21 Why Explainability: Learn Insights in the Sciences Credit: Samek, Binder, Tutorial on Interpretable ML, MICCAI’18
  • 22. Explanation - From a Regulatory Perspective 22
  • 23. Immigration Reform and Control Act Citizenship Rehabilitation Act of 1973; Americans with Disabilities Act of 1990 Disability status Civil Rights Act of 1964 Race Age Discrimination in Employment Act of 1967 Age Equal Pay Act of 1963; Civil Rights Act of 1964 Sex And more... Why Explainability: Laws against Discrimination 23
  • 25. GDPR Concerns Around Lack of Explainability in AI “ Companies should commit to ensuring systems that could fall under GDPR, including AI, will be compliant. The threat of sizeable fines of €20 million or 4% of global turnover provides a sharp incentive. Article 22 of GDPR empowers individuals with the right to demand an explanation of how an AI system made a decision that affects them. ” - European Commision VP, European Commision
  • 28. Why Explainability: Growing Global AI Regulation ● GDPR: Article 22 empowers individuals with the right to demand an explanation of how an automated system made a decision that affects them. ● Algorithmic Accountability Act 2019: Requires companies to provide an assessment of the risks posed by the automated decision system to the privacy or security and the risks that contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers ● California Consumer Privacy Act: Requires companies to rethink their approach to capturing, storing, and sharing personal data to align with the new requirements by January 1, 2020. ● Washington Bill 1655: Establishes guidelines for the use of automated decision systems to protect consumers, improve transparency, and create more market predictability. ● Massachusetts Bill H.2701: Establishes a commission on automated decision-making, transparency, fairness, and individual rights. ● Illinois House Bill 3415: States predictive data analytics determining creditworthiness or hiring decisions may not include information that correlates with the applicant race or zip code.
  • 29. 29 SR 11-7 and OCC regulations for Financial Institutions
  • 30. Model Diagnostics Root Cause Analytics Performance monitoring Fairness monitoring Model Comparison Cohort Analysis Explainable Decisions API Support Model Launch Signoff Model Release Mgmt Model Evaluation Compliance Testing Model Debugging Model Visualization Explainable AI Train QA Predict Deploy A/B Test Monitor Debug Feedback Loop “Explainability by Design” for AI products
  • 31. AI @ Scale - Challenges for Explainable AI 31
  • 32. LinkedIn operates the largest professional network on the Internet 645M+ members 30M+ companies are represented on LinkedIn 90K+ schools listed (high school & college) 35K+ skills listed 20M+ open jobs on LinkedIn Jobs 280B Feed updates
  • 33. 33© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved | The AWS ML Stack Broadest and most complete set of Machine Learning capabilities VISION SPEECH TEXT SEARCH NEW CHATBOTS PERSONALIZATION FORECASTING FRAUD NEW DEVELOPMENT NEW CONTACT CENTERS NEW Amazon SageMaker Ground Truth Augmented AI SageMaker Neo Built-in algorithms SageMaker Notebooks NEW SageMaker Experiments NEW Model tuning SageMaker Debugger NEW SageMaker Autopilot NEW Model hosting SageMaker Model Monitor NEW Deep Learning AMIs & Containers GPUs & CPUs Elastic Inference Inferentia FPGA Amazon Rekognition Amazon Polly Amazon Transcribe +Medical Amazon Comprehend +Medical Amazon Translate Amazon Lex Amazon Personalize Amazon Forecast Amazon Fraud Detector Amazon CodeGuru AI SERVICES ML SERVICES ML FRAMEWORKS & INFRASTRUCTURE Amazon Textract Amazon Kendra Contact Lens For Amazon Connect SageMaker Studio IDE NEW NEW NEW NEW NE W
  • 34. Explanation - In a Nutshell 34
  • 35. What is Explainable AI? Data Black-Box AI AI product Confusion with Today’s AI Black Box ● Why did you do that? ● Why did you not do that? ● When do you succeed or fail? ● How do I correct an error? Black Box AI Decision, Recommendation Clear & Transparent Predictions ● I understand why ● I understand why not ● I know why you succeed or fail ● I understand, so I trust you Explainable AI Data Explainable AI Explainable AI Product Decision Explanation Feedback
  • 36. - Humans may have follow-up questions - Explanations cannot answer all users’ concerns Weld, D., and Gagan Bansal. "The challenge of crafting intelligible intelligence." Communications of ACM (2018). Example of an End-to-End XAI System
  • 37. Neural Net CNNGAN RNN Ensemble Method Random Forest XGB Statistical Model AOG SVM Graphical Model Bayesian Belief Net SLR CRF HBN MLN Markov Model Decision Tree Linear Model Non-Linear functions Polynomial functions Quasi-Linear functions Accuracy Explainability InterpretabilityLearning • Challenges: • Supervised • Unsupervised learning • Approach: • Representation Learning • Stochastic selection • Output: • Correlation • No causation How to Explain? Accuracy vs. Explainability
  • 38. Oxford Dictionary of English XAI Definitions - Explanation vs. Interpretation
  • 40. KDD 2019 Tutorial on Explainable AI in Industry - https://sites.google.com/view/kdd19-explainable-ai-tutorial Evaluation (1) - Perturbation-based Approaches
  • 41. Evaluation criteria for Explanations [Miller, 2017] ○ Truth & probability ○ Usefulness, relevance ○ Coherence with prior belief ○ Generalization Cognitive chunks = basic explanation units (for different explanation needs) ○ Which basic units for explanations? ○ How many? ○ How to compose them? ○ Uncertainty & end users? [Doshi-Velez and Kim 2017, Poursabzi-Sangdeh 18] Evaluation (2) - Human (Role)-based Evaluation is Essential… but too often based on size!
  • 42. Comprehensibilit y How much effort for correct human interpretation? Succinctness How concise and compact is the explanation? Actionability What can one action, do with the explanation? Reusability Could the explanation be personalized? Accuracy How accurate and precise is the explanation? Completeness Is the explanation complete, partial, restricted? Source: Accenture Point of View. Understanding Machines: Explainable AI. Freddy Lecue, Dadong Wan Evaluation (3) - XAI: One Objective, Many Metrics
  • 43. Explanation in AI (not only Machine Learning!) 43
  • 45. How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Machine Learning Computer Vision Search Planning KRR NLP Game Theory MAS Robotics Artificial Intelligence UAI XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 46. Which features are responsible of classification? Computer Vision Search Planning KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 47. Which features are responsible of classification? Computer Vision Search Planning KRR NLP Game Theory Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Which complex features are responsible of classification? Saliency Map MAS Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 48. Which features are responsible of classification? Computer Vision Search Planning KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 49. Which actions are responsible of a plan? Which features are responsible of classification? Computer Vision Search KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Plan Refinement Planning Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 50. Which features are responsible of classification? Which actions are responsible of a plan? Which constraints can be relaxed? Conflicts Resolution Computer Vision Search KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Plan Refinement Planning Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 51. Which combination of features is optimal? Which features are responsible of classification? Which actions are responsible of a plan? Which constraints can be relaxed? Conflicts Resolution Computer Vision Search KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Plan Refinement Planning Shapely Values Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 52. Which combination of features is optimal? Which features are responsible of classification? Which actions are responsible of a plan? Which constraints can be relaxed? Conflicts Resolution Computer Vision Search KRR NLP Game Theory MAS Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Plan Refinement Planning Shapely Values Narrative-based Which decisions, combination of multimodal decisions lead to an action? Uncertainty Map XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 53. Which combination of features is optimal? Which features are responsible of classification? Which actions are responsible of a plan? Which constraints can be relaxed? Conflicts Resolution Computer Vision Search KRR NLP Game Theory Robotics UAI Surrogate Model Dependency Plot Feature Importance How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence Machine Learning Strategy Summarization Which complex features are responsible of classification? Saliency Map • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Plan Refinement Planning Shapely Values Narrative-based Which decisions, combination of multimodal decisions lead to an action? Which entity is responsible for classification? Machine Learning based Uncertainty Map MAS XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 54. Which complex features are responsible of classification? Which actions are responsible of a plan? Which entity is responsible for classification? Which combination of features is optimal? Which constraints can be relaxed? Which features are responsible of classification? Machine Learning Computer Vision Search Planning KRR NLP Game Theory MAS Surrogate Model Dependency Plot Feature Importance Shapely Values Uncertainty Map Saliency Map Conflicts Resolution Abduction Diagnosis Plan Refinement Strategy Summarization Machine Learning based Narrative-based Robotics • Which axiom is responsible of inference (e.g., classification)? • Abduction/Diagnostic: Find the right root causes (abduction)? How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Which decisions, combination of multimodal decisions lead to an action? UAI XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 55. Uncertainty as an alternative to explanation Which complex features are responsible of classification? Which actions are responsible of a plan? Which entity is responsible for classification? Which combination of features is optimal? Which constraints can be relaxed? Which features are responsible of classification? Machine Learning Computer Vision Search Planning KRR NLP Game Theory MAS Surrogate Model Dependency Plot Feature Importance Shapely Values Uncertainty Map Saliency Map Conflicts Resolution Abduction Diagnosis Plan Refinement Strategy Summarization Machine Learning based Narrative-based Robotics • Which axiom is responsible of inference (e.g., classification)? • Abduction/Diagnostic: Find the right root causes (abduction)? How to summarize the reasons (motivation, justification, understanding) for an AI system behavior, and explain the causes of their decisions? Artificial Intelligence • Which agent strategy & plan ? • Which player contributes most? • Why such a conversational flow? Which decisions, combination of multimodal decisions lead to an action? UAI XAI: One Objective, Many ‘AI’s, Many Definitions, Many Approaches
  • 56. Feature Importance Partial Dependence Plot Individual Conditional Expectation Sensitivity Analysis Naive Bayes model Igor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective. Artificial Intelligence in Medicine, 23:89–109, 2001. Counterfactual What-if Brent D. Mittelstadt, Chris Russell, Sandra Wachter: Explaining Explanations in AI. FAT 2019: 279-288 Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, Freddy Lécué: Interpretable Credit Application Predictions With Counterfactual Explanations. CoRR abs/1811.05245 (2018) Interpretable Models: • Decision Trees, Lists and Sets, • GAMs, • GLMs, • Linear regression, • Logistic regression, • KNNs Overview of Explanation in Machine Learning (1)
  • 57. Auto-encoder / Prototype Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin: Deep Learning for Case-Based Reasoning Through Prototypes: A Neural Network That Explains Its Predictions. AAAI 2018: 3530-3537 Surrogate Model Mark Craven, Jude W. Shavlik: Extracting Tree-Structured Representations of Trained Networks. NIPS 1995: 24-30 Attribution for Deep Network (Integrated gradient-based) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In ICML, pp. 3319–3328, 2017. Attention Mechanism Avanti Shrikumar, Peyton Greenside, Anshul Kundaje: Learning Important Features Through Propagating Activation Differences. ICML 2017: 3145-3153 D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations, 2015 Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, Walter F. Stewart: RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism. NIPS 2016: 3504-3512 Chaofan Chen, Oscar Li, Alina Barnett, Jonathan Su, Cynthia Rudin: This looks like that: deep learning for interpretable image recognition. CoRR abs/1806.10574 (2018) Overview of Explanation in Machine Learning (2) ●Artificial Neural Network
  • 58. Uncertainty Map Saliency Map Alex Kendall, Yarin Gal: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? NIPS 2017: 5580-5590 Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, Been Kim: Sanity Checks for Saliency Maps. NeurIPS 2018: 9525-9536 Visual Explanation Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell: Generating Visual Explanations. ECCV (4) 2016: 3-19 David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba: Network Dissection: Quantifying Interpretability of Deep Visual Representations. CVPR 2017: 3319-3327 Interpretable Units Overview of Explanation in Machine Learning (3) ●Computer Vision
  • 59. Shapley Additive Explanation Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-4777 Overview of Explanation in Different AI Fields (1) ●Game Theory
  • 60. Shapley Additive Explanation Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-4777 L-Shapley and C-Shapley (with graph structure) Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan: L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data. ICLR 2019 Overview of Explanation in Different AI Fields (1) ●Game Theory
  • 61. Shapley Additive Explanation Scott M. Lundberg, Su-In Lee: A Unified Approach to Interpreting Model Predictions. NIPS 2017: 4768-4777 L-Shapley and C-Shapley (with graph structure) Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan: L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data. ICLR 2019 instance-wise feature importance (causal influence) Erik Štrumbelj and Igor Kononenko. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1–18, 2010. Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Security and Privacy (SP), 2016 IEEE Symposium on, pp. 598–617. IEEE, 2016. Overview of Explanation in Different AI Fields (1) ●Game Theory
  • 62. Conflicts resolution Barry O'Sullivan, Alexandre Papadopoulos, Boi Faltings, Pearl Pu: Representative Explanations for Over-Constrained Problems. AAAI 2007: 323-328 Robustness Computation Hebrard, E., Hnich, B., & Walsh, T. (2004, July). Robust solutions for constraint satisfaction and optimization. In ECAI (Vol. 16, p. 186). If A+1 then NEW Conflicts on X and Y A • Search and Constraint Satisfaction Overview of Explanation in Different AI Fields (2)
  • 63. Conflicts resolution Barry O'Sullivan, Alexandre Papadopoulos, Boi Faltings, Pearl Pu: Representative Explanations for Over-Constrained Problems. AAAI 2007: 323-328 Constraints relaxation Ulrich Junker: QUICKXPLAIN: Preferred Explanations and Relaxations for Over-Constrained Problems. AAAI 2004: 167-172 Robustness Computation Hebrard, E., Hnich, B., & Walsh, T. (2004, July). Robust solutions for constraint satisfaction and optimization. In ECAI (Vol. 16, p. 186). If A+1 then NEW Conflicts on X and Y A • Search and Constraint Satisfaction Overview of Explanation in Different AI Fields (2)
  • 64. Explaining Reasoning (through Justification) e.g., Subsumption Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1) 1995: 816-821 • Knowledge Representation and Reasoning Overview of Explanation in Different AI Fields (3)
  • 65. Explaining Reasoning (through Justification) e.g., Subsumption Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1) 1995: 816-821 Abduction Reasoning (in Bayesian Network) David Poole: Probabilistic Horn Abduction and Bayesian Networks. Artif. Intell. 64(1): 81-129 (1993) • Knowledge Representation and Reasoning Overview of Explanation in Different AI Fields (3)
  • 66. Explaining Reasoning (through Justification) e.g., Subsumption Deborah L. McGuinness, Alexander Borgida: Explaining Subsumption in Description Logics. IJCAI (1) 1995: 816-821 Diagnosis Inference Alban Grastien, Patrik Haslum, Sylvie Thiébaux: Conflict-Based Diagnosis of Discrete Event Systems: Theory and Practice. KR 2012 Abduction Reasoning (in Bayesian Network) David Poole: Probabilistic Horn Abduction and Bayesian Networks. Artif. Intell. 64(1): 81-129 (1993) • Knowledge Representation and Reasoning Overview of Explanation in Different AI Fields (3)
  • 67. ●Multi-agent Systems Explanation of Agent Conflicts & Harmful Interactions Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A. Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents and Multi-Agent Systems 7(1-2): 29-48 (2003) • Multi-agent Systems Overview of Explanation in Different AI Fields (4)
  • 68. ●Multi-agent Systems Agent Strategy Summarization Ofra Amir, Finale Doshi-Velez, David Sarne: Agent Strategy Summarization. AAMAS 2018: 1203-1207 Explanation of Agent Conflicts & Harmful Interactions Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A. Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents and Multi-Agent Systems 7(1-2): 29-48 (2003) • Multi-agent Systems Overview of Explanation in Different AI Fields (4)
  • 69. ●Multi-agent Systems Agent Strategy Summarization Ofra Amir, Finale Doshi-Velez, David Sarne: Agent Strategy Summarization. AAMAS 2018: 1203-1207 Explanation of Agent Conflicts & Harmful Interactions Katia P. Sycara, Massimo Paolucci, Martin Van Velsen, Joseph A. Giampapa: The RETSINA MAS Infrastructure. Autonomous Agents and Multi-Agent Systems 7(1-2): 29-48 (2003) Explainable Agents Joost Broekens, Maaike Harbers, Koen V. Hindriks, Karel van den Bosch, Catholijn M. Jonker, John-Jules Ch. Meyer: Do You Get It? User-Evaluated Explainable BDI Agents. MATES 2010: 28-39 W. Lewis Johnson: Agents that Learn to Explain Themselves. AAAI 1994: 1257- 1263 • Multi-agent Systems Overview of Explanation in Different AI Fields (4)
  • 70. Explainable NLP Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018) Fine-grained explanations are in the form of: • texts in a real-world dataset; • numerical scores • NLP Overview of Explanation in Different AI Fields (5)
  • 71. LIME for NLP Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144 Explainable NLP Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018) Fine-grained explanations are in the form of: • texts in a real-world dataset; • numerical scores • NLP Overview of Explanation in Different AI Fields (5)
  • 72. LIME for NLP Marco Túlio Ribeiro, Sameer Singh, Carlos Guestrin: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016: 1135-1144 Explainable NLP Hui Liu, Qingyu Yin, William Yang Wang: Towards Explainable NLP: A Generative Explanation Framework for Text Classification. CoRR abs/1811.00196 (2018) Fine-grained explanations are in the form of: • texts in a real-world dataset; • numerical scores Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush: Seq2seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. IEEE Trans. Vis. Comput. Graph. 25(1): 353-363 (2019) NLP Debugger Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfister, Alexander M. Rush: LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks. IEEE Trans. Vis. Comput. Graph. 24(1): 667-676 (2018) • NLP Overview of Explanation in Different AI Fields (5)
  • 73. ●Planning and Scheduling XAI Plan Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018) Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018) • Planning and Scheduling Overview of Explanation in Different AI Fields (6)
  • 74. ●Planning and Scheduling XAI Plan Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018) Human-in-the-loop Planning Maria Fox, Derek Long, Daniele Magazzeni: Explainable Planning. CoRR abs/1709.10256 (2017) Rita Borgo, Michael Cashmore, Daniele Magazzeni: Towards Providing Explanations for AI Planner Decisions. CoRR abs/1810.06338 (2018) (Manual) Plan Comparison • Planning and Scheduling Overview of Explanation in Different AI Fields (6)
  • 75. Narration of Autonomous Robot Experience Stephanie Rosenthal, Sai P Selvaraj, and Manuela Veloso. Verbalization: Narration of autonomous robot experience. In IJCAI, pages 862–868. AAAI Press, 2016. Daniel J Brooks et al. 2010. Towards State Summarization for Autonomous Robots.. In AAAI Fall Symposium: Dialog with Robots, Vol. 61. 62. • Robotics Overview of Explanation in Different AI Fields (7)
  • 76. Narration of Autonomous Robot Experience Stephanie Rosenthal, Sai P Selvaraj, and Manuela Veloso. Verbalization: Narration of autonomous robot experience. In IJCAI, pages 862–868. AAAI Press, 2016. From Decision Tree to human-friendly information Raymond Ka-Man Sheh: "Why Did You Do That?" Explainable Intelligent Robots. AAAI Workshops 2017 Daniel J Brooks et al. 2010. Towards State Summarization for Autonomous Robots.. In AAAI Fall Symposium: Dialog with Robots, Vol. 61. 62. Overview of Explanation in Different AI Fields (7) • Robotics
  • 77. Probabilistic Graphical Models Daphne Koller, Nir Friedman: Probabilistic Graphical Models - Principles and Techniques. MIT Press 2009, ISBN 978-0-262-01319-2, pp. I-XXXV, 1-1231 • Reasoning under Uncertainty Overview of Explanation in Different AI Fields (8)
  • 78. Explainable Machine Learning (from a Machine Learning Perspective) 78
  • 79. Achieving Explainable AI Approach 1: Post-hoc explain a given AI model ● Individual prediction explanations in terms of input features, influential examples, concepts, local decision rules ● Global prediction explanations in terms of entire model in terms of partial dependence plots, global feature importance, global decision rules Approach 2: Build an interpretable model ● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive Models (GAMs) 79
  • 81. Achieving Explainable AI Approach 1: Post-hoc explain a given AI model ● Individual prediction explanations in terms of input features, influential examples, concepts, local decision rules ● Global prediction explanations in terms of entire model in terms of partial dependence plots, global feature importance, global decision rules Approach 2: Build an interpretable model ● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive Models (GAMs) 81
  • 82. Top label: “clog” Why did the network label this image as “clog”? 82
  • 83. Top label: “fireboat” Why did the network label this image as “fireboat”? 83
  • 84. Credit Line Increase Fair lending laws [ECOA, FCRA] require credit decisions to be explainable Bank Credit Lending Model Why? Why not? How? ? Request Denied Query AI System Credit Lending Score = 0.3 Credit Lending in a black-box ML world
  • 85. Attribute a model’s prediction on an input to features of the input Examples: ● Attribute an object recognition network’s prediction to its pixels ● Attribute a text sentiment network’s prediction to individual words ● Attribute a lending model’s prediction to its features A reductive formulation of “why this prediction” but surprisingly useful The Attribution Problem
  • 86. Applications of Attributions ● Debugging model predictions E.g., Attribute an image misclassification to the pixels responsible for it ● Generating an explanation for the end-user E.g., Expose attributions for a lending prediction to the end-user ● Analyzing model robustness E.g., Craft adversarial examples using weaknesses surfaced by attributions ● Extracting rules from the model E.g., Combine attributions to craft rules (pharmacophores) capturing the prediction logic of a drug screening network 86
  • 87. Next few slides We will cover the following attribution methods** ● Ablations ● Gradient based methods (specific to differentiable models) ● Score Backpropagation based methods (specific to NNs) We will also discuss game theory (Shapley value) in attributions **Not a complete list! See Ancona et al. [ICML 2019], Guidotti et al. [arxiv 2018] for a comprehensive survey 87
  • 88. Ablations Drop each feature and attribute the change in prediction to that feature Pros: ● Simple and intuitive to interpret Cons: ● Unrealistic inputs ● Improper accounting of interactive features ● Can be computationally expensive 88
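To make the idea concrete, here is a minimal sketch of ablation-based attribution. It assumes a scikit-learn-style classifier with a `predict_proba` method and uses externally supplied baseline values (e.g., training means) to "drop" each feature; both are illustrative assumptions rather than part of the original slide.

```python
import numpy as np

def ablation_attributions(model, x, baseline, target_class):
    """Attribute the change in predicted probability caused by ablating each feature.

    model        : any object with a predict_proba method (scikit-learn style)
    x            : 1-D array, the instance to explain
    baseline     : 1-D array, values used to "remove" a feature (e.g., training means)
    target_class : index of the class whose score we explain
    """
    x = np.asarray(x, dtype=float)
    base_score = model.predict_proba(x.reshape(1, -1))[0, target_class]
    attributions = np.zeros_like(x)
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = baseline[i]          # drop feature i
        ablated_score = model.predict_proba(x_ablated.reshape(1, -1))[0, target_class]
        attributions[i] = base_score - ablated_score   # change caused by removing feature i
    return attributions
```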
  • 89. Feature*Gradient Attribution to a feature is feature value times gradient, i.e., x_i * ∂y/∂x_i ● Gradient captures sensitivity of output w.r.t. feature ● Equivalent to Feature*Coefficient for linear models ○ First-order Taylor approximation of non-linear models ● Popularized by Saliency Maps [NIPS 2013], Baehrens et al. [JMLR 2010] 89 Gradients in the vicinity of the input seem like noise?
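A hedged sketch of Feature*Gradient for a generic scalar scoring function `f`. In practice the gradient would come from the framework's autodiff; a central finite difference is used here only to keep the example self-contained.

```python
import numpy as np

def gradient_x_input(f, x, eps=1e-4):
    """Feature*Gradient attribution: x_i * df/dx_i.
    f maps a 1-D array to a scalar score; the gradient is estimated numerically."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        grad[i] = (f(x_plus) - f(x_minus)) / (2 * eps)   # central difference
    return x * grad
```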
  • 90. Local linear approximations can be too local 90 [figure: the "fireboat-ness" score of the image saturates as the input is scaled from 0.0 to 1.0; gradients are interesting at low intensities and uninteresting (saturated) near the actual input]
  • 91. Score Back-Propagation based Methods Re-distribute the prediction score through the neurons in the network ● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014] Easy case: Output of a neuron is a linear function of previous neurons (i.e., ni = ⅀ wij * nj) e.g., the logit neuron ● Re-distribute the contribution in proportion to the coefficients wij 91 Image credit heatmapping.org
  • 92. Score Back-Propagation based Methods Re-distribute the prediction score through the neurons in the network ● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014] Tricky case: Output of a neuron is a non-linear function, e.g., ReLU, Sigmoid, etc. ● Guided BackProp: Only consider ReLUs that are on (linear regime), and which contribute positively ● LRP: Use first-order Taylor decomposition to linearize activation function ● DeepLift: Distribute activation difference relative to a reference point in proportion to edge weights 92 Image credit heatmapping.org
  • 93. Score Back-Propagation based Methods Re-distribute the prediction score through the neurons in the network ● LRP [JMLR 2017], DeepLift [ICML 2017], Guided BackProp [ICLR 2014] Pros: ● Conceptually simple ● Methods have been empirically validated to yield sensible results Cons: ● Hard to implement, requires instrumenting the model ● Often breaks implementation invariance Think: F(x, y, z) = x * y * z and G(x, y, z) = x * (y * z) Image credit heatmapping.org
  • 94. Baselines and additivity ● When we decompose the score via backpropagation, we imply a normative alternative called a baseline ○ “Why Pr(fireboat) = 0.91 [instead of 0.00]” ● Common choice is an informationless input for the model ○ E.g., Black image for image models ○ E.g., Empty text or zero embedding vector for text models ● Additive attributions explain F(input) - F(baseline) in terms of input features
  • 95. Another approach: gradients at many points [figure: inputs scaled from the baseline to the actual input, with gradients computed at each scaled input; gradients are interesting at low intensities and uninteresting (saturated) near 1.0]
  • 96. IG(input, base) ::= (input - base) * ∫₀¹ ∇F(α*input + (1-α)*base) dα Original image Integrated Gradients Integrated Gradients [ICML 2017] Integrate the gradients along a straight-line path from baseline to input
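A minimal Riemann-sum sketch of the integral above for a scalar-valued function `f`. A real implementation would use framework gradients and batch the scaled inputs; the finite-difference gradient here is only for self-containment.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50, eps=1e-4):
    """Approximate IG_i = (x_i - baseline_i) * (1/steps) * sum_k dF/dx_i at
    baseline + (k/steps) * (x - baseline), k = 1..steps."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)

    def grad(z):
        g = np.zeros_like(z)
        for i in range(len(z)):
            zp, zm = z.copy(), z.copy()
            zp[i] += eps
            zm[i] -= eps
            g[i] = (f(zp) - f(zm)) / (2 * eps)
        return g

    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)   # point on the straight-line path
        total += grad(point)
    return (x - baseline) * total / steps
```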
  • 98. Original image “Clog” Why is this image labeled as “clog”?
  • 99. Original image Integrated Gradients (for label “clog”) “Clog” Why is this image labeled as “clog”?
  • 100. Detecting an architecture bug ● Deep network [Kearns, 2016] predicts if a molecule binds to certain DNA site ● Finding: Some atoms had identical attributions despite different connectivity
  • 101. ● Deep network [Kearns, 2016] predicts if a molecule binds to certain DNA site ● Finding: Some atoms had identical attributions despite different connectivity Detecting an architecture bug ● Bug: The architecture had a bug due to which the convolved bond features did not affect the prediction!
  • 102. ● Deep network predicts various diseases from chest x-rays Original image Integrated gradients (for top label) Detecting a data issue
  • 103. ● Deep network predicts various diseases from chest x-rays ● Finding: Attributions fell on radiologist’s markings (rather than the pathology) Original image Integrated gradients (for top label) Detecting a data issue
  • 104. Cooperative game theory in attributions 104
  • 105. Classic result in game theory on distributing gain in a coalition game ● Coalition Games ○ Players collaborating to generate some gain (think: revenue) ○ Set function v(S) determining the gain for any subset S of players Shapley Value [Annals of Mathematical studies,1953]
  • 106. Classic result in game theory on distributing gain in a coalition game ● Coalition Games ○ Players collaborating to generate some gain (think: revenue) ○ Set function v(S) determining the gain for any subset S of players ● Shapley Values are a fair way to attribute the total gain to the players based on their contributions ○ Concept: Marginal contribution of a player to a subset of other players (v(S U {i}) - v(S)) ○ Shapley value for a player is a specific weighted aggregation of its marginals over all possible subsets of other players Shapley Value for player i = Σ_{S ⊆ N \ {i}} w(S) * (v(S U {i}) - v(S)), where w(S) = |S|! (|N| - |S| - 1)! / |N|! Shapley Value [Annals of Mathematics Studies, 1953]
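The definition translates directly into code. Below is an exact (exponential-time) computation for a toy coalition game; the two-player game at the end is an invented example used only to check the efficiency property (attributions sum to v(N)).

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a coalition game.
    players : list of player identifiers
    v       : set function, v(frozenset_of_players) -> gain
    """
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                S = frozenset(subset)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (v(S | {i}) - v(S))   # weighted marginal contribution
        values[i] = phi
    return values

# Toy usage: an invented two-player game
gain = {frozenset(): 0, frozenset({'a'}): 1, frozenset({'b'}): 2, frozenset({'a', 'b'}): 4}
print(shapley_values(['a', 'b'], lambda S: gain[frozenset(S)]))  # {'a': 1.5, 'b': 2.5}
```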
  • 107. Shapley values are unique under four simple axioms ● Dummy: If a player never contributes to the game then it must receive zero attribution ● Efficiency: Attributions must add to the total gain ● Symmetry: Symmetric players must receive equal attribution ● Linearity: Attribution for the (weighted) sum of two games must be the same as the (weighted) sum of the attributions for each of the games Shapley Value Justification
  • 108. SHAP [NeurIPS 2017], QII [S&P 2016], Štrumbelj & Kononenko [JMLR 2010] ● Define a coalition game for each model input X ○ Players are the features in the input ○ Gain is the model prediction (output), i.e., gain = F(X) ● Feature attributions are the Shapley values of this game Shapley Values for Explaining ML models
  • 109. SHAP [NeurIPS 2017], QII [S&P 2016], Štrumbelj & Kononenko [JMLR 2010] ● Define a coalition game for each model input X ○ Players are the features in the input ○ Gain is the model prediction (output), i.e., gain = F(X) ● Feature attributions are the Shapley values of this game Challenge: Shapley values require the gain to be defined for all subsets of players ● What is the prediction when some players (features) are absent? i.e., what is F(x_1, <absent>, x_3, …, <absent>)? Shapley Values for Explaining ML models
  • 110. Key Idea: Take the expected prediction when the (absent) feature is sampled from a certain distribution. Different approaches choose different distributions ● [SHAP, NIPS 2017] Use conditional distribution w.r.t. the present features ● [QII, S&P 2016] Use marginal distribution ● [Štrumbelj & Kononenko, JMLR 2010] Use uniform distribution Modeling Feature Absence Preprint: The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory
  • 111. Exact Shapley value computation is exponential in the number of features ● Shapley values can be expressed as an expectation of marginals: φ(i) = E_{S ~ D}[marginal(S, i)] ● Sampling-based methods can be used to approximate the expectation ● See: "Computational Aspects of Cooperative Game Theory", Chalkiadakis et al. 2011 ● The method is still computationally infeasible for models with hundreds of features, e.g., image models Computing Shapley Values
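A sampling sketch combining the two ideas above: feature absence is modeled by filling in values drawn from a background dataset (a marginal-distribution choice, similar in spirit to QII), and the expectation over subsets is approximated with random permutations. The scoring function and background data are placeholders; this is not the SHAP or QII implementation.

```python
import numpy as np

def sampled_shapley(f, x, background, n_samples=200, rng=None):
    """Monte-Carlo Shapley estimate for one prediction f(x).

    f          : function mapping a 1-D feature array to a scalar score
    x          : 1-D array, instance to explain
    background : 2-D array of reference rows used to fill in "absent" features
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)                              # random player ordering
        z = background[rng.integers(len(background))].astype(float)
        prev = f(z)                                             # coalition with no features from x
        for i in order:
            z[i] = x[i]                                         # add feature i to the coalition
            curr = f(z)
            phi[i] += curr - prev                               # marginal contribution of i
            prev = curr
    return phi / n_samples
```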
  • 112. ● Values of Non-Atomic Games (1974): Aumann and Shapley extend their method → players can contribute fractionally ● Aumann-Shapley values calculated by integrating along a straight-line path… same as Integrated Gradients! ● IG through a game theory lens: continuous game, feature absence is modeled by replacement with a baseline value ● Axiomatically justified as a result: ○ Integrated Gradients is the unique path-integral method satisfying: Sensitivity, Insensitivity, Linearity preservation, Implementation invariance, Completeness, and Symmetry Non-atomic Games: Aumann-Shapley Values and IG
  • 113. Baselines (or Norms) are essential to explanations [Kahneman-Miller 86] ● E.g., A man suffers from indigestion. The doctor blames it on a stomach ulcer. The wife blames it on eating turnips. Both are correct relative to their baselines. ● The baseline may also be an important analysis knob. Attributions are contrastive, whether we think about it or not. Lesson learned: baselines are important
  • 114. Some limitations and caveats for attributions
  • 115. Some things that are missing: ● Feature interactions (ignored or averaged out) ● What training examples influenced the prediction (training agnostic) ● Global properties of the model (prediction-specific) An instance where attributions are useless: ● A model that predicts TRUE when there is an even number of black pixels and FALSE otherwise Attributions don't explain everything
  • 116. Attributions are for human consumption Naive scaling of attributions from 0 to 255 Attributions have a large range and long tail across pixels After clipping attributions at 99% to reduce range ● Humans interpret attributions and generate insights ○ Doctor maps attributions for x-rays to pathologies ● Visualization matters as much as the attribution technique
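A small sketch of the clipping step described above, assuming `attr` is an array of per-pixel attributions; the 99th-percentile cap is the illustrative choice from the slide.

```python
import numpy as np

def attributions_to_grayscale(attr, clip_percentile=99):
    """Map per-pixel attributions to a 0-255 grayscale image.
    Clipping at a high percentile tames the long tail so the few largest
    attributions do not wash out everything else."""
    attr = np.abs(np.asarray(attr, dtype=float))
    cap = np.percentile(attr, clip_percentile)
    attr = np.clip(attr, 0, cap)
    if cap > 0:
        attr = attr / cap
    return (attr * 255).astype(np.uint8)
```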
  • 117. Other individual prediction explanation methods
  • 118. Local Interpretable Model-agnostic Explanations (Ribeiro et al. KDD 2016) 118 Figure credit: Anchors: High-Precision Model-Agnostic Explanations. Ribeiro et al. AAAI 2018 Figure credit: Ribeiro et al. KDD 2016
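A simplified tabular sketch of the LIME recipe (perturb locally, weight by proximity, fit a weighted linear surrogate). The Gaussian perturbation scale and exponential kernel are illustrative choices; the official `lime` package handles details such as categorical features and discretization that are omitted here. A binary classifier with `predict_proba` is assumed.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, n_samples=1000, sigma=0.5, kernel_width=1.0, rng=None):
    """LIME-style sketch: the returned coefficients are local feature effects."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    Z = x + rng.normal(scale=sigma, size=(n_samples, len(x)))   # local perturbations
    y = predict_proba(Z)[:, 1]                                  # black-box scores (binary case)
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))        # proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)                  # weighted linear fit
    return surrogate.coef_
```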
  • 119. Anchors 119 Figure credit: Anchors: High-Precision Model-Agnostic Explanations. Ribeiro et al. AAAI 2018
  • 120. Influence functions ● Trace a model’s prediction through the learning algorithm and back to its training data ● Training points “responsible” for a given prediction 120 Figure credit: Understanding Black-box Predictions via Influence Functions. Koh and Liang. ICML 2017
  • 121. Example based Explanations 121 ● Prototypes: Representative of all the training data. ● Criticisms: Data instance that is not well represented by the set of prototypes. Figure credit: Examples are not Enough, Learn to Criticize! Criticism for Interpretability. Kim, Khanna and Koyejo. NIPS 2016 Learned prototypes and criticisms from Imagenet dataset (two types of dog breeds)
  • 123. Global Explanations Methods ● Partial Dependence Plot: Shows the marginal effect one or two features have on the predicted outcome of a machine learning model 123
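A minimal sketch of a one-feature partial dependence computation for a classifier with `predict_proba`; scikit-learn also ships `sklearn.inspection.partial_dependence`, which is what you would typically use in practice.

```python
import numpy as np

def partial_dependence_1d(model, X, feature, grid_points=20):
    """Average prediction over the data while the chosen feature is forced
    to each value on a grid (the marginal effect shown in a PDP)."""
    X = np.asarray(X, dtype=float)
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        pd_values.append(model.predict_proba(X_mod)[:, 1].mean())
    return grid, np.array(pd_values)
```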
  • 124. Global Explanations Methods ● Permutations: The importance of a feature is the increase in the prediction error of the model after we permuted the feature’s values, which breaks the relationship between the feature and the true outcome. 124
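A sketch of permutation feature importance as described above, here measured as the drop in accuracy after shuffling each column (scikit-learn provides a similar `permutation_importance` utility).

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importances(model, X, y, n_repeats=5, rng=None):
    """Importance of a feature = average drop in accuracy after shuffling that
    feature, which breaks its relationship with the true outcome."""
    rng = rng or np.random.default_rng(0)
    X = np.asarray(X)
    baseline = accuracy_score(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # shuffle column j only
            drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```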
  • 125. Achieving Explainable AI Approach 1: Post-hoc explain a given AI model ● Individual prediction explanations in terms of input features, influential examples, concepts, local decision rules ● Global prediction explanations in terms of entire model in terms of partial dependence plots, global feature importance, global decision rules Approach 2: Build an interpretable model ● Logistic regression, Decision trees, Decision lists and sets, Generalized Additive Models (GAMs) 125
  • 126. Decision Trees 126 Is the person fit? Age < 30? Eats a lot of pizzas? Exercises in the morning? Unfit Unfit Fit Fit Yes No Yes Yes No No
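A brief illustration of why shallow trees are considered interpretable: the fitted model can be printed as a small set of if-then paths. The dataset below is just a convenient stand-in, not tied to the toy example on the slide.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
# A shallow tree stays human-readable: each prediction is a short path of if-then tests.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```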
  • 127. Decision Set 127 Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach, Leskovec
  • 129. Decision List 129 Figure credit: Interpretable Decision Sets: A Joint Framework for Description and Prediction, Lakkaraju, Bach, Leskovec
  • 130. Falling Rule List A falling rule list is an ordered list of if-then rules (falling rule lists are a type of decision list), such that the estimated probability of success decreases monotonically down the list. Thus, a falling rule list directly contains the decision-making process, whereby the most at-risk observations are classified first, then the second set, and so on. 130
  • 131. Box Drawings for Rare Classes 131 Figure credit: Box Drawings for Learning with Imbalanced. Data Siong Thye Goh and Cynthia Rudin
  • 132. Supersparse Linear Integer Models for Optimized Medical Scoring Systems Figure credit: Supersparse Linear Integer Models for Optimized Medical Scoring Systems. Berk Ustun and Cynthia Rudin 132
  • 133. K-Nearest Neighbors 133 Explanation in terms of nearest training data points responsible for the decision
  • 134. GLMs and GAMs 134 Intelligible Models for Classification and Regression. Lou, Caruana and Gehrke KDD 2012 Accurate Intelligible Models with Pairwise Interactions. Lou, Caruana, Gehrke and Hooker. KDD 2013
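As a minimal illustration of the GLM side of this slide (GAMs replace each linear term with a learned per-feature shape function), here is a logistic-regression GLM on synthetic data whose coefficients can be read off directly; the data-generating weights are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([1.5, -2.0, 0.5])                      # invented weights for the toy data
y = (X @ true_w + rng.normal(size=500) > 0).astype(int)

# Logistic regression as a GLM: each coefficient is a per-feature effect on the log-odds.
glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(glm.summary())
```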
  • 135. Explainable Machine Learning (from a Knowledge Graph Perspective) 135 Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 136. August 28th, 2019 Tutorial on Explainable AI 136 Knowledge Graph (1) Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 137. August 28th, 2019 Tutorial on Explainable AI 137 Knowledge Graph (2) Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 138. August 28th, 2019 Tutorial on Explainable AI 138 Knowledge Graph Construction Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 139. https://stats.stackexchange.com/questions/230581/decision-tree-too-large-to-interpret Knowledge Graph in Machine Learning (1) Augmenting (input) features with more semantics such as knowledge graph embeddings / entities Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 140. https://stats.stackexchange.com/questions/230581/decision-tree-too-large-to-interpret Knowledge Graph in Machine Learning (2) Augmenting machine learning models with more semantics such as knowledge graph entities Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 141. Training Data Input (unlabeled image) Neurons respond to simple shapes Neurons respond to more complex structures Neurons respond to highly complex, abstract concepts 1st Layer 2nd Layer nth Layer Low-level features to high-level features Knowledge Graph in Machine Learning (3) Augmenting (intermediate) features with more semantics such as knowledge graph embeddings / entities Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 142. Training Data Input (unlabeled image) Neurons respond to simple shapes Neurons respond to more complex structures Neurons respond to highly complex, abstract concepts 1st Layer 2nd Layer nth Layer Low-level features to high-level features Knowledge Graph in Machine Learning (4) Augmenting (input, intermediate) features – output relationship with more semantics to capture causal relationship Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 143. Knowledge Graph in Machine Learning (5) Description 1: This is an orange train accident Description 2: This is a train accident between two speed merchant trains of characteristics X43-B and Y33-C in a dry environment Description 3: This is a public transportation accident Augmenting models with semantics to support personalized explanation Freddy Lécué: On the role of knowledge graphs in explainable AI. Semantic Web 11(1): 41-51 (2020)
  • 144. Knowledge Graph in Machine Learning (6) How to explain transfer learning with appropriate knowledge representation? Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358 Augmenting input features and domains with semantics to support interpretable transfer learning
  • 146. State of the Art Machine Learning Applied to Critical Systems
  • 148. Lumbermill - .59 Object (Obstacle) Detection Task State-of-the-art ML Result
  • 149. Lumbermill - .59 Object (Obstacle) Detection Task State-of-the-art ML Result Boulder - .09 Railway - .11
  • 150. State of the Art XAI Applied to Critical Systems
  • 151. Lumbermill - .59 Object (Obstacle) Detection Task State-of-the-art XAI Result
  • 152. Unfortunately, this is of NO use for a human behind the system
  • 153. Let’s stay back Why this Explanation? (meta explanation)
  • 154. Object (Obstacle) Detection Task State-of-the-art Result Lumbermill - .59 After Human Reasoning…
  • 155. Lumbermill - .59 What is missing?
  • 156. Lumbermill - .59 Boulder - .09 Railway - .11 Context matters
  • 157. • Hardware: High performance, scalable, generic (to different FPGA families) & portable CNN-dedicated programmable processor implemented on an FPGA for real-time embedded inference • Software: Knowledge graph extension of object detection Transitioning This is an Obstacle: Boulder obstructing the train: XG142-R on Rail_Track from City: Cannes to City: Marseille at Location: Tunnel VIX due to Landslide
  • 158. Tunnel - .74 Boulder - .81 Railway - .90 Rail Track Boulder Train operating on Obstacle Tunnel obstructing Landslide
  • 159. Freddy Lécué, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen: Augmenting Transfer Learning with Semantic Reasoning. IJCAI 2019: 1779-1785 Freddy Lécué, Tanguy Pommellet: Feeding Machine Learning with Knowledge Graphs for Explainable Object Detection. ISWC Satellites 2019: 277-280 Freddy Lécué, Baptiste Abeloos, Jonathan Anctil, Manuel Bergeron, Damien Dalla- Rosa, Simon Corbeil-Letourneau, Florian Martet, Tanguy Pommellet, Laura Salvan, Simon Veilleux, Maryam Ziaeefard: Thales XAI Platform: Adaptable Explanation of Machine Learning Systems - A Knowledge Graphs Perspective. ISWC Satellites 2019: 315-316 Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358 Knowledge Graph in Machine Learning - An Implementation
  • 160. XAI Case Studies in Industry: Applications, Lessons Learned, and Research Challenges 160
  • 161. Challenge: Object detection is usually performed from a large portfolio of Artificial Neural Networks (ANNs) architectures trained on large amount of labelled data. Explaining object detections is rather difficult due to the high complexity of the most accurate ANNs. AI Technology: Integration of AI related technologies i.e., Machine Learning (Deep Learning / CNNs), and knowledge graphs / linked open data. XAI Technology: Knowledge graphs and Artificial Neural Networks Explainable Boosted Object Detection – Industry Agnostic
  • 162. Context ● Explanation in Machine Learning systems has been identified to be the one asset to have for large scale deployment of Artificial Intelligence (AI) in critical systems ● Explanations could be example-based (who is similar), features-based (what is driving decision), or even counterfactual (what-if scenario) to potentially action on an AI system; they could be represented in many different ways e.g., textual, graphical, visual Goal ● All representations serve different means, purpose and operators. We designed the first-of-its-kind XAI platform for critical systems i.e., the Thales Explainable AI Platform which aims at serving explanations through various forms Approach: Model-Agnostic ● [AI:ML] Grad-Cam, Shapley, Counter-factual, Knowledge graph Thales XAI Platform
  • 163.
  • 164. Challenge: Designing Artificial Neural Network architectures requires lots of experimentation (i.e., training phases) and parameters tuning (optimization strategy, learning rate, number of layers…) to reach optimal and robust machine learning models. AI Technology: Artificial Neural Network XAI Technology: Artificial Neural Network, 3D Modeling and Simulation Platform For AI Debugging Artificial Neural Networks – Industry Agnostic Zetane.com
  • 165.
  • 166. Challenge: Public transportation is getting more and more self-driving vehicles. Even if trains are getting more and more autonomous, the human stays in the loop for critical decision, for instance in case of obstacles. In case of obstacles trains are required to provide recommendation of action i.e., go on or go back to station. In such a case the human is required to validate the recommendation through an explanation exposed by the train or machine. AI Technology: Integration of AI related technologies i.e., Machine Learning (Deep Learning / CNNs), and semantic segmentation. XAI Technology: Deep learning and Epistemic uncertainty Obstacle Identification Certification (Trust) - Transportation
  • 167. Challenge: Predicting and explaining aircraft engine performance AI Technology: Artificial Neural Networks XAI Technology: Shapley Values Explaining Flight Performance - Transportation
  • 168. Challenge: Globally 323,454 flights are delayed every year. Airline-caused delays totaled 20.2 million minutes last year, generating huge costs for the company. The existing in-house technique reaches 53% accuracy for predicting flight delay, does not provide any time estimation (in minutes as opposed to True/False) and is unable to capture the underlying reasons (explanation). AI Technology: Integration of AI related technologies i.e., Machine Learning (Deep Learning / Recurrent Neural Networks), Reasoning (through semantics-augmented case-based reasoning) and Natural Language Processing for building a robust model which can (1) predict flight delays in minutes, (2) explain delays by comparing with historical cases. XAI Technology: Knowledge graph embedded Sequence Learning using LSTMs Jiaoyan Chen, Freddy Lécué, Jeff Z. Pan, Ian Horrocks, Huajun Chen: Knowledge-Based Transfer Learning Explanation. KR 2018: 349-358 Nicholas McCarthy, Mohammad Karzand, Freddy Lecue: Amsterdam to Dublin Eventually Delayed? LSTM and Transfer Learning for Predicting Delays of Low Cost Airlines. AAAI 2019 Explainable On-Time Performance - Transportation
  • 169. Challenge: Accenture manages every year more than 80,000 opportunities and 35,000 contracts with an expected revenue of $34.1 billion. Revenue expectation does not meet estimation due to the complexity and risks of critical contracts. This is, in part, due to (1) the large volume of projects to assess and control, and (2) the existing non-systematic assessment process. AI Technology: Integration of AI technologies i.e., Machine Learning, Reasoning, Natural Language Processing for building a robust model which can (1) predict revenue loss, (2) recommend corrective actions, and (3) explain why such actions might have a positive impact. XAI Technology: Knowledge graph embedded Random Forest Jiewen Wu, Freddy Lécué, Christophe Guéret, Jer Hayes, Sara van de Moosdijk, Gemma Gallagher, Peter McCanney, Eugene Eichelberger: Personalizing Actions in Context for Risk Management Using Semantic Web Technologies. International Semantic Web Conference (2) 2017: 367-383 Explainable Risk Management - Finance
  • 170. Challenge: Predicting and explaining abnormally employee expenses (as high accommodation price in 1000+ cities). AI Technology: Various techniques have been matured over the last two decades to achieve excellent results. However most methods address the problem from a statistic and pure data-centric angle, which in turn limit any interpretation. We elaborated a web application running live with real data from (i) travel and expenses from Accenture, (ii) external data from third party such as Google Knowledge Graph, DBPedia (relational DataBase version of Wikipedia) and social events from Eventful, for explaining abnormalities. XAI Technology: Knowledge graph embedded Ensemble Learning Freddy Lécué, Jiewen Wu: Explaining and predicting abnormal expenses at large scale using knowledge graph based reasoning. J. Web Sem. 44: 89-103 (2017) Explainable Anomaly Detection – Finance (Compliance)
  • 171. Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, Freddy Lécué: Interpretable Credit Application Predictions With Counterfactual Explanations. FEAP-AI4fin workshop, NeurIPS, 2018. Counterfactual Explanations for Credit Decisions (3) - Finance
  • 172. Challenge: Explaining medical condition relapse in the context of oncology. AI Technology: Relational learning XAI Technology: Knowledge graphs and Artificial Neural Networks Explanation of Medical Condition Relapse – Health Knowledge graph parts explaining medical condition relapse
  • 173. Case Study: Talent Platform “Diversity Insights and Fairness-Aware Ranking” Sahin Cem Geyik, Krishnaram Kenthapadi 173
  • 175. Insights to Identify Diverse Talent Pools Representative Talent Search Results Diversity Learning Curriculum “Diversity by Design” in LinkedIn’s Talent Solutions
  • 179. Inclusive Job Descriptions / Recruiter Outreach
  • 180. Representative Ranking for Talent Search S. C. Geyik, S. Ambler, K. Kenthapadi, Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search, KDD’19. [Microsoft’s AI/ML conference (MLADS’18). Distinguished Contribution Award] Building Representative Talent Search at LinkedIn (LinkedIn engineering blog)
  • 181. Intuition for Measuring and Achieving Representativeness Ideal: Top ranked results should follow a desired distribution on gender/age/… E.g., same distribution as the underlying talent pool Inspired by “Equal Opportunity” definition [Hardt et al, NIPS’16] Defined measures (skew, divergence) based on this intuition
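One way such a skew measure can be sketched (a simplification under assumptions, not the exact definition from the KDD'19 paper): the log-ratio between the observed share of an attribute value in the top-k results and its desired share.

```python
import numpy as np

def skew_at_k(ranked_attrs, attr_value, desired_prop, k):
    """Sketch of a Skew@k-style measure: 0 means the top k matches the desired
    distribution for attr_value; negative means it is under-represented."""
    top_k = ranked_attrs[:k]
    actual_prop = np.mean([a == attr_value for a in top_k])
    eps = 1e-6                                   # avoid log(0) for empty groups
    return float(np.log((actual_prop + eps) / (desired_prop + eps)))
```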
  • 182. Desired Proportions within the Attribute of Interest Compute the proportions of the values of the attribute (e.g., gender, gender-age combination) amongst the set of qualified candidates ● “Qualified candidates” = Set of candidates that match the search query criteria ● Retrieved by LinkedIn’s Galene search engine Desired proportions could also be obtained based on legal mandate / voluntary commitment
  • 183. Fairness-aware Reranking Algorithm (Simplified) Partition the set of potential candidates into different buckets for each attribute value Rank the candidates in each bucket according to the scores assigned by the machine-learned model Merge the ranked lists, balancing the representation requirements and the selection of highest scored candidates Representation requirement: Desired distribution on gender/age/… Algorithmic variants based on how we achieve this balance
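A greedy sketch in the spirit of the simplified algorithm above, not the exact LinkedIn implementation. Candidates are bucketed by attribute value; each slot is filled by the globally best-scored candidate unless some group has fallen below its desired minimum count, in which case that group is served first.

```python
import heapq

def fairness_aware_rerank(candidates, desired_props, k):
    """candidates: list of (score, attribute_value); desired_props: value -> proportion."""
    buckets = {}
    for score, attr in candidates:
        buckets.setdefault(attr, []).append((-score, attr))   # max-heaps via negated scores
    for heap in buckets.values():
        heapq.heapify(heap)
    counts = {a: 0 for a in buckets}
    ranking = []
    for pos in range(1, k + 1):
        # groups that would fall below their desired minimum count at this position
        below_min = [a for a in buckets
                     if buckets[a] and counts[a] < int(desired_props.get(a, 0.0) * pos)]
        eligible = below_min or [a for a in buckets if buckets[a]]
        if not eligible:
            break
        # among eligible groups, take the one holding the highest-scored candidate
        attr = max(eligible, key=lambda a: -buckets[a][0][0])
        neg_score, a = heapq.heappop(buckets[attr])
        ranking.append((-neg_score, a))
        counts[a] += 1
    return ranking
```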
  • 184. Validating Our Approach Gender Representativeness ● Over 95% of all searches are representative compared to the qualified population of the search Business Metrics ● A/B test over LinkedIn Recruiter users for two weeks ● No significant change in business metrics (e.g., # InMails sent or accepted) Ramped to 100% of LinkedIn Recruiter users worldwide
  • 185. Lessons learned • Post-processing approach desirable • Model agnostic • Scalable across different model choices for our application • Acts as a “fail-safe” • Robust to application-specific business logic • Easier to incorporate as part of existing systems • Build a stand-alone service or component for post-processing • No significant modifications to the existing components • Complementary to efforts to reduce bias from training data & during model training • Collaboration/consensus across key stakeholders
  • 186. Acknowledgements LinkedIn Talent Solutions Diversity team, Hire & Careers AI team, Anti-abuse AI team, Data Science Applied Research team Special thanks to Deepak Agarwal, Parvez Ahammad, Stuart Ambler, Kinjal Basu, Jenelle Bray, Erik Buchanan, Bee-Chung Chen, Patrick Cheung, Gil Cottle, Cyrus DiCiccio, Patrick Driscoll, Carlos Faham, Nadia Fawaz, Priyanka Gariba, Meg Garlinghouse, Gurwinder Gulati, Rob Hallman, Sara Harrington, Joshua Hartman, Daniel Hewlett, Nicolas Kim, Rachel Kumar, Nicole Li, Heloise Logan, Stephen Lynch, Divyakumar Menghani, Varun Mithal, Arashpreet Singh Mor, Tanvi Motwani, Preetam Nandy, Lei Ni, Nitin Panjwani, Igor Perisic, Hema Raghavan, Romer Rosales, Guillaume Saint-Jacques, Badrul Sarwar, Amir Sepehri, Arun Swami, Ram Swaminathan, Grace Tang, Ketan Thakkar, Sriram Vasudevan, Janardhanan Vembunarayanan, James Verbus, Xin Wang, Hinkmond Wong, Ya Xu, Lin Yang, Yang Yang, Chenhui Zhai, Liang Zhang, Yani Zhang
  • 187. Engineering for Fairness in AI Lifecycle Problem Formation Dataset Construction Algorithm Selection Training Process Testing Process Deployment Feedback Is an algorithm an ethical solution to our problem? Does our data include enough minority samples? Are there missing/biased features? Do we need to apply debiasing algorithms to preprocess our data? Do we need to include fairness constraints in the function? Have we evaluated the model using relevant fairness metrics? Are we deploying our model on a population that we did not train/test on? Are there unequal effects across users? Does the model encourage feedback loops that can produce increasingly unfair outcomes? Credit: K. Browne & J. Draper
  • 188. Engineering for Fairness in AI Lifecycle S. Vasudevan, K. Kenthapadi, FairScale: A Scalable Framework for Measuring Fairness in AI Applications, 2019
  • 189. FairScale System Architecture [Vasudevan & Kenthapadi, 2019] • Flexibility of Use (Platform agnostic) • Ad-hoc exploratory analyses • Deployment in offline workflows • Integration with ML Frameworks • Scalability • Diverse fairness metrics • Conventional fairness metrics • Benefit metrics • Statistical tests
  • 190. Fairness-aware experimentation [Saint-Jacques and Sepehri, KDD’19 Social Impact Workshop] Imagine LinkedIn has 10 members. Each of them has 1 session a day. A new product increases sessions by +1 session per member on average. Both of these are +1 session / member on average! One is much more unequal than the other. We want to catch that.
  • 191. Case Study: Talent Search Varun Mithal, Girish Kathalagiri, Sahin Cem Geyik 191
  • 192. LinkedIn Recruiter ● Recruiter Searches for Candidates ○ Standardized and free-text search criteria ● Retrieval and Ranking ○ Filter candidates using the criteria ○ Rank candidates in multiple levels using ML models 192
  • 193. Modeling Approaches ● Pairwise XGBoost ● GLMix ● DNNs via TensorFlow ● Optimization Criteria: inMail Accepts ○ Positive: inMail sent by recruiter, and positively responded by candidate ■ Mutual interest between the recruiter and the candidate 193
  • 194. Feature Importance in XGBoost 194
  • 195. How We Utilize Feature Importances for GBDT ● Understanding feature digressions ○ Which feature that was impactful no longer is? ○ Should we debug feature generation? ● Introducing new features in bulk and identifying effective ones ○ Activity features for the last 3 hours, 6 hours, 12 hours, 24 hours introduced (costly to compute) ○ Should we keep all such features? ● Separating the factors that caused an improvement ○ Did an improvement come from a new feature, a new labeling strategy, or a new data source? ○ Did the ordering between features change? ● Shortcoming: A global view, not case by case 195
  • 196. GLMix Models ● Generalized Linear Mixed Models ○ Global: Linear Model ○ Per-contract: Linear Model ○ Per-recruiter: Linear Model ● Lots of parameters overall ○ For a specific recruiter or contract the weights can be summed up ● Inherently explainable ○ Contribution of a feature is “weight x feature value” ○ Can be examined in a case-by-case manner as well 196
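A tiny sketch of the "weight x feature value" reading of GLMix described above: the effective coefficient for a given recruiter/contract is the sum of the global, per-recruiter, and per-contract coefficients. The dictionary-based representation and names here are illustrative assumptions.

```python
def glmix_contributions(features, global_w, recruiter_w, contract_w):
    """Per-feature contributions in a GLMix-style model (sketch).
    features and the three weight dicts map feature name -> value / coefficient."""
    contributions = {}
    for name, value in features.items():
        effective_weight = (global_w.get(name, 0.0)
                            + recruiter_w.get(name, 0.0)
                            + contract_w.get(name, 0.0))
        contributions[name] = effective_weight * value     # "weight x feature value"
    # sort by absolute contribution so the biggest drivers come first
    return dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))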
  • 197. TensorFlow Models in Recruiter and Explaining Them ● We utilize the Integrated Gradients [ICML 2017] method ● How do we determine the baseline example? ○ Every query creates its own feature values for the same candidate ○ Query match features, time-based features ○ Recruiter affinity, and candidate affinity features ○ A candidate would be scored differently by each query ○ Cannot recommend a “Software Engineer” to a search for a “Forensic Chemist” ○ There is no globally neutral example for comparison! 197
  • 198. Query-Specific Baseline Selection ● For each query: ○ Score examples by the TF model ○ Rank examples ○ Choose one example as the baseline ○ Compare others to the baseline example ● How to choose the baseline example ○ Last candidate ○ Kth percentile in ranking ○ A random candidate ○ Request by user (answering a question like: “Why was I presented candidate x above candidate y?”) 198
  • 200. Example - Detailed 200 Feature Description Difference (1 vs 2) Contribution Feature………. Description………. -2.0476928 -2.144455602 Feature………. Description………. -2.3223877 1.903594618 Feature………. Description………. 0.11666667 0.2114946752 Feature………. Description………. -2.1442587 0.2060414469 Feature………. Description………. -14 0.1215354111 Feature………. Description………. 1 0.1000282466 Feature………. Description………. -92 -0.085286277 Feature………. Description………. 0.9333333 0.0568533262 Feature………. Description………. -1 -0.051796317 Feature………. Description………. -1 -0.050895940
  • 201. Pros & Cons ● Explains potentially very complex models ● Case-by-case analysis ○ Why do you think candidate x is a better match for my position? ○ Why do you think I am a better fit for this job? ○ Why am I being shown this ad? ○ Great for debugging real-time problems in production ● Global view is missing ○ Aggregate Contributions can be computed ○ Could be costly to compute 201
  • 202. Lessons Learned and Next Steps ● Global explanations vs. Case-by-case Explanations ○ Global gives an overview, better for making modeling decisions ○ Case-by-case could be more useful for the non-technical user, better for debugging ● Integrated gradients worked well for us ○ Complex models make it harder for developers to map improvement to effort ○ Use-case gave intuitive results, on top of completely describing score differences ● Next steps ○ Global explanations for Deep Models 202
  • 203. Case Study: Model Interpretation for Predictive Models in B2B Sales Predictions Jilei Yang, Wei Di, Songtao Guo 203
  • 204. Problem Setting ● Predictive models in B2B sales prediction ○ E.g.: random forest, gradient boosting, deep neural network, … ○ High accuracy, low interpretability ● Global feature importance → Individual feature reasoning 204
  • 206. Revisiting LIME ● Given a target sample x_k, approximate its prediction pred(x_k) by building a sample-specific linear model: pred(X) ≈ β_k1 X_1 + β_k2 X_2 + …, X ∈ neighbor(x_k) ● E.g., for company CompanyX: 0.76 ≈ 1.82 ∗ 0.17 + 1.61 ∗ 0.11 + … 206
  • 208. Piecewise Linear Regression Motivation: Separate top positive feature influencers and top negative feature influencers 208
  • 209. Impact of Piecewise Approach ● Target sample x_k = (x_k1, x_k2, ⋯) ● Top feature contributor ○ LIME: large magnitude of β_kj ⋅ x_kj ○ xLIME: large magnitude of β_kj− ⋅ x_kj ● Top positive feature influencer ○ LIME: large magnitude of β_kj ○ xLIME: large magnitude of negative β_kj− or positive β_kj+ ● Top negative feature influencer ○ LIME: large magnitude of β_kj ○ xLIME: large magnitude of positive β_kj− or negative β_kj+ 209
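A rough sketch of the piecewise idea, assuming pre-generated local samples and proximity weights: each feature gets one slope below the target value (β−) and one above it (β+), so negative-side and positive-side influencers can be separated. This is an interpretation of the slides, not the xLIME code.

```python
import numpy as np
from sklearn.linear_model import Ridge

def piecewise_local_fit(scores, samples, x_k, weights):
    """Fit one slope per feature on each side of the target value x_k.
    scores  : black-box predictions for the local samples
    samples : 2-D array of local samples
    weights : proximity weights for the samples
    Returns (beta_minus, beta_plus), per-feature slopes below/above x_k."""
    below = np.minimum(samples - x_k, 0.0)   # nonzero only where a feature is below x_k
    above = np.maximum(samples - x_k, 0.0)   # nonzero only where a feature is above x_k
    design = np.hstack([below, above])
    model = Ridge(alpha=1.0).fit(design, scores, sample_weight=weights)
    d = samples.shape[1]
    return model.coef_[:d], model.coef_[d:]
```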
  • 210. Localized Stratified Sampling: Idea Method: Sampling based on empirical distribution around target value at each feature level 210
  • 211. Localized Stratified Sampling: Method ● Sampling based on empirical distribution around target value for each feature ● For target sample x_k = (x_k1, x_k2, ⋯), sample values of feature j according to p_j(X_j) ⋅ N(x_kj, (α ⋅ s_j)²) ○ p_j(X_j): empirical distribution. ○ x_kj: feature value in target sample. ○ s_j: standard deviation. ○ α: interpretable range: tradeoff between interpretable coverage and local accuracy. ● In LIME, sampling is done according to N(x_j, s_j²). 211
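A sketch of the sampling rule p_j(X_j) ⋅ N(x_kj, (α ⋅ s_j)²): each feature is drawn independently from its empirical values in the training data, re-weighted by a Gaussian centered at the target value. This is an approximation of the described method, with the default α as an illustrative choice.

```python
import numpy as np

def localized_stratified_sample(X_train, x_k, alpha=0.5, n_samples=1000, rng=None):
    """Draw local samples feature-by-feature from the empirical distribution,
    re-weighted by a Gaussian of width alpha * std around the target value."""
    rng = rng or np.random.default_rng(0)
    X_train = np.asarray(X_train, dtype=float)
    n, d = X_train.shape
    samples = np.empty((n_samples, d))
    for j in range(d):
        col = X_train[:, j]
        s_j = col.std() + 1e-12
        # weights proportional to p_j(X_j) * N(x_kj, (alpha * s_j)^2)
        w = np.exp(-0.5 * ((col - x_k[j]) / (alpha * s_j)) ** 2)
        w = w / w.sum()
        samples[:, j] = rng.choice(col, size=n_samples, p=w)
    return samples
```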
  • 213. LTS LCP (LinkedIn Career Page) Upsell ● A subset of churn data ○ Total Companies: ~ 19K ○ Company features: 117 ● Problem: Estimate whether there will be upsell given a set of features about the company’s utility from the product 213
  • 215. 215
  • 217. Key Takeaways ● Looking at the explanation as contributor vs. influencer features is useful ○ Contributor: Which features end-up in the current outcome case-by-case ○ Influencer: What needs to be done to improve likelihood, case-by-case ● xLIME aims to improve on LIME via: ○ Piecewise linear regression: More accurately describes local point, helps with finding correct influencers ○ Localized stratified sampling: More realistic set of local points ● Better captures the important features 217
  • 218. Case Study: Relevance Debugging and Explaining @ Daniel Qiu, Yucheng Qian 218
  • 221. What Could Go Wrong? 221
  • 233. Teams ● Search ● Feed ● Comments ● People you may know ● Jobs you may be interested in ● Notification 233
  • 234. Case Study: Building an Explainable AI Engine @ Luke Merrick 234
  • 235. All your data Any data warehouse Custom Models Fiddler Modeling Layer Explainable AI for everyone APIs, Dashboards, Reports, Trusted Insights Fiddler’s Explainable AI Engine Mission: Unlock Trust, Visibility and Insights by making AI Explainable in every enterprise
  • 236. Credit Line Increase Fair lending laws [ECOA, FCRA] require credit decisions to be explainable Bank Credit Lending Model Why? Why not? How? ? Request Denied Query AI System Credit Lending Score = 0.3 Example: Credit Lending in a black-box ML world
  • 237. How Can This Help… Customer Support Why was a customer loan rejected? Bias & Fairness How is my model doing across demographics? Lending LOB What variables should they validate with customers on “borderline” decisions? Explain individual predictions (using Shapley Values)
  • 238. How Can This Help… Customer Support Why was a customer loan rejected? Bias & Fairness How is my model doing across demographics? Lending LOB What variables should they validate with customers on “borderline” decisions? Explain individual predictions (using Shapley Values)
  • 239. How Can This Help… Customer Support Why was a customer loan rejected? Bias & Fairness How is my model doing across demographics? Lending LOB What variables should they validate with customers on “borderline” decisions? Explain individual predictions (using Shapley Values) Probe the model on counterfactuals
  • 240. How Can This Help… Customer Support Why was a customer loan rejected? Why was the credit card limit low? Why was this transaction marked as fraud? Integrating explanations
  • 241. How Can This Help… Global Explanations What are the primary feature drivers of the dataset on my model? Region Explanations How does my model perform on a certain slice? Where does the model not perform well? Is my model uniformly fair across slices? Slice & Explain
  • 242. Model Monitoring: Feature Drift Investigate Data Drift Impacting Model Performance Time slice Feature distribution for time slice relative to training distribution
  • 243. How Can This Help… Operations Why are there outliers in model predictions? What caused model performance to go awry? Data Science How can I improve my ML model? Where does it not do well? Model Monitoring: Outliers with Explanations Outlier Individual Explanations
  • 244. Some lessons learned at Fiddler ● Attributions are contrastive to their baselines ● Explaining explanations is important (e.g. good UI) ● In practice, we face engineering challenges as much as theoretical challenges 244
  • 245. Recap ● Part I: Introduction and Motivation ○ Motivation, Definitions & Properties ○ Evaluation Protocols & Metrics ● Part II: Explanation in AI (not only Machine Learning!) ○ From Machine Learning to Knowledge Representation and Reasoning and Beyond ● Part III: Explainable Machine Learning (from a Machine Learning Perspective) ● Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective) ● Part V: XAI Tools on Applications, Lessons Learnt and Research Challenges 245
  • 246. Challenges & Tradeoffs 246 User Privacy Transparency Fairness Performance ? ● Lack of a standard interface for ML models makes pluggable explanations hard ● Explanation needs vary depending on the type of user who needs it and also the problem at hand. ● The algorithm you employ for explanations might depend on the use case, model type, data format, etc. ● There are trade-offs w.r.t. Explainability, Performance, Fairness, and Privacy.
  • 247. Explainability in ML: Broad Challenges Actionable explanations Balance between explanations & model secrecy Robustness of explanations to failure modes (Interaction between ML components) Application-specific challenges Conversational AI systems: contextual explanations Gradation of explanations Tools for explanations across AI lifecycle Pre & post-deployment for ML models Model developer vs. End user focused
  • 248. Thanks! Questions? ● Feedback most welcome :-) ○ freddy.lecue@inria.fr, krishna@fiddler.ai, sgeyik@linkedin.com, kenthk@amazon.com, vamithal@linkedin.com, ankur@fiddler.ai, luke@fiddler.ai, p.minervini@ucl.ac.uk, riccardo.guidotti@unipi.it ● Tutorial website: https://xaitutorial2020.github.io ● To try Fiddler, please send an email to info@fiddler.ai ● To try the Thales XAI Platform, please send an email to freddy.lecue@thalesgroup.com 248