In the context of the Internet of Things (IoT), intelligent systems (IS) are increasingly relying on Machine Learning (ML) techniques. Given the opaqueness of most ML techniques, however, humans have to rely on their intuition to fully understand the IS outcomes: helping them is the target of eXplainable Artificial Intelligence (XAI). Current solutions – mostly too specific, and simply aimed at making ML easier to interpret – cannot satisfy the needs of IoT, characterised by heterogeneous stimuli, devices, and data-types concurring in the composition of complex information structures. Moreover, Multi-Agent Systems (MAS) achievements and advancements are most often ignored, even when they could bring about key features like explainability and trustworthiness. Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.
Towards XMAS: eXplainability through Multi-Agent Systems
1. Towards XMAS: eXplainability through Multi-Agent Systems
Giovanni Ciatto∗ Roberta Calegari∗ Andrea Omicini∗
Davide Calvaresi†
∗Dipartimento di Informatica – Scienza e Ingegneria (DISI)
Alma Mater Studiorum – Università di Bologna
{giovanni.ciatto, roberta.calegari, andrea.omicini}@unibo.it
†University of Applied Sciences and Arts Western Switzerland
davide.calvaresi@hevs.ch
1st Workshop on Artificial Intelligence & Internet of Things
Rende, Italy – November 21, 2019
2. Motivation & Context
Next in Line. . .
1 Motivation & Context
2 State of the art
3 eXplainability through Multi-Agent Systems
4 Conclusions
3. Motivation & Context
Context
Some well-known facts:
Pervasive adoption of AI- and ML-powered IoT solutions worldwide, for automation, monitoring, and decision support
⇒ Several activities are (partially?) delegated to intelligent machines
! even activities from critical domains: finance, healthcare, etc.
Especially in ML, we let machines learn specific tasks from data, through the production of numeric predictors, a.k.a. black-boxes, instead of programming such tasks ourselves
Unfortunately, black-boxes tend to be inherently:
opaque w.r.t. the knowledge they acquire from data [12]
sub-optimal in performance, as they are trained to minimise errors
4. Motivation & Context
Motivation
Opaqueness of ML-based predictors brings several drawbacks [9, 12]:
difficulty in understanding what a black-box has learned from data
e.g. the “snowy background” problem [16]
difficulty in spotting “bugs” in what a numeric predictor has learned
because such knowledge is not explicitly represented
several failures of ML-based systems reported so far
e.g. black people classified as gorillas [6]
e.g. wolves recognised because of the snowy background [16]
e.g. unfair decisions in automated legal systems [20]
lawmakers recognised citizens’ right to meaningful explanations [18] about the logic behind automated decision making
e.g. in the General Data Protection Regulation (GDPR) [8]
5. Motivation & Context
The problem with ML-based AI
Trustworthiness
How can we trust machines we do not fully control?
↓
Controllability
How can we control machines we do not fully understand?
↓
Understandability
How can we understand distributed, numeric representations of knowledge?
6. Motivation & Context
The problem with ML-based IoIT
Other issues, made evident by the IoIT (Internet of Intelligent Things):
Lack of (full) automation
Training of ML predictors heavily depends on the experience of human data scientists
Centralisation of data & computation
Datasets cannot be easily moved & training can hardly be distributed
7. State of the art
Next in Line. . .
1 Motivation & Context
2 State of the art
3 eXplainability through Multi-Agent Systems
4 Conclusions
8. State of the art
The eXplainable AI (XAI) approach [10] I
The XAI community is nowadays facing such understandability issues
Focus on techniques easing the interpretation of numeric predictors
a.k.a. “opening the black box”, or looking into it [9]
[Figure from [12]]
In particular, most efforts are devoted to:
specific sorts of tasks, e.g. classification and regression
specific sorts of data, e.g. images, text, or tables
specific sorts of predictors, e.g. neural networks, SVMs
9. State of the art
The eXplainable AI (XAI) approach [10] II
Studying techniques such as saliency maps [5], feature importance [19], sensitivity analysis [13], and activation maximisation [22] (a minimal sketch of one such technique follows below)
[Figure from [16]]
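To make one of these techniques concrete, here is a minimal sketch of permutation feature importance on a generic scikit-learn classifier; the dataset and model are purely illustrative placeholders, not those used in the cited works.

```python
# Minimal sketch of permutation feature importance, one of the XAI
# techniques listed above. Dataset and model are illustrative only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# the larger the drop, the more the black-box relies on that feature.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```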
10. State of the art
Symbolic vs Numeric AI
ML is strictly a subset of AI
Several approaches fall under the Symbolic AI umbrella
often employed in expert, decision-support, or recommendation systems
There, knowledge is represented through symbolic languages, in the form of logic rules or facts (a minimal sketch follows after this list)
less prone to opacity issues
both machine- and human-interpretable
Main drawbacks:
less flexibility w.r.t. numeric approaches
symbolic knowledge is mostly handcrafted
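As a toy illustration of what symbolic knowledge means here, below is a minimal sketch of hand-crafted, human-readable decision rules; the loan scenario, thresholds, and rule format are hypothetical, chosen only to echo the vision example later in the deck.

```python
# Toy example of symbolic knowledge: hand-crafted if-then rules that are
# both machine-executable and human-readable. Scenario and thresholds
# are purely illustrative.
RULES = [
    ("income < 1500",                  lambda a: a["income"] < 1500,                         "deny_loan"),
    ("income >= 1500 & permanent job", lambda a: a["income"] >= 1500 and a["permanent_job"], "grant_loan"),
]

def decide(applicant):
    # Inference is just matching the first applicable rule, so every
    # decision can be traced back to an explicit, readable premise.
    for premise, test, conclusion in RULES:
        if test(applicant):
            return conclusion, premise
    return "unknown", None

print(decide({"income": 1200, "permanent_job": True}))  # ('deny_loan', 'income < 1500')
print(decide({"income": 2000, "permanent_job": True}))  # ('grant_loan', 'income >= 1500 & permanent job')
```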
11. State of the art
About Symbolic AI
Symbolic AI is largely employed in well-established research areas, such as:
Logic Programming (LP) [2]
Studying how symbolic rules may be employed as a programming language
Multi-Agent Systems (MAS) [21]
Studying complex systems composed of several autonomous, interacting entities called agents, reasoning or planning through LP
Argumentation [11]
Studying how agents may debate with one another despite opposing or contradictory points of view on some subject, or learn from one another
12. State of the art
Symbolic Knowledge Extraction (SKE)
Symbolic and numeric approaches to AI are no longer competing
conversely, they are complementary to each other
SKE is the bridge between the two worlds
Several works concerning SKE have been proposed in the literature
describing methods to extract decision rules/trees from black-boxes
most of which are surveyed in [1, 9] (a minimal sketch follows below)
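A minimal sketch of the simplest flavour of SKE: fitting a transparent surrogate (a decision tree) to the predictions of an opaque predictor. The dataset and models are placeholders, and the actual SKE algorithms surveyed in [1, 9] are considerably more refined than this pedagogical surrogate approach.

```python
# Minimal SKE sketch: approximate an opaque predictor with a transparent
# surrogate decision tree trained on the black-box's own predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)

# The surrogate learns to mimic the black-box, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The extracted knowledge is now explicit and human-readable.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```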
13. eXplainability through Multi-Agent Systems
Next in Line. . .
1 Motivation & Context
2 State of the art
3 eXplainability through Multi-Agent Systems
4 Conclusions
14. eXplainability through Multi-Agent Systems
Interpretation vs Explanation
These two terms are often wrongly considered synonyms [12, 17]
We thus adopt the following conceptual framework:
interpretation ≝ the cognitive activity of binding symbols/numbers to their meaning
explanation ≝ the social activity of easing someone’s interpretation, e.g. by providing examples or background knowledge
15. eXplainability through Multi-Agent Systems
XMAS Vision
We re-interpret ML-based systems as MAS where:
[Diagram: two agents, each wrapping an ML black-box and extracting rules from it via SKE, debate over a loan request (“Loan?” → “No”); asked “Why?”, the debate surfaces the rule “Income < 1.500 €”, followed by a counter-example of a loan granted despite income < 1.500 € thanks to a permanent job]
Assuming several data-sets exist
Agents wrap a black-box trained on a data-set
Agents extract rules from black-boxes
Debating protocols are employed by agents to:
compute decisions
explain decisions
Perfect metaphor for the IoIT
(a toy sketch of this idea follows below)
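The following toy sketch illustrates the vision above: each agent wraps its own (here purely hypothetical) black-box and holds the symbolic rules extracted from it, and a deliberately naive debate confronts their arguments to produce a decision together with its explanation. The agent names, rules, and majority-based resolution are illustrative only, not the protocol envisioned in the paper.

```python
# Toy sketch of the XMAS vision: agents wrap black-boxes, keep the rules
# extracted via SKE, and debate to compute and explain a decision.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    premise: str                  # human-readable condition
    conclusion: str               # "grant" or "deny"
    test: Callable[[dict], bool]  # executable counterpart of the premise

@dataclass
class Agent:
    name: str
    rules: List[Rule]             # rules extracted from this agent's black-box

    def argue(self, applicant: dict) -> List[Rule]:
        # Every applicable rule becomes an argument (conclusion + reason).
        return [r for r in self.rules if r.test(applicant)]

def debate(agents: List[Agent], applicant: dict):
    arguments = [arg for a in agents for arg in a.argue(applicant)]
    # Naive resolution: the side with more arguments wins; its premises
    # double as the explanation of the decision.
    grant = [a for a in arguments if a.conclusion == "grant"]
    deny = [a for a in arguments if a.conclusion == "deny"]
    winner = grant if len(grant) > len(deny) else deny
    if not winner:
        return "unknown", []
    return winner[0].conclusion, [a.premise for a in winner]

bank = Agent("bank", [Rule("income < 1.500 €", "deny",
                           lambda x: x["income"] < 1500)])
employer = Agent("employer", [Rule("permanent job", "grant",
                                   lambda x: x["permanent_job"])])

decision, reasons = debate([bank, employer],
                           {"income": 1200, "permanent_job": True})
print(decision, "because", reasons)  # deny because ['income < 1.500 €']
```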
16. eXplainability through Multi-Agent Systems
XMAS Vision – Multiple Expected Advantages
Explanations are interactive in nature
Multiple agents ↔ multiple perspectives
similarly to ensemble techniques
Symbols are a lingua franca for knowledge (sharing)
predictions / explanations from different predictors can be combined
Symbolic, aggregated knowledge could be moved among agents
even when the original datasets cannot
→ thus improving distribution while preserving privacy
The future: agents teaching each other, through explanations
by exchanging symbolic knowledge
→ thus improving automation in training
17. eXplainability through Multi-Agent Systems
Paper contribution
i* modelling of this research line
describing the foreseeable goals & activities, and their dependencies
18. Conclusions
Next in Line. . .
1 Motivation & Context
2 State of the art
3 eXplainability through Multi-Agent Systems
4 Conclusions
19. Conclusions
Summing up
ML-powered AI is everywhere, but it is not a silver bullet
Increasing demand for explainability of ML-based systems
XAI mostly focuses on interpretability, a.k.a. opening the black-boxes
whereas explainability requires interaction
Idea: extract symbolic knowledge from black-boxes and use debates to explain it
This is expected to bring several benefits, even beyond interpretability
20. Conclusions
Future Works
Comparison, assessment, and generalisation of SKE algorithms
development of software libraries for SKE
e.g. extending Scikit-Learn [14]
Technological integration of SKE with symbolic frameworks
e.g. the tuProlog engine [7]
Development, validation, and simulation of debating protocols
development of simulation facilities
e.g. extending the Alchemist meta-simulator [15]
development of enabling infrastructures for real-world experiments
e.g. extending the TuSoW technology [4]
e.g. making them robust & trustworthy through blockchain technologies [3]
22. Bibliography
References
[1] Robert Andrews, Joachim Diederich, and Alan B. Tickle. Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8(6):373–389, December 1995.
[2] Krzysztof R. Apt. The logic programming paradigm and Prolog. CoRR, cs.PL/0107013, 2001.
[3] Giovanni Ciatto, Stefano Mariani, and Andrea Omicini. Blockchain for trustworthy coordination: A first study with Linda and Ethereum. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 696–703, December 2018.
[4] Giovanni Ciatto, Lorenzo Rizzato, Andrea Omicini, and Stefano Mariani. TuSoW: Tuple spaces for edge computing. In The 28th International Conference on Computer Communications and Networks (ICCCN 2019), Valencia, Spain, August 2019. IEEE.
[5] R. Cong, J. Lei, H. Fu, M. Cheng, W. Lin, and Q. Huang. Review of visual saliency detection with comprehensive information. IEEE Transactions on Circuits and Systems for Video Technology, 29(10):2941–2959, October 2019.
[6] Kate Crawford. Artificial intelligence’s white guy problem. The New York Times, 25, 2016.
[7] Enrico Denti, Andrea Omicini, and Roberta Calegari. tuProlog: Making Prolog ubiquitous. ALP Newsletter, October 2013.
[8] General Data Protection Regulation (GDPR). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679. Online; accessed on October 11, 2019.
[9] Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and Fosca Giannotti. A survey of methods for explaining black box models. CoRR, abs/1802.01933, 2018.
[10] David Gunning. Explainable artificial intelligence (XAI). Funding Program DARPA-BAA-16-53, DARPA, 2016.
[11] Dionysios Kontarinis. Debate in a multi-agent system: multiparty argumentation protocols. PhD thesis, Université René Descartes, Paris V, 2014. https://tel.archives-ouvertes.fr/tel-01345797.
[12] Zachary Chase Lipton. The mythos of model interpretability. CoRR, abs/1606.03490, 2016.
[13] Julian D. Olden and Donald A. Jackson. Illuminating the “black box”: a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154(1):135–150, 2002.
[14] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[15] Danilo Pianini, Sara Montagna, and Mirko Viroli. Chemical-oriented simulation of computational systems with ALCHEMIST. Journal of Simulation, 2013.
[16] Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?” Explaining the predictions of any classifier. CoRR, abs/1602.04938, 2016.
[17] Avi Rosenfeld and Ariella Richardson. Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems, May 2019.
[18] Andrew D. Selbst and Julia Powles. Meaningful information and the right to explanation. International Data Privacy Law, 7(4):233–242, December 2017.
[19] Marina M.-C. Vidovic, Nico Görnitz, Klaus-Robert Müller, and Marius Kloft. Feature importance measure for non-linear learning algorithms. CoRR, abs/1611.07567, 2016.
[20] Rebecca Wexler. When a computer program keeps you in jail: How computers are harming criminal justice. The New York Times, 2017.
[21] Michael Wooldridge. An Introduction to MultiAgent Systems. Wiley Publishing, 2nd edition, 2009.
[22] Luisa M. Zintgraf, Taco Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network decisions: Prediction difference analysis. ArXiv, abs/1702.04595, 2017.