We present an integration of rational moral reasoning with emotional intelligence. The moral reasoning system alone could not simulate the different human reactions to the Trolley dilemma and the Footbridge dilemma. However, the combined system can simulate these human moral decision making processes. The introduction of affect in rational ethics is important when robots communicate with humans in a practical context that includes moral relations and decisions. Moreover, the combination of ratio and affect may be useful for applications in which human moral decision making behavior is simulated, for example, when agent systems or robots provide healthcare support.
2. Outline of this presentation
• SELEMCA
• Moral reasoning system
• Emotional Intelligence: Silicon Coppelia
• Moral reasoning + Silicon Coppelia = Moral Coppelia
• Future Work
Delft,12-12-2012 HUMANS IN SERVICE: DESIGN CHALLENGES 2
3. SELEMCA
• Develop ‘Caredroids’: Robots or Computer Agents
that assist Patients and Care-deliverers
• Focus on patients who stay in long-term care facilities
6. Possible functionalities
• Care-broker: Find care that matches the patient's needs
• Companion: Become friends with the patient to
prevent loneliness and activate the patient
• Coach: Assist the patient in making healthy choices:
Exercising, Eating healthy, Taking medicine, etc.
7. How people perceive caredroids
• People perceive caredroids in terms of:
• Affordances
• Ethics
• Aesthetics
• Realism
Delft, April 6th 2011 Kick-off CRISP 7
8. Aesthetics / Realism Caredroids
• Uncanny valley: Too human-like makes it eerie
• We associate almost-human with death: Zombies etc.
Design robots whose realism in appearance matches the
realism of their behavior
9. Affordances Caredroids
• Make sure the caredroid is a useful tool for the
patients and the care deliverers
Make sure the caredroid is an expert in its task
Make sure the caredroid personalizes its behavior
to the user
10. Ethics Caredroids
• Make the robot behave ethically, so that patients
perceive the robot as ethically good
• Patients are in a vulnerable position.
Moral behavior of robot is extremely important.
We focus on Medical Ethics
• Conflicts between:
1. Autonomy
2. Beneficence
3. Non-maleficence
4. Justice
11. Background Machine Ethics
• Machines are becoming more autonomous
Rosalind Picard (1997): ‘‘The greater the freedom of
a machine, the more it will need moral standards.’’
• Machines interact more with people
We should ensure that machines do not harm us or
threaten our autonomy
• Machine ethics is important for establishing
users' trust
12. Moral reasoning system
We developed a rational moral reasoning system that
is capable of balancing between conflicting moral goals.
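The balancing described above can be sketched as a weighted sum over the four medical-ethics principles listed earlier (Autonomy, Beneficence, Non-maleficence, Justice). This is a minimal illustration, not the actual SELEMCA implementation; the weights and action scores below are invented for the example.

```python
# Sketch: balance conflicting moral goals via a weighted sum.
# Weights and scores are illustrative assumptions, not values
# from the actual moral reasoning system.

PRINCIPLES = ["autonomy", "beneficence", "non_maleficence", "justice"]

# Hypothetical importance weights per medical-ethics principle.
WEIGHTS = {"autonomy": 0.8, "beneficence": 1.0,
           "non_maleficence": 1.2, "justice": 0.9}

def moral_score(action_scores):
    """Score an action by how well it satisfies each principle (-1..1)."""
    return sum(WEIGHTS[p] * action_scores[p] for p in PRINCIPLES)

def choose(actions):
    """Pick the action with the highest weighted moral score."""
    return max(actions, key=lambda name: moral_score(actions[name]))

# Example: reminding a patient to take medicine reduces autonomy a
# little but serves beneficence and non-maleficence.
actions = {
    "remind_patient": {"autonomy": -0.3, "beneficence": 0.8,
                       "non_maleficence": 0.5, "justice": 0.0},
    "do_nothing":     {"autonomy": 0.5, "beneficence": -0.6,
                       "non_maleficence": -0.4, "justice": 0.0},
}
print(choose(actions))  # -> remind_patient
```

The point of the sketch is that no principle vetoes an action outright; conflicting principles trade off against each other through the weights.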
13. Limitations moral reasoning
• Moral reasoning alone results in very cold decision-
making, framed only in terms of rights and duties
• Wallach, Franklin & Allen (2010): "even agents who
adhere to a deontological ethic or are utilitarians may
require emotional intelligence as well as other 'supra-
rational' faculties, such as a sense of self and a
theory of mind"
• Tronto (1993): “Care is only thought of as good care
when it is personalized”
14. Problem: Not Able to Simulate
Trolley Dilemma vs Footbridge Dilemma
• Greene et al. (2001) find that moral dilemmas vary
systematically in the extent to which they engage
emotional processing and that these variations in
emotional engagement influence moral judgment.
• Their study was inspired by the difference between
two variants of an ethical dilemma:
Trolley dilemma (moral-impersonal)
Footbridge dilemma (moral-personal)
15. Solution: Add Emotional Processing
• Previously, we developed Silicon Coppelia, a model
of emotional intelligence.
• Silicon Coppelia can be projected onto others to form a Theory of Mind
• It learns from experience, which enables Personalization
Connect Moral Reasoning to Silicon Coppelia
• More human-like moral reasoning
• Personalize moral decisions and communication
about moral reasoning
18. Silicon Coppelia + Moral Reasoning:
Decisions based on:
1. Rational influences
• Does the action help me reach my goals?
2. Affective influences
• Does the action reflect the Involvement I feel towards the user?
• Does the action reflect the Distance I feel towards the user?
3. Moral reasoning
• Is this action morally good?
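The three influence types above can be blended into a single action value. A minimal sketch, assuming a simple linear combination with equal default weights; the function name, weights, and feature values are illustrative, not part of the published model.

```python
# Sketch: combine rational, affective, and moral influences into one
# action value. Weights and inputs are illustrative assumptions.

def action_value(rational, involvement_fit, distance_fit, moral,
                 w_rat=1.0, w_aff=1.0, w_mor=1.0):
    """Blend the three influence types (each input in -1..1).

    rational:        does the action help reach my goals?
    involvement_fit: does it reflect the Involvement felt towards the user?
    distance_fit:    does it reflect the Distance felt towards the user?
    moral:           is the action morally good?
    """
    affective = (involvement_fit + distance_fit) / 2.0
    return w_rat * rational + w_aff * affective + w_mor * moral

# An action that serves the agent's goals, fits the felt involvement,
# and is morally acceptable scores high overall.
v = action_value(rational=0.6, involvement_fit=0.7,
                 distance_fit=0.1, moral=0.8)
print(round(v, 2))  # -> 1.8
```

Adjusting the weights shifts the agent between a purely rational, a purely affective, and a purely moral decision maker.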
19. Results Moral Reasoning +
Silicon Coppelia
                   Kill 1 to Save 5   Do Nothing
Moral system
  Trolley                 X
  Footbridge              X
Moral Coppelia
  Trolley                 X
  Footbridge                               X
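Why affect flips the footbridge decision but not the trolley decision can be sketched as follows. Per Greene et al. (2001), "personal" dilemmas engage stronger emotional processing; the sketch models this as an affective penalty on up-close personal harm. The numeric values are illustrative assumptions, not outputs of the actual system.

```python
# Sketch: a utilitarian gain for saving five, minus an affective
# penalty for inflicting personal harm. Numbers are illustrative.

def decide(save_five=0.9, personal_harm_penalty=0.0):
    """Choose between sacrificing one person and doing nothing."""
    kill_one = save_five - personal_harm_penalty
    do_nothing = 0.0
    return "Kill 1 to Save 5" if kill_one > do_nothing else "Do Nothing"

# Moral reasoning alone: no affective penalty, so both dilemmas
# get the same utilitarian answer.
print(decide(personal_harm_penalty=0.0))  # -> Kill 1 to Save 5

# With affect: the impersonal trolley carries a small penalty,
# the personal footbridge a large one.
print(decide(personal_harm_penalty=0.2))  # Trolley   -> Kill 1 to Save 5
print(decide(personal_harm_penalty=1.5))  # Footbridge -> Do Nothing
```

Without the affective term the two dilemmas are indistinguishable, which is exactly the limitation of the moral reasoning system on its own.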
20. Discussion
• The introduction of affect in rational ethics is
important when robots communicate with humans
in a practical context that includes moral relations
and decisions.
• Moreover, the combination of ratio and affect may be
useful for applications in which human moral
decision making behavior is simulated
for example, when agent systems or robots provide
healthcare support, or in entertainment settings
21. Future Work
• Persuasive Technology
Moral dilemmas about Helping vs Manipulating
• Integrate current system with
Health Care Intervention models
• Parametric design to adapt
Appearance and Behavior to User
Science + Health Care + Creative Industry. Triangle: Patient / Care-deliverer / Robot. The robot takes over repetitive tasks, so that the care-deliverer has time for medical and social tasks.
All functionalities can be in the same robot; the same functionality can be in different kinds of robots (physical robot, agent, app).
Robots are often already very human-like in appearance, but not yet in behavior
We develop Care-droids: Care-agents, Care-robots that assist care-deliverers and patients
Results: The system is able to match the decisions of medical-ethics experts; its behavior matches expert medical ethics.
Wallach, Franklin & Allen: ratio/logic alone is not enough for ethical behavior towards humans. Silicon Coppelia adds emotional intelligence, theory of mind, and personalization (through adaptation / learning from interaction).
Moral-personal dilemmas are more emotionally engaging.
A model from media perception, used to let the medium perceive the user. Go through the model; also explain emotion regulation. Moral and affective decisions: personalized, user-centered.
Robot (my models) vs. human: multiple choice & emotions. "What did this little man think of you?" Participants see no difference; you could call this a successful Turing Test.
The Moral Reasoning system alone could not simulate the difference between the trolley dilemma and the footbridge dilemma. Combined with Silicon Coppelia, it could simulate these human moral decision making processes.
1: The robot tries to convince an elder to take pills. 2: Entertainment: bad can also be interesting.
Persuasion: make use of history, social network, gamification, and communication styles. Health Care Intervention Models. Increase affordances.