The document discusses developing a moral reasoning system for machines that combines top-down and bottom-up approaches. It presents a weighted association network that calculates morality scores for actions from beliefs about how those actions facilitate moral goals such as autonomy, non-maleficence, and beneficence. Experimental results showed the system matching the decisions of medical ethics experts. The discussion covers limitations such as the need for emotional intelligence and personalized care. Future work could connect the moral reasoning to a model of emotional intelligence to make decisions more human-like.
2. Outline of this presentation
• Background
• Existing approaches
• Our moral reasoning system
• Results
• Discussion
• Future work
3. Background
• Machines interact more with people
• Machines are becoming more autonomous
• Rosalind Picard (1997): ‘‘The greater the freedom of
a machine, the more it will need moral standards.’’
• We should ensure that machines neither harm us nor
threaten our autonomy
4. Existing approaches
Bottom-up: Casuistry
• Look at previous (similar) cases and use statistical methods to
make a moral decision (sketched in code below)
• Based on the internet (Rzepka & Ariki, 2005)
• But: such a system will never be better than humans
• Based on training examples in a neural network
• But: reclassification is problematic (Guarini, 2006)
• Conclusion: Casuistry alone is not enough (Guarini, 2006)
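To make the casuistry idea concrete, here is a minimal Python sketch of case-based moral decision-making. The feature encoding, similarity measure, and case base are illustrative assumptions of ours, not taken from the cited systems of Rzepka & Ariki (2005) or Guarini (2006).

```python
# Minimal sketch of bottom-up casuistry: decide a new case by analogy
# with the most similar prior cases, via a k-nearest-neighbor vote.

def similarity(a, b):
    # Count the features on which the two cases agree.
    shared = set(a) & set(b)
    return sum(a[f] == b[f] for f in shared)

def casuistic_decision(new_case, case_base, k=3):
    # Rank prior cases by similarity, then take a majority vote
    # over the verdicts of the k closest precedents.
    ranked = sorted(case_base,
                    key=lambda c: similarity(new_case, c["features"]),
                    reverse=True)
    votes = [c["verdict"] for c in ranked[:k]]
    return max(set(votes), key=votes.count)

case_base = [
    {"features": {"harm": True,  "consent": False}, "verdict": "impermissible"},
    {"features": {"harm": True,  "consent": True},  "verdict": "permissible"},
    {"features": {"harm": False, "consent": True},  "verdict": "permissible"},
]
print(casuistic_decision({"harm": False, "consent": False}, case_base))
```

Guarini's objection shows up directly in this sketch: the decision is only as good as the precedents and their encoding, and reclassifying a case means changing the case base rather than consulting a principle.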
5. Existing approaches
Top-down: Two competitors
• Utilitarianism:
• Try to maximize the total amount of utility in the world
• Ethics of rights and duties
• Individuals have rights and duties
• Learn a rule-based decision procedure via machine learning
to make moral decisions (Anderson, Anderson & Armen,
2006)
• Are these two competitors so different?
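One way to see why the two competitors may not be so different: both can be phrased as evaluation functions over candidate actions, which is also the shape of the weighted network introduced later. The following Python sketch is our own illustration with invented actions and payoffs, not the procedure of Anderson, Anderson & Armen (2006).

```python
# Illustrative contrast between the two top-down schools, with toy data.
# Both reduce to evaluating candidate actions against a standard.

ACTIONS = ["remind_patient", "override_refusal"]
PEOPLE = ["patient", "caregiver"]

# Toy utilities per (action, person); purely invented numbers.
UTILITY = {
    ("remind_patient", "patient"): 1, ("remind_patient", "caregiver"): 1,
    ("override_refusal", "patient"): -2, ("override_refusal", "caregiver"): 2,
}

def utilitarian_choice(actions):
    # Utilitarianism: maximize the total utility over everyone affected.
    return max(actions, key=lambda a: sum(UTILITY[(a, p)] for p in PEOPLE))

def duty_based_choice(actions):
    # Rights and duties: discard actions that violate a duty (here, the
    # toy duty "never override a patient's refusal"), then pick what remains.
    permitted = [a for a in actions if a != "override_refusal"]
    return permitted[0] if permitted else None

print(utilitarian_choice(ACTIONS))  # remind_patient (total 2 vs 0)
print(duty_based_choice(ACTIONS))   # remind_patient
```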
6. Hybrid approach:
Top-down & Bottom-up
• Wallach, Franklin & Allen (2010):
"Approach Anderson, Anderson & Armen (2006) can’t
handle complexity human decisions”
• Combine top-down and bottom-up:
A neural network combined with top-down processes to
interpret the situation and predict the possible results of actions
• But: not fully implemented yet
7. Domain: Medical Ethics
• Within SELEMCA, we develop caredroids
• Patients are in a vulnerable position. The moral behavior
of the robot is extremely important.
We focus on Medical Ethics
• Conflicts between:
1. Beneficence
2. Non-maleficence
3. Autonomy
8. Our moral reasoning system
• Combination of top-down and bottom-up:
Weighted association network
9. Calculating the morality of an action
• Morality(Action) =
ΣGoal( Belief(facilitates(Action, Goal)) × Ambition(Goal) )

Moral Goal        Ambition level
Non-Maleficence   0.74
Beneficence       0.52
Autonomy          1

• IF Belief(facilitates(Action, autonomy)) = max_value
THEN Morality(Action) = Morality(Action) + 2
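The calculation above can be written out directly. This is a minimal Python sketch of the slide's formula; it assumes belief values lie in [0, 1] so that max_value = 1, and the example beliefs are invented for illustration.

```python
# Sketch of the slide's morality calculation. Ambition levels come from
# the table above; the [0, 1] belief range (so max_value = 1.0) and the
# example beliefs are our assumptions.

AMBITION = {"non_maleficence": 0.74, "beneficence": 0.52, "autonomy": 1.0}
MAX_VALUE = 1.0  # assumed upper bound of Belief(facilitates(Action, Goal))

def morality(belief):
    # Morality(Action) = sum over goals of Belief * Ambition(Goal).
    score = sum(belief[goal] * AMBITION[goal] for goal in AMBITION)
    # Extra rule from the slide: an action that maximally facilitates
    # autonomy gets a +2 bonus.
    if belief["autonomy"] == MAX_VALUE:
        score += 2
    return score

# Invented beliefs for one candidate action:
print(morality({"non_maleficence": 0.4, "beneficence": 0.9, "autonomy": 1.0}))
# 0.4*0.74 + 0.9*0.52 + 1.0*1.0 + 2 ≈ 3.764
```

With these weights the maximum score without the bonus is 0.74 + 0.52 + 1 = 2.26, while any action receiving the +2 bonus scores at least 3.0, so such an action always wins. This matches the speaker note at the end: the rule was added because fully autonomous decisions may never be questioned.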
11. Discussion
• Often there is no consensus on the correct option
in moral dilemmas
• Dependent on context / application
• Entertainment: ‘bad’ characters can be enjoyable
• Companion robot / Virtual friend: the morality of an action
is one of several influences
• Decision-support: Strict moral code
• Does full autonomy exist?
• Daniel Dennett (2006) “AI makes philosophy honest”
12. Robots that behave ethically
better than humans
• Human behavior is typically far from being morally ideal
• Humans are not very good at making impartial decisions
• Machines can be good at making impartial decisions
• Machines could behave ethically better than humans
• Machines may inspire us to behave ethically better ourselves
(Anderson & Anderson, 2010)
13. Limitations of moral reasoning
• Wallach, Franklin & Allen (2010): "even agents who
adhere to a deontological ethic or are utilitarians may
require emotional intelligence as well as other 'supra-
rational' faculties, such as a sense of self and a
theory of mind"
• Tronto (1993): “Care is only thought of as good care
when it is personalized”
• Moral reasoning alone results in very cold decision-
making, framed only in terms of rights and duties
14. Add Emotional Intelligence
• Previously, we developed Silicon Coppelia, a model
of emotional intelligence. This model can also be projected
onto others, providing a Theory of Mind
Connect Moral Reasoning to Silicon Coppelia
• More human-like moral reasoning
• Personalize moral decisions and communication
about moral reasoning
Speaker notes:
• Importance of the goals is based on the literature. The rule was added because fully autonomous decisions may never be questioned.
• Robot in different hospital wards: on the post-natal ward the robot is great; on the oncology ward the robot is blunt, behaves inappropriately, is irritating, and gets kicked.
• Autonomy: free from internal / external constraints.
• Dennett: hidden mechanisms can be overlooked if you do not make them explicit.