From Artificial Morality to NERD: Models, Experiments, & Robust Reflective Equilibrium

Peter Danielson
Centre for Applied Ethics, University of British Columbia
pad@ethics.ubc.ca


Abstract

Artificial ethics deploys the tools of computational science and social science to improve ethics, conceived as pro-social engineering. This paper focuses on three key techniques used in the three stages of the research program of the Norms Evolving in Response to Dilemmas (NERD) research group:
1. Artificial Morality. Technique: Moral functionalism -- principles expressed as parameterized strategies and tested against a simplified game theoretic goal.
2. Evolving Artificial Moral Ecologies. Technique: Genetic programming, agent-based modeling, and evolutionary game theory (replicator dynamics).
3. NERD (Norms Evolving in Response to Dilemmas): Computer-mediated ethics for real people, problems, and clients. Technique: An experimental platform to test and improve ethical mechanisms.

Artificial Ethics

We take ethics to be a mixture of lore and craft, where the lore aspires to social science and the craft to social engineering. We take the goal of artificial ethics to be pragmatic: to deploy the powerful tools of computational science and social science (broadly taken to include cognitive and evolutionary social science) to drive the science and engineering of ethics.

More concretely, this goal has three stages: 1. Demystify ethics by treating moral problems as functional problems. 2. Refocus on realistic mixed populations instead of utopian monocultures of "ethical" and rational agents. 3. Move from focusing on the goal of constructing ethical machines to constructing computationally augmented environments for people to manage the democratic ethics of high technology, including negotiating the possible introduction of ethical machines.
Artificial Morality

In a book and series of papers (Danielson 1992; Danielson 1995a) the Artificial Morality (AM) project modeled the account of ethics developed by David Gauthier (Gauthier 1977; Gauthier 1984; Gauthier 1988a; Gauthier 1988b; Danielson 1991; Gauthier 1991; Danielson 2001a). Briefly, Gauthier's proposal for ethics began with the standard decision and game theoretic account of rational choice and recommended additional axioms constraining rational agents to conditional cooperation in prisoner's dilemmas. It became clear, when we modeled Gauthier's proposed strategy as a functioning agent opposed by other types of agent, that it was either not rational or not very moral. Worse, the strategy we introduced that does better than Gauthier's proposal by demanding full reciprocity faces computational limitations. We used a simple diagonal result to show that there is no unique reciprocal strategy (Danielson 1995b; Danielson 2002). This result mirrors, at the level of agents, game theory's folk theorem that warns us that there are infinitely many norms in equilibria, or, in (Boyd and Richerson 1992)'s memorable phrase, "Punishment allows the evolution of cooperation (or anything else) in sizable groups."
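
To make the moral-functionalist technique concrete, here is a minimal sketch in Python (the project's original agents were hand crafted in Prolog) of principles expressed as parameterized strategies and scored in a one-shot prisoner's dilemma with transparent play. The type labels, payoff values, and the shortcut of recognizing partners by label, which deliberately sidesteps the self-reference and computational problems just mentioned, are illustrative assumptions rather than the AM implementation.

```python
# Minimal sketch (assumed types and payoffs, not the AM project's Prolog agents):
# moral functionalism treats principles as parameterized strategies and scores
# them in a one-shot Prisoner's Dilemma with transparent play.  Recognition is
# by type label, which dodges the self-reference problems discussed in the text.

PD = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
      ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def move(me, other):
    """Return me's move against other, given both type labels."""
    if me == "UC":                                    # unconditional cooperator
        return "C"
    if me == "UD":                                    # unconditional defector (free-rider)
        return "D"
    if me == "CC":                                    # Gauthier-style conditional cooperator:
        return "C" if other != "UD" else "D"          # cooperates with any cooperator
    if me == "RC":                                    # reciprocal cooperator: demands full
        return "C" if other in ("CC", "RC") else "D"  # reciprocity, so it exploits UC
    raise ValueError(me)

def payoff(a, b):
    return PD[(move(a, b), move(b, a))]

if __name__ == "__main__":
    types = ["UC", "UD", "CC", "RC"]
    for a in types:
        print(a, {b: payoff(a, b)[0] for b in types})
```

On these assumed payoffs, RC scores at least as well as CC against every type and strictly better against unconditional cooperators, which is the sense in which a strategy demanding full reciprocity does better than Gauthier's proposal.
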
Evolutionary Artificial Morality

AM tested tiny populations with equal numbers of agents selected for theoretical reasons and hand crafted in Prolog. Clearly there are several ways to improve this method: allow arbitrary populations of agents generated for other reasons. The second stage of the research project, Evolutionary Artificial Morality (EAM), opened up the generator and test of agents. Genetic programming was used to generate agents of unexpected kinds (Koza 1992; Danielson 1998a) and agent-based simulation explored local interaction effects. Later, replicator dynamics was used to simplify the population models (Skyrms 1996; Danielson 1998b). The main result of this project was to confirm the persistence of mixed populations for most interesting public goods problems.[1] That is, one should expect neither rational free-riders nor "ethical" altruistic unconditional cooperators ever to form pure populations. Only reciprocal cooperators (of some flavor or another) can stabilize cooperation, and they cannot eliminate either unconditional cooperators or defecting free-riders.

[1] We avoid "social dilemmas" as it reinforces the binary thinking that overlooked the typical tri-partite mix of strategies for so long.

These results have been recently generalized and strengthened by the weight of experimental and field data (Ostrom and Walker 2003; Kurzban and Houser 2005). The persistent mix of human cooperative types, which we might label moral polymorphism, is highly significant for ethics. Not only is reciprocity underrepresented in the theory of ethics, much time has been spent on the debate between the "ethical point of view" and the moral skeptic, which roughly map onto the two minority positions, unconditional cooperation and free riding. Much work remains to get ethics in touch with its real world base: the majority population of reciprocal cooperators and their puzzling moral emotions.
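
The replicator-dynamics technique can be illustrated with a toy sketch of the three-way population just described: unconditional cooperators (UC), defecting free-riders (UD), and reciprocal cooperators (RC) who pay a small monitoring cost. The payoff matrix, the cost, and the starting shares below are assumptions chosen for illustration, not the EAM models, so the run demonstrates the method rather than the published results.

```python
# Toy replicator-dynamics sketch (assumed payoffs, not the EAM models): three
# types, order [UC, UD, RC]; RC pays a monitoring cost of 0.2 and defects only
# against UD.  Shares are updated with the standard discrete replicator map.

import numpy as np

A = np.array([[2.0, 0.0, 2.0],    # row player's payoff against column player
              [3.0, 1.0, 1.0],
              [1.8, 0.8, 1.8]])

def replicator_step(x, A):
    """x_i' = x_i * f_i / (mean fitness), with fitness f = A @ x."""
    f = A @ x
    return x * f / (x @ f)

x = np.array([0.3, 0.3, 0.4])     # initial population shares (UC, UD, RC)
for t in range(201):
    if t % 50 == 0:
        print(f"t={t:3d}  UC={x[0]:.3f}  UD={x[1]:.3f}  RC={x[2]:.3f}")
    x = replicator_step(x, A)
```

Whether such a run settles into a stable mixture, cycles among the three types, or collapses to defection depends on the assumed numbers; the point of the sketch is the method, which the EAM project applied to much richer agent populations.
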
NERD-I: Deep Moral Data

Norms Evolving in Response to Dilemmas (NERD) is part of the trend to use data from lab experiments and field studies to inform and constrain theory and simulation. Supported by Genome Canada, and working with several large genomics labs, the NERD-I survey has managed to collect and mine a rich trove of data on how people make serious ethical decisions about problems and opportunities driven by technological and social change (Ahmad et al. in press; Danielson et al. in press; Ahmad et al. 2005). The NERD-I surveys span human bioethics (genetic testing for β-Thalassaemia) and environmental ethics (salmon genomics and aquaculture).

[Figure 1: Average advisor visits per advisor under the feedback and non-feedback conditions. Feedback makes little difference.]

NERD-I is a new form of artificial ethics. Inspired by computer games like Sid Meier's Civilization and Will Wright's Sim* franchise, NERD-I offers constrained choice in a rich context featuring plentiful expert and lay advice. While we do not claim that NERD-I responses are canonical, they have more prima facie normative weight than much of the data gathered by surveys, polls, and focus groups. Moreover, by unifying disparate ethical domains (bioethics and environmental ethics are markedly distinct cultures) NERD-I extends the role of formal methods in the science of ethics. For example, contrary to the common criticism that the framing effect undermines data in ethics, we have shown that two different powerful framing stimuli have virtually no effect on responses and advice seeking (see Figure 1).
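
For concreteness, here is a hypothetical analysis sketch of the kind of check summarized in Figure 1: do participants who receive social feedback visit advisors at a different rate than those who do not? The toy data, column names, and the choice of a Welch t-test are assumptions for illustration, not the NERD-I data format or analysis pipeline.

```python
# Hypothetical sketch of a feedback vs. non-feedback comparison of advisor
# visits; the numbers and column names are made up for illustration.

import pandas as pd
from scipy.stats import ttest_ind

df = pd.DataFrame({
    "condition": ["feedback"] * 4 + ["non_feedback"] * 4,
    "advisor_visits": [0.22, 0.31, 0.25, 0.28, 0.24, 0.30, 0.21, 0.33],
})

print(df.groupby("condition")["advisor_visits"].mean())    # mean visits per condition

fb = df.loc[df["condition"] == "feedback", "advisor_visits"]
nf = df.loc[df["condition"] == "non_feedback", "advisor_visits"]
t, p = ttest_ind(fb, nf, equal_var=False)                  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")                         # a large p is consistent with "virtually no effect"
```
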
NERD-II: Robust Reflective Equilibrium

Nonetheless, NERD-I is very much a social science experiment. Participants are really subjects, given feedback or not, blocked from moving back to earlier answers, and timed at every choice point. Arguably, democratic decision support should have other features.

While we need to know more about our norms, more data cannot solve – indeed deepens – the core problem of ethics. Ethics is fundamentally a coordination problem, of the unstable cooperative kind. Would-be ethical agents need all the help they can affordably get to find partners in beneficial cooperation. This is a communication problem – ethical agents need a common language/protocol for coordination, cooperation, and bargaining, in that order. While they need to be aware of possible complexities and complications, to be told that there are billions of unique personal stories out there is more information than they can use. This was clear from the original Artificial Morality project: reciprocators, the core of any moral solution, need simple recognition schemes and fail miserably under ambiguity. We need ways to coordinate.
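
A small simulation makes the cost of ambiguity concrete. In this sketch (the payoffs and the error model are assumptions, not results from the AM project), two reciprocal cooperators meet, but each misreads its partner as a defector with some probability and defects when it does; mutual cooperation, and with it the reciprocators' payoff, decays quickly as recognition noise grows.

```python
# Sketch (assumed payoffs and error model): why reciprocators fail under
# ambiguity.  Each of two reciprocal cooperators misreads its partner as a
# defector with probability err, and defects when it does.

import random

PD = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def noisy_move(err):
    """A reciprocator's move against a fellow reciprocator, under noise."""
    return "D" if random.random() < err else "C"

def mean_payoff(err, trials=100_000):
    total = 0
    for _ in range(trials):
        a, b = noisy_move(err), noisy_move(err)
        total += PD[(a, b)]
    return total / trials

for err in (0.0, 0.05, 0.1, 0.2, 0.4):
    print(f"recognition error {err:.2f}: mean payoff {mean_payoff(err):.2f}")
```
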
Incidentally, this perspective undermines popular criticisms of leading proposals for ethical protocols. To the cost/benefit protocol that effectively drives the most successful ethics in engineering, epidemiology and public health, and integrated environmental assessment, many critics still fall back on the impossibility of interpersonal comparisons (Sen 2002). But since we are not trying to become ideal impartial spectators but democrats negotiating our policies, we do not face this problem. Put another way, those who cannot trade off their own values against those of others are (similar to) sociopaths, not would-be partners in cooperation. People evidently need help to see, not to say calculate, the impact of their casual decisions not to donate blood or solid organs, or to vaccinate their children, but these are well-studied practical problems in risk communication, not symptoms of some deep metaphysical divide between the values of different persons.

We propose to keep philosophy in its place, which is not producing mysterious theoretical problems for ethics, but, we argue, solving the hard practical problems would-be democratic ethical agents face. For example, we need good social networking sites for democratic ethics. Currently it is easy to find extreme ideological positions on many current issues in ethics and technology – such as genetically modified food or vaccination – but very hard to find out what most people would choose and why.

What is the role for artificial ethics here? Here is a proposal: think of a recent "Turing test" whereby, for many of us, news.google beats out the parochial editors of more local "papers." Similarly, many of us evidently rely on Google for search and on Amazon, Pandora, and Slashdot for book, music, and article recommendations. All are successful implementations of value-driven social networking software. Regularly and automatically mining the Net into bits of value, this is itself the best example of successful artificial ethics, at least in the value subfield.[2]

[2] Of course, eBay and craigslist belong here too, but their inclusion offends some academics, averse as our guild can be to markets.

NERD-II notices how acceptable this automated value help appears to be. We propose automated agents to replace our hand-crafted expert and lay advisors. This has some advantages: it moves us investigators farther from a biasing role and gives philosophical ethics a chance to be useful – in a testable way. As our Democracy, Ethics and Genomics project tested NERD-I against focus groups and deliberative democracy, NERD-II will allow us to test different methods of ethics within our open and experimental framework. This is the core assignment of my theoretical graduate seminar next year.
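
One way such an automated lay advisor could work, offered purely as an illustration and not as a NERD-II design, is a nearest-neighbour recommender: present the advice most often endorsed by past participants whose earlier answers most resemble the current participant's. The survey encoding, distance measure, and variable names below are all assumptions.

```python
# Hypothetical automated lay advisor: recommend the advice most common among
# the k past participants whose answer vectors are closest to the current one.

import numpy as np

def recommend(current, others, advice, k=3):
    """current: this participant's answers so far; others: matrix of past
    participants' answers; advice: the option each past participant endorsed."""
    dists = np.linalg.norm(others - current, axis=1)   # similarity by distance
    nearest = np.argsort(dists)[:k]                    # k most similar people
    votes = [advice[i] for i in nearest]
    return max(set(votes), key=votes.count)            # majority advice

# Tiny illustration with made-up 1-5 Likert answer vectors.
others = np.array([[5, 4, 2], [1, 2, 5], [4, 4, 3], [2, 1, 5], [5, 5, 2]])
advice = ["support", "oppose", "support", "oppose", "support"]
print(recommend(np.array([4, 5, 2]), others, advice))
```
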
We aim at robust reflective equilibrium. 'Equilibrium' signals our game theoretic roots and aspiration to a result where everyone shares common knowledge of the situation and other agents. 'Reflective' adds a 'normative' dimension: each should be pressed to reflect, answer challenges, etc. Finally, we aspire to a robust, dynamic public site, challenged by the unanticipated free actions of a free and creative population armed with the information and social connections available on the Internet.

Conclusion

Each of these three stages had a different take-away for artificial ethics. Formal game theoretic and computational modeling can usefully extend the science of ethics. It can show us what cannot be done, for example, reminding us of the formal limits on rationality, altruism, reciprocity, and social norms. Simulation allows us to play with more complex situations and agents while maintaining a link to theoretical rigor.

To end on a practical note, consider my encounter with robotics. (Danielson 1992) foolishly included 'robots' only in the title, which got the book misclassified by the Library of Congress. (So much for overstating one's results.) The EAME project suggested that one limitation of simulations could be overcome by building communities of robots to solve problems like "fire in the theatre", which we did in a very fruitful graduate seminar (Danielson 1999). Now, working with students in UBC's Animal Welfare Program, we plan to deploy various robotic projects, such as robots to play with cats and robots to test families before they adopt animals from shelters (Danielson 2001b). These robots need to be safe, simple for cats and people to use, and morally acceptable as well. To test the former and drive the latter, we will feature them in the NERD-II context, i.e., open to full but ethically structured public evaluation.

In closing, we return to the question of EthicAlife (EA). What does artificial ethics have to say about creating machines which engage in our moral life and make ethical judgments? Nothing except the appeal to experimental evidence above is restricted to human agents. Therefore, the lessons of all three parts of the AM research program apply to attempts to design or evolve EA.

1. EA will need the tool of reciprocity; they will need to distinguish owners, friends, innocents, and foes. The designs will have unintended interactions and will need to be tested against unanticipated interactants, like kids, cats, kibitzers, and evil-doers.

2. With the best intentions, no one gets to design the world, so there will be many kinds of EA, just as there are several cooperative types found in human agents. Attempts to certify "genuine" EA will be about as successful as attempts to certify professionals or perhaps limit the clients that are connected on the open Internet.

3. People (and perhaps EA) will need sophisticated tools to evaluate the increasingly complex ethical challenges of a world made up of people, complex tools, and perhaps autonomous EA. We hope our own tool, NERD, will play a leading role in meeting such challenges.

NERD-II is process oriented. Roughly, on the question of new entities, it asks the existing ethical agents to evaluate the request to allow new entries to the community.[3] EA needs to be evaluated like any other new technology. Indeed, NERD was designed to evaluate new genomic technologies, arguably the closest we have come to truly new members of our extended communities. Some may wish to argue that they are creating new persons, which have no more need to justify their addition to the community than do new humans. I will leave this moral or ethical dispute to be played out in an appropriate forum, perhaps NERD-IX.

[3] See (Nozick 1974) Chapter 10 for a seminal discussion of this process.

Acknowledgments

Thanks to the NERD team for making this research possible and fun, and to Bill Harms and Rik Blok for work on the EAME project. We are happy to acknowledge long-term support from SSHRC for a series of projects (Artificial Morality, Computer Ethics through Thick & Thin (filters), Evolving Artificial Moral Ecologies, and Modeling Ethical Mechanisms) and generous support from Genome Canada/Genome BC for the Democracy Ethics and Genomics and Building a GE3LS Architecture
projects. We also thank NSERC and NSF for support of graduate and post-doctoral students.

References

Ahmad, R., Bornik, Z., Danielson, P., Dowlatabadi, H., Levy, E., Longstaf, H., et al. (2005). Innovations in web-based public consultation: Is public opinion on genomics influenced by social feedback? National Centre for e-Social Science, Univ. of Manchester.
Ahmad, R., Bornik, Z., Danielson, P., Dowlatabadi, H., Levy, E., Longstaf, H., et al. (in press). A Web-based Instrument to Model Social Norms: NERD Design and Results. Integrated Assessment.
Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13, 171-195.
Danielson, P. (1991). Closing the Compliance Dilemma. In P. Vallentyne (Ed.), Contractarianism and Rational Choice: Essays on Gauthier. New York: Cambridge University Press.
Danielson, P. (1992). Artificial Morality: Virtuous Robots for Virtual Games. London: Routledge.
Danielson, P. (1995a). Making a Moral Corporation: Artificial Morality Applied. Financial Accounting 5. Vancouver: Certified General Accountants' Association.
Danielson, P. (1995b). From Rational to Robust: Evolutionary Tests for Cooperative Agents.
Danielson, P. (1998a). Evolutionary Models of Cooperative Mechanisms: Artificial Morality and Genetic Programming. In P. Danielson (Ed.), Modeling Rationality, Morality, and Evolution 7 (pp. 423-441). New York: Oxford University Press.
Danielson, P. (1998b). Evolution and the Social Contract. The Canadian Journal of Philosophy, 28(4), 627-652.
Danielson, P. (1999). Robots for the Rest of Us or for the 'Best' of Us. Ethics and Information Technology, 1, 77-83.
Danielson, P. (2001a). Which Games Should Constrained Maximizers Play? In C. Morris & A. Ripstein (Eds.), Practical Rationality and Preference: Essays for David Gauthier. New York: Cambridge University Press.
Danielson, P. (2001b). Ethics for Robots: Open-ended Ethics & Technology. 1, 77-83.
Danielson, P. (2002). Competition among Cooperators: Altruism and Reciprocity. Proceedings of the National Academy of Sciences, 99, 7237-7242.
Danielson, P., Ahmad, R., Bornik, Z., Dowlatabadi, H., & Levy, E. (in press). Deep, Cheap, and Improvable: Dynamic Democratic Norms & the Ethics of Biotechnology. Journal of Philosophical Research, Special Issue on Ethics and the Life Sciences.
Gauthier, D. (1977). Social Contract as Ideology. Philosophy and Public Affairs.
Gauthier, D. (1984). Deterrence, maximization, and rationality. In D. MacLean (Ed.), The Security Gamble: Deterrence Dilemmas in the Nuclear Age (pp. 101-122). Totowa, N.J.: Rowman & Allanheld.
Gauthier, D. (1988a). Morality, rational choice, and semantic representation. Social Theory and Practice, 5, 173-221.
Gauthier, D. (1988b). Moral Artifice. Canadian Journal of Philosophy, 18, 385-418.
Gauthier, D. (1991). Why Contractarianism? In P. Vallentyne (Ed.), Contractarianism and Rational Choice. New York: Cambridge University Press.
Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, Mass.: MIT Press.
Kurzban, R., & Houser, D. (2005). Experiments investigating cooperative types in humans: A complement to evolutionary theory and simulations. PNAS, 102(5), 1803-1807.
Nozick, R. (1974). Anarchy, State, and Utopia. New York: Basic Books.
Ostrom, E., & Walker, J. (2003). Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research. New York: Russell Sage Foundation.
Sen, A. (2002). Rationality and Freedom. Cambridge, Mass.: Harvard University Press.
Skyrms, B. (1996). Evolution of the Social Contract. Cambridge; New York: Cambridge University Press.
