Presentation on the IEEE Global Initiative for Ethics of Autonomous and Intelligent Systems presented at the KAIST workshop on Taming AI: Engineering, Ethics and Policy, June 2018
1. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Industry Standards and Ethically Aligned Design
Ansgar Koene, Chair P7003 Algorithmic Bias Considerations
Senior Research Fellow, University of Nottingham
21 June 2018
5. Ethically Aligned Design, v2
• More than one hundred pragmatic recommendations for technologists, policy makers and academics
• Created by 250+ global cross-disciplinary thought leaders
6. Summary of EADv2 content
Committees featured in EADv1 (with updated content)
General Principles 20-32
Embedding Values into Autonomous Intelligent Systems 33-54
Methodologies to Guide Ethical Research and Design 55-72
Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) 73-82
Personal Data and Individual Access Control 83-112
Reframing Autonomous Weapons Systems 113-130
Economics/Humanitarian Issues 131-145
Law 146-161
New Committees for EADv2 (with new content)
Affective Computing 162-181
Policy 182-192
Classical Ethics in A/IS 193-216
Mixed Reality in ICT 217-239
Well-being 240-263
7. IEEE P70xx Standards Projects
IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
IEEE P7001: Transparency of Autonomous Systems
IEEE P7002: Data Privacy Process
IEEE P7003: Algorithmic Bias Considerations
IEEE P7004: Child and Student Data Governance
IEEE P7005: Employer Data Governance
IEEE P7006: Personal Data AI Agent Working Group
IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems
IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources
IEEE P7012: Standard for Machine Readable Personal Privacy Terms
12. Case study: Recidivism risk prediction
COMPAS recidivism prediction tool
– Built by a commercial company, Northpointe, Inc.
Estimates the likelihood that offenders will re-offend in the future
– Inputs: Based on a long questionnaire
– Outputs: Used across the US by judges and parole officers
Are COMPAS’ estimates fair to salient social groups?
Machine Bias: There’s software used across the country to predict future criminals. (ProPublica)
13. Case study: Recidivism risk prediction
Is the algorithm fair to all groups?
When base rates differ between groups, no non-trivial classifier can simultaneously equalize the false positive rate (FPR), false negative rate (FNR), false discovery rate (FDR) and false omission rate (FOR) across those groups.
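The incompatibility can be seen with a few lines of arithmetic. The sketch below is not from the slides: the group names and confusion-matrix counts are invented purely for illustration. It computes the four error rates for two groups scored by the same classifier; even when FPR and FNR are matched across groups, differing base rates force FDR and FOR apart.

# Illustrative only: counts are made up to show the effect of differing base rates.

def confusion_rates(tp, fp, fn, tn):
    """Return (FPR, FNR, FDR, FOR) from confusion-matrix counts."""
    fpr = fp / (fp + tn)    # false positive rate
    fnr = fn / (fn + tp)    # false negative rate
    fdr = fp / (fp + tp)    # false discovery rate
    for_ = fn / (fn + tn)   # false omission rate
    return fpr, fnr, fdr, for_

# Two hypothetical groups with different base rates of re-offending (25% vs 50%),
# scored by a classifier with identical FPR (0.2) and FNR (0.2) for both groups.
groups = {
    "group_A": dict(tp=40, fp=30, fn=10, tn=120),   # 50 positives, 150 negatives
    "group_B": dict(tp=80, fp=20, fn=20, tn=80),    # 100 positives, 100 negatives
}

for name, counts in groups.items():
    fpr, fnr, fdr, for_ = confusion_rates(**counts)
    print(f"{name}: FPR={fpr:.2f} FNR={fnr:.2f} FDR={fdr:.2f} FOR={for_:.2f}")
# group_A: FPR=0.20 FNR=0.20 FDR=0.43 FOR=0.08
# group_B: FPR=0.20 FNR=0.20 FDR=0.20 FOR=0.20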
14. Open invitation to join the P7003 working group
http://sites.ieee.org/sagroups-7003/
15. Key questions when developing or deploying an algorithmic system
Who will be affected?
What are the decision/optimization criteria?
How are these criteria justified?
Are these justifications acceptable in the context where the system is used?
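One way to make these questions actionable is to record the answers as structured documentation that travels with the system. The sketch below is a minimal illustration; the class and field names are hypothetical and not defined by P7003.

from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmicDecisionRecord:
    affected_groups: List[str]       # Who will be affected?
    decision_criteria: List[str]     # What are the decision/optimization criteria?
    justifications: List[str]        # How are these criteria justified?
    deployment_context: str          # Context in which the system is used
    context_acceptable: bool         # Are the justifications acceptable in that context?

# Illustrative use, loosely based on the recidivism case study above.
record = AlgorithmicDecisionRecord(
    affected_groups=["defendants", "parole applicants"],
    decision_criteria=["predicted likelihood of re-offending"],
    justifications=["questionnaire responses correlated with recidivism in historical data"],
    deployment_context="pre-trial and parole decisions by US courts",
    context_acceptable=False,  # to be reviewed with domain experts and affected groups
)
print(record)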
16. P7003 foundational sections
Taxonomy of Algorithmic Bias
Legal frameworks related to Bias
Psychology of Bias
Cultural aspects
P7003 algorithm development sections
Algorithmic system design stages
Person categorization and identifying affected population groups
Assurance of representativeness of testing/training/validation data
Evaluation of system outcomes
Evaluation of algorithmic processing
Assessment of resilience against external manipulation to Bias
Documentation of criteria, scope and justifications of choices
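As an illustration of the "assurance of representativeness" step listed above, the following sketch compares group shares in a training set against reference population shares and flags groups whose deviation exceeds a tolerance. The group labels, counts and tolerance are assumed for the example and are not prescribed by P7003.

from collections import Counter

def representation_gaps(samples, reference_shares, tolerance=0.05):
    """Flag groups whose share of the data deviates from the reference population."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Made-up data: 1000 training records over three groups.
training_labels = ["A"] * 700 + ["B"] * 260 + ["C"] * 40
population = {"A": 0.60, "B": 0.30, "C": 0.10}
print(representation_gaps(training_labels, population))
# {'A': (0.7, 0.6), 'C': (0.04, 0.1)}  -> A over-represented, C under-represented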
17. Related AI standards activities
British Standards Institution (BSI) – BS 8611 Ethical design and application of robots
ISO/IEC JTC 1/SC 42 Artificial Intelligence
– SG 1 Computational approaches and characteristics of AI systems
– SG 2 Trustworthiness
– SG 3 Use cases and applications
– WG 1 Foundational standards
In January 2018, China published its “Artificial Intelligence Standardization White Paper”.
18. ACM Principles for Algorithmic Transparency and Accountability
Awareness
Access and Redress
Accountability
Explanation
Data Provenance
Auditability
Validation and Testing
19. FAT/ML: Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
Responsibility: Make externally visible avenues of redress available for adverse effects, and designate an internal role responsible for the timely remedy of such issues.
Explainability: Ensure that algorithmic decisions, and the data driving those decisions, can be explained to end-users and other stakeholders in non-technical terms.
Accuracy: Identify, log, and articulate sources of error and uncertainty so that expected and worst-case implications can inform mitigation procedures.
Auditability: Enable third parties to probe, understand, and review algorithm behavior through disclosure of information that enables monitoring, checking, or criticism, including detailed documentation, technically suitable APIs, and permissive terms of use.
Fairness: Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics.
https://www.fatml.org/resources/principles-for-accountable-algorithms
20. Thank you for your attention
http://unbias.wp.horizon.ac.uk/
ansgar.koene@nottingham.ac.uk