Industry Standards as vehicle to address socio-technical AI challenges

Senior Research Fellow at Horizon Digital Economy Research
5 Apr 2019


Editor's notes

  1. Industry Standards as a vehicle to address socio-technical challenges from AI – the case of Algorithmic Bias Considerations. Ansgar Koene. Abstract: Algorithmic decision-making technologies (colloquially referred to as “AI”) in industry, commerce and public service provision are giving rise to concerns about potential negative impacts on individuals (e.g. discriminatory algorithmic bias) and on the wider socio-economic fabric of society (e.g. displacement of jobs). In response to public concerns and government inquiries, a number of industry initiatives have been launched in an effort to stave off government intervention. Many of these initiatives focus on establishing “ethical principles” or formulating “best practices” that lack clear compliance specifications. Industry standards, by contrast, are well-established self-regulation tools which do include compliance metrics and can be directly linked to compliance certification. This talk will discuss issues of algorithmic bias and outline ways in which a standard for Algorithmic Bias Considerations can help to minimize unjustified, unintended and inappropriate bias in algorithmic decision-making.
  2. The scholar Danielle Keats Citron cites the example of Colorado, where coders placed more than 900 incorrect rules into its public benefits system in the mid-2000s, resulting in problems like pregnant women being denied Medicaid. Similar issues in California, Citron writes in a paper, led to “overpayments, underpayments, and improper terminations of public benefits,” as foster children were incorrectly denied Medicaid. Citron writes about the need for “technological due process” — the importance of both understanding what’s happening in automated systems and being given meaningful ways to challenge them.
  3. Automated decisions are not defined by algorithms alone. Rather, they emerge from automated systems that mix human judgment, conventional software, and statistical models, all designed to serve human goals and purposes. Discerning and debating the social impact of these systems requires a holistic approach that considers: the computational and statistical aspects of the algorithmic processing; the power dynamics between the service provider and the customer; and the social-political-legal-cultural context within which the system is used.
  4. All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data. When we say we want ‘no bias’ we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
  5. In the absence of malicious intent, bias in algorithmic systems is generally caused by: Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities. Diversity in the development team can partially help to address this. Failure to rigorously map decision criteria. When people think of algorithmic decisions as more ‘objectively trustworthy’ than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no ‘hidden agenda’. The complexity of system development, however, can easily introduce ‘hidden decision criteria’, slipped in as a quick fix during debugging or embedded within machine learning training data. Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are those justifications acceptable? For example, in a given context, is it acceptable to treat high correlation as evidence of causation?
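
The failure to test how a system performs for specific groups, described in the note above, can be caught with a simple per-group audit of decision outcomes. The sketch below is illustrative only: the toy data, group labels, and the use of the “80% rule” disparate-impact heuristic are assumptions made for this example, not something specified in the talk.

```python
# Hypothetical per-group audit sketch (assumed data and metrics, not from the talk).
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive decisions (1 = approved) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (the '80% rule' heuristic)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi > 0 else 1.0

# Toy decisions for two groups: 1 = approved, 0 = denied.
preds  = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                    # {'A': 0.8, 'B': 0.4}
print(disparate_impact(rates))  # 0.5 -> well below the 0.8 heuristic threshold
```

An audit like this does not establish whether a disparity is justified in the given context; it only surfaces the disparity so that the justification can be examined explicitly, as the note argues.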