Responsibility and accountability in
algorithm mediated services
Ansgar Koene
Libel, Privacy, Data Protection and Online Legal Action - A Practitioner’s Guide
25 November 2016
http://unbias.wp.horizon.ac.uk/
Algorithms in the news
2
E. Bakshy, S. Messing & L.A. Adamic, “Exposure to ideologically diverse news and
opinion on Facebook”, Science, 348, 1130-1132, 2015
Echo-chamber enhancement by
NewsFeed algorithm
3
Chart: proportion of content that is cross-cutting, based on 10.1 million active US Facebook users
Search engine manipulation effect could
impact elections – distort competition
4
Experiments that manipulated the search rankings for information
about political candidates for 4556 undecided voters.
i. biased search rankings can shift the voting preferences of
undecided voters by 20% or more
ii. the shift can be much higher in some demographic groups
iii. such rankings can be masked so that people show no
awareness of the manipulation.
R. Epstein & R.E. Robertson “The search
engine manipulation effect (SEME) and
its possible impact on the outcome of
elections”, PNAS, 112, E4512-21, 2015
• White House: Big Data: A Report on Algorithmic Systems,
Opportunity, and Civil Rights
• Council of Europe: Committee of experts on Internet
Intermediaries (MSI-NET)
• European Parliament: Algorithmic accountability and
transparency in the digital age (Marietje Schaake MEP/ALDE)
• European Commission: eCommerce & Platforms launching a 2-year
investigation into algorithms
• House of Lords Communications Committee inquiry
“Children and the Internet” (ongoing)
• Commons Science and Technology Committee inquiry
“Robotics and Artificial Intelligence” (2016) ->
recommendation for standing Commission on AI
• HoL EU Internal Market Sub-Committee inquiry “Online
platforms and the EU Digital Single Market” (2016)
Governmental inquiries
5
• Partnership on Artificial Intelligence to Benefit People and
Society: a consortium founded by Amazon, Facebook, Google,
Microsoft, and IBM to establish best practices for artificial
intelligence systems and to educate the public about AI.
• IEEE Global Initiative for Ethical Considerations in Artificial
Intelligence and Autonomous Systems -> development of
standards on algorithmic bias, transparency, accountability
Industry response
6
• Similar to existing rights under the Data Protection Act
• Individuals have the right not to be subject to a decision
when:
– it is based on automated processing; and
– it produces a legal effect or a similarly significant effect
on the individual.
• You must ensure that individuals are able to:
– obtain human intervention;
– express their point of view; and
– obtain an explanation of the decision and challenge it.
GDPR: Rights related to automated
decision making and profiling
7
• The right does not apply if the decision:
– is necessary for entering into or performance of a
contract between you and the individual;
– is authorised by law (e.g. for the purposes of fraud or tax
evasion prevention); or
– is based on explicit consent (Article 9(2)).
• Furthermore, the right does not apply when a decision does
not have a legal or similarly significant effect on someone.
GDPR: Rights related to automated
decision making and profiling
8
When processing personal data for profiling purposes, appropriate
safeguards must be in place to:
• Ensure processing is fair and transparent by providing meaningful
information about the logic involved, the significance and envisaged
consequences.
• Use appropriate mathematical or statistical procedures.
• Implement appropriate technical and organisational measures to
enable inaccuracies to be corrected and minimise the risk of errors.
• Secure personal data proportionate to the risk to the interests and
rights of the individual and prevent discriminatory effects.
Automated decisions must not:
– concern a child; or
– be based on the processing of special categories of data unless:
• you have the explicit consent of the individual; or
• the processing is necessary for reasons of substantial public
interest on the basis of EU / Member State law.
GDPR: Rights related to automated
decision making and profiling
9
10
• A set of defined steps that if followed in the correct order
will computationally process input (instructions and/or data)
to produce a desired outcome. [Miyazaki 2012]
• From a programming perspective:
Algorithm = Logic + Control
logic is problem domain-specific and specifies what is to be
done
control is the problem-solving strategy specifying how it
should be done
• Problems have to be abstracted and structured into a set of
instructions which can be coded.
What is an algorithm?
11
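As a minimal illustration of the Logic + Control split above, the sketch below separates the domain rule (what we are looking for) from the search strategy (how we look). The example, names and data are invented for illustration and are not taken from the slides.

```python
# Minimal sketch of Algorithm = Logic + Control (illustrative names and data).

def is_overdue(invoice):
    # Logic: the problem-domain rule, specifying WHAT is to be done.
    return invoice["days_outstanding"] > 30

def first_match(items, rule):
    # Control: the problem-solving strategy, specifying HOW it is done
    # (here, a simple linear scan that stops at the first hit).
    for item in items:
        if rule(item):
            return item
    return None

invoices = [
    {"id": "A-1", "days_outstanding": 5},
    {"id": "A-2", "days_outstanding": 42},
    {"id": "A-3", "days_outstanding": 90},
]
print(first_match(invoices, is_overdue))  # -> {'id': 'A-2', 'days_outstanding': 42}
```

Swapping the control (for instance, collecting all matches instead of the first) leaves the logic untouched, which is the point of the separation.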
Calculate the number of ghost estates in Ireland using a database of
all the properties in the country that details their occupancy and
construction status.
1. Define what a ghost estate is in terms of
(a) how many houses grouped together make an estate?
(b) what proportion of these houses have to be empty or
under-construction for that estate to be labelled a ghost estate?
2. Combine these rules into a formula -- “a ghost estate is 10 or
more houses where over 50% are vacant or under-construction”.
3. Write a program that searches and sifts the property database to
find estates that meet the criteria and totals up the number.
• We could extend the algorithm to record coordinates of qualifying
estates and use another set of algorithms to plot them onto a map.
• In this way lots of relatively simple algorithms are structured
together to form large, often complex, recursive decision trees.
Example
12
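The three steps above translate almost directly into code. The sketch below is illustrative only: the record layout (a list of houses per estate, each with a status and a location field) is an assumption, while the thresholds come straight from the formula on the slide.

```python
# Hedged sketch of the ghost-estate algorithm described above.
# Data layout is assumed; the rule (10+ houses, over 50% vacant or
# under construction) comes from the slide.

def is_ghost_estate(estate, min_houses=10, threshold=0.5):
    houses = estate["houses"]
    empty = sum(1 for h in houses
                if h["status"] in ("vacant", "under_construction"))
    return len(houses) >= min_houses and empty / len(houses) > threshold

def count_ghost_estates(estates):
    # Step 3: search and sift the property database, total up the matches.
    return sum(1 for e in estates if is_ghost_estate(e))

def ghost_estate_coordinates(estates):
    # Extension from the slide: record coordinates of qualifying estates
    # so another algorithm can plot them onto a map.
    return [e["location"] for e in estates if is_ghost_estate(e)]
```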
• Defining precisely what a task/problem is (logic)
• Breaking that down into a precise set of instructions, factoring
in any contingencies, such as how the algorithm should
perform under different conditions (control).
• “Explain it to something as stonily stupid as a computer”
(Fuller 2008).
• Many tasks and problems are extremely difficult or
impossible to translate into algorithms and end up being
hugely oversimplified.
• Mistranslating the problem and/or solution will lead to
erroneous outcomes and random uncertainties.
The challenge of translating a
task/problem into an algorithm
13
• Algorithms are mostly presented as being “strictly rational
concerns, marrying the certainties of mathematics with the
objectivity of technology”.
• The complex set of decision making processes and practices,
and the wider systems of thought, finance, politics, legal
codes and regulations, materialities and infrastructures,
institutions, inter-personal relations, that shape their
production are not discussed.
• Algorithms are presented as objective, impartial, reliable,
and legitimate
• In reality code is not purely abstract and mathematical; it
has significant social, political, and aesthetic dimensions.
The myth of algorithms
14
• Algorithms are created through: trial and error, play,
collaboration, discussion, and negotiation.
• They are teased into being: edited, revised, deleted and
restarted, shared with others, passing through multiple
iterations stretched out over time and space.
• They are always somewhat uncertain, provisional and messy
fragile accomplishments.
• Algorithmic systems are not standalone little boxes, but
massive, networked ones with hundreds of hands reaching
into them, tweaking and tuning, swapping out parts and
experimenting with new arrangements.
Algorithm creation
15
• Companies’ algorithms provide a competitive edge which they are
reluctant to expose; non-disclosure agreements are often in place.
• They also want to limit the ability of users to game the
algorithm to unfairly gain a competitive edge.
• Many algorithms are designed to be reactive and mutable to
inputs. E.g.: Facebook’s NewsFeed algorithm does not act from
above in a static, fixed manner. Posts are ordered dependent on
how one interacts with ‘friends’. The parameters are
contextually weighted and fluid.
In other cases, randomness might be built into an algorithm’s
design meaning its outcomes can never be perfectly predicted.
The transparency challenge
16
• Deconstructing and tracing how an algorithm is constructed
in code and mutates over time is not straightforward.
• Code often takes the form of a “Big Ball of Mud”: “[a]
haphazardly structured, sprawling, sloppy, duct-tape and
baling wire, spaghetti code jungle”.
Examining pseudo-code/source code
17
• Reverse engineering is the process of articulating the
specifications of a system through a rigorous examination
drawing on domain knowledge, observation, and deduction
to unearth a model of how that system works.
• By examining what data is fed into an algorithm and what
output is produced it is possible to start to reverse engineer
how the recipe of the algorithm is composed (how it weights
and preferences some criteria) and what it does.
Reverse engineering
18
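A toy sketch of that input/output approach is given below. `black_box_rank` is a hypothetical stand-in for the opaque system under study, and the attribute being varied (location) is purely illustrative; the idea is only to hold the query constant, vary one input at a time, and compare outputs.

```python
# Illustrative probe of an opaque ranking system (not a real service).

import random

def black_box_rank(query, location):
    # Placeholder for the system under study; made deterministic per input
    # so that output differences can be attributed to input differences.
    rng = random.Random(hash((query, location)))
    results = ["candidate profile", "news article", "campaign site", "blog post"]
    rng.shuffle(results)
    return results

def probe(query, locations):
    """Record the ranking returned for the same query from different locations."""
    return {loc: black_box_rank(query, loc) for loc in locations}

for location, ranking in probe("candidate X", ["UK", "US", "DE"]).items():
    print(location, ranking)
```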
Operationalizing “fairness” in algorithms
19
Source: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515786
• We want a fair mapping f: CS -> DS (CS = construct space, the true
but unobservable features; DS = decision space, the outcomes).
• We do not know CS; we can only approximate it through
observation, giving the observed space OS.
• Thus we are dealing with f: OS -> DS
• Equality of outcomes:
– [We’re All Equal] assume that all groups are similar in CS;
group differences in OS are due to observation bias.
• Equality of treatment:
– [WYSIWYG] assume OS is a true representation of CS.
equality of outcomes vs. equality of treatment
20
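The two worldviews above can be stated a little more formally, using the construct-space framing from the “Mathematical definition of fairness” source listed at the end (arXiv:1609.07236). The notation below is a hedged paraphrase of that framework, not a quotation from the paper.

```latex
% CS = construct space (true, unobservable features), OS = observed space
% (measured proxies), DS = decision space (outcomes).
\[
  f : \mathrm{CS} \to \mathrm{DS} \quad\text{(the fair mapping we want)},
  \qquad
  \hat f : \mathrm{OS} \to \mathrm{DS} \quad\text{(the mapping we can build)}.
\]
% "We're All Equal" (equality of outcomes): groups are assumed similar in CS,
% so any group gap observed in OS is attributed to observation bias.
% "WYSIWYG" (equality of treatment): OS is assumed to reflect CS faithfully,
% so decisions may follow the observed data directly.
\[
  \text{WAE:}\ \ \mathrm{dist}_{\mathrm{CS}}(G_1, G_2) \approx 0,
  \qquad
  \text{WYSIWYG:}\ \ \mathrm{OS} \approx \mathrm{CS}.
\]
```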
• Certification: test the system with representative data sets X
and Y.
– Problem: how to guarantee representative data in CS
Certifying disparate impact
21
Source: http://arxiv.org/abs/1609.07236
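One concrete certification test, taken in spirit from the “Certifying and removing disparate impact” paper in the closing source material (arXiv:1412.3756), is the disparate-impact ratio or “80% rule”. The sketch below is illustrative; the variable names and toy data are mine, with X marking the protected group and y the decision.

```python
# Disparate-impact ratio ("80% rule") on toy data; illustrative only.

def disparate_impact_ratio(X, y):
    """P(y=1 | X=0) / P(y=1 | X=1); values below 0.8 suggest disparate impact."""
    def rate(group):
        members = sum(1 for x in X if x == group)
        positives = sum(1 for x, out in zip(X, y) if x == group and out == 1)
        return positives / max(1, members)
    return rate(0) / max(rate(1), 1e-12)

X = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute per individual
y = [1, 0, 0, 0, 1, 1, 1, 0]   # positive decisions per individual
print(disparate_impact_ratio(X, y))  # 0.25 / 0.75 = 0.33 -> well below 0.8
```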
• Assume bias in the CS -> OS mapping
• Perform a re-mapping such that the OS distributions of the X=1
and X=0 groups are the same (sketched in code below)
Removing disparate impact
22
Figure: score distributions of the X=1 and X=0 groups, before and after re-mapping
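The re-mapping can be sketched as a quantile-based repair, again in the spirit of arXiv:1412.3756. The code below is a simplified illustration: it pools both groups to form the target distribution, whereas the paper uses a median distribution across groups.

```python
# Illustrative quantile-based repair so both groups share one score distribution.

import numpy as np

def repair_scores(scores, groups):
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    target = np.sort(scores)                 # simplification: pooled target distribution
    repaired = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = scores[idx].argsort().argsort()        # within-group ranks
        quantiles = (ranks + 0.5) / len(idx)           # rank -> quantile
        repaired[idx] = np.quantile(target, quantiles) # quantile -> target value
    return repaired

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(repair_scores(scores, groups))  # both groups now map to identical values
```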
• Fairness is fundamentally a societally defined construct (e.g.
equality of outcomes vs equality of treatment)
– Cultural differences between nations/jurisdictions
– Cultural changes in time
• “Code is Law”: Algorithms, like laws, both operationalize and
entrench spatio-temporal values
• Algorithms, like the law, must be:
– transparent
– adaptable to change (by a balanced process)
Problems
23
EPSRC-funded UnBias project
24
http://unbias.wp.horizon.ac.uk/
• WP1: ‘Youth Juries’ workshops with “digital natives” to co-produce
citizen education materials on filtering/recommendation algorithms
• WP2: Hackathons and double-blind testing to produce user-friendly
open source tools for benchmarking and visualizing biases in algorithms
• WP3: Interviews and user observation to derive requirements
for algorithms that satisfy subjective criteria of bias
avoidance
• WP4: Broad stakeholder focus groups to develop policy briefs
for an information and education governance framework
Project activities
25
Ansgar.koene@nottingham.ac.uk
@UnBias_algos
http://unbias.wp.horizon.ac.uk
Questions?
Source material:
Defining algorithms
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2515786
Mathematical definition of fairness
http://arxiv.org/abs/1609.07236
Certifying and removing disparate impact
http://arxiv.org/abs/1412.3756

Editor's notes
  1. Researchers might search Google using the same terms on multiple computers in multiple jurisdictions to get a sense of how its PageRank algorithm is constructed and works in practice (Mahnke and Uprichard 2014), or they might experiment with posting and interacting with posts on Facebook to try and determine how its EdgeRank algorithm positions and prioritises posts in user timelines (Bucher 2012), or they might use proxy servers and feed dummy user profiles into e-commerce systems to see how prices might vary across users and locales (Wall Street Journal, detailed in Diakopoulos 2013).