A brief summary of how law and legal practice may be affected by the rise of AI, autonomous cars, robots, etc, with a look at what harms or biases may result and how law and the market might try to solve those problems.
Lilian Edwards, Professor of Internet Law, Newcastle Law School, University of Newcastle
2. Preparing for the Future of AI, US White House report, Oct 2016
What is AI?
Application of AI to the Public Good
AI and Regulation
Research and Workforce
Economic (employment) Impact of AI
Fairness, safety and Governance
Global Considerations and Security
Lethal autonomous weapons
3. What is AI?
“Although the boundaries of AI can be uncertain and have tended to shift over time, what is important
is that a core objective of AI research and applications over the years has been to automate or
replicate intelligent behaviour”
Narrow AI v General AI; the path to the singularity?
“long term concerns about superintelligent General AI should have little impact on current policy”
Machine Learning vs expert systems
Experts come up with rules, AI systems implement them
Vs statistical methods used to find a “decision procedure” that “works well in practice”
Adv of ML: can be used where it is difficult or impossible to come up with explicit rules, eg detecting fraudulent logins, reversing a lorry
Disadv: the system generated may not easily explain how or why it comes to the decisions it does = "black box"
4. How ML works
Needs large amounts of data ("big data"); algorithm; processing power. Improvements in all since the 80s.
Which has been most important?
Creating a ML system (see the sketch below)
DATA. Divide into training set and test set.
Create a model of how system works ie a set of possible mathematical rules
Adjust the parameters of the model (creates millions of options)
Define a successful result (objective function)
Using training set of successful decisions, train system to get from data to outcome in most efficient way
Test it works using test set
Hope it can now generalise ie successfully apply induced rules to new examples not in training set.
You have created the ALGORITHM.
Essentially a process of statistical correlation – often not clear why input data X leads to result Y, cf an expert system which has a human-produced underlying rationale
DEEP LEARNING: many layers of "neurons" are used in training the system – resembling how the brain processes and creates knowledge
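As a concrete illustration of the recipe on this slide (split the data, choose a model, define an objective, train, then test for generalisation), here is a minimal sketch using scikit-learn on synthetic data. The dataset, the logistic regression model and the accuracy measure are illustrative assumptions, not anything from the talk.

```python
# Minimal sketch of the slide's ML recipe (illustrative assumptions throughout).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# DATA: divide into a training set and a test set
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# MODEL: a set of possible mathematical rules whose parameters will be adjusted;
# the objective function (here, log loss) defines what counts as a successful result
model = LogisticRegression(max_iter=1000)

# TRAIN: adjust the parameters using the training set of labelled examples
model.fit(X_train, y_train)

# TEST: check the induced rules generalise to examples not seen in training
print("accuracy on unseen test set:", accuracy_score(y_test, model.predict(X_test)))
```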
5. Why are algorithms important to society, governance, innovation?
Predictive profiling – and hence manipulation - of persons
Targeted ads on Google/social networks, etc; manipulation of personalised newsfeeds
Price and service discrimination;
Criminal/terrorist profiling; -> pre-crime?
Future health, obesity, Alzheimer's risks
Non-personal predictions of what is important/significant/popular/profitable;
eg “trending topics” on Twitter;
Google News; top Google results on search by keywords; top FB links to “fake news”?
Automated high frequency trading on stock exchanges;
Recommendations on Netflix/Amazon etc
Filtering online of unwanted content
Spam algorithms, Google Hell (anti SEO)
Over- or under-blocking? Twitter UK anti-women trolling cases, summer 2013: ACPO: "They [Twitter] are ingenious people, it can't be beyond their wit to stop these crimes"
“Real world” as well as online effects: Algorithms to instruct robots on how to behave adaptively when circumstances
change from original programming; driverless cars liability?
See *Kohl (2013) 12 IJLIT 187.
7. "Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com"
8. Legal/regulatory issues
EXAMPLE: Autonomous cars
Huge advs – reducing road deaths; access for the disabled; access for the rural economy; environmental advantages
Safety and security issues crucial to autonomous cars uptake
Degrees of autonomy – we are already part of the way there (4 levels)
“the goals and structures of existing regulation are sufficient and commentators called for existing
regulation to be adapted as necessary”
Dealing with liability of operators; dealing with exceptions. Unusual circumstances (“trolley problem”)
Role for existing agencies, eg the FAA (US) and the CAA (UK); need for them to have access to tech experts
Most countries passing laws to allow trials on (restricted) public roads
10. Regulatory problems – 1. Fairness
1. Worries re quality of training and testing data; justice; fairness, esp in criminal justice
eg “Risk prediction” tools used for criminal sentencing or bail decisions – data may be partial or
poor quality -> racial prejudice (Angwin, Pro Publica, May 2016)
Eg hiring systems – used to screen job applicants – may reinforce bias already in the system by hiring more who look like the current workforce
Remedies?
Transparency: systems should have to be able to demonstrate fairness if questioned – accountability akin to judicial review? (a toy check of one fairness notion is sketched after this list)
Having an interpretable model helps
Better testing to weed out bad results: eg the gorilla face-recognition incident; "in the wild" testing; open source scrutiny
Ethical training for AI researchers
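By way of illustration of what "demonstrating fairness" might mean in practice, here is a toy sketch of one very crude check: comparing selection rates across groups, in the spirit of the US "four-fifths" rule of thumb. The data, groups and threshold are invented for illustration; real audits of tools like those examined by ProPublica are far more involved.

```python
# Toy disparate-impact check on invented screening outcomes (illustration only).
from collections import defaultdict

# (group, tool_said_yes) pairs - hypothetical outcomes of a screening tool
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]

totals, selected = defaultdict(int), defaultdict(int)
for group, chosen in outcomes:
    totals[group] += 1
    selected[group] += chosen

# Selection rate per group
rates = {g: selected[g] / totals[g] for g in totals}
print("selection rate per group:", rates)

# "Four-fifths" style comparison: flag if the worst-off group's rate falls
# below 80% of the best-off group's rate
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", round(ratio, 2),
      "- potential concern" if ratio < 0.8 else "- within the rule of thumb")
```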
11. Regulatory problems – Safety and Security
Transition from safety of lab to unpredictable “in the wild” – racist chatbot?
Open to remote hacking from third parties without obvious detection
Especially worrying for Internet of Things/ real world applications – eg Tesla experience
Need for a "mature field of AI safety engineering"
Also need for a database of outcomes so risks can be actuarialised and insurance provided – the Modern Transport Bill 2016 (UK) tries to kickstart this
Global cooperation needed for AI cybersecurity (pre Trump!)
Passing mention of “privacy and security” (p 36 US report)
14. Effect on society and employment?
AI needs dramatic growth in the skilled workforce
Current workforce notably undiverse both in colour and gender
However also likely to lose jobs via automation – primarily blue collar jobs
Alternatively, middle-class jobs may become more efficient through augmentation / co-working with AI
Heightened inequality
Lawyers?
“AI Judges” a long way off..?
15. "In the course of developing the programme the team found that judgments of the European court of human rights depends more on non-legal facts than purely legal arguments."
"The algorithm examined English language data sets for 584 cases relating to torture and degrading treatment, fair trials and privacy."
"The AI "judge" has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy."
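The study behind these quotes reportedly trained a text classifier on features extracted from the written judgments. A minimal sketch of that general approach is below; the tiny "cases", labels and pipeline choices are invented stand-ins, not the researchers' actual data or model.

```python
# Sketch of predicting a case outcome from its text with bag-of-words features
# and a linear classifier; the "cases" and labels are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

cases = [
    "applicant held in overcrowded cell without medical care",
    "domestic courts gave a reasoned judgment after an adversarial hearing",
    "detainee denied contact with lawyer and subjected to ill-treatment",
    "proceedings concluded within a reasonable time with legal representation",
]
labels = ["violation", "no violation", "violation", "no violation"]

# Turn each case text into n-gram features, then fit a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(cases, labels)

# Predict the outcome of a new, unseen case description
print(clf.predict(["applicant reports ill-treatment in an overcrowded cell"]))
```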
16. EU legal remedies? Data Protection Directive – transparency of algorithms
Art 12: "every data subject [has] the right to obtain from the controller..
- knowledge of the logic involved in any automatic processing of data c at least in
the case of the automated decisions referred to in Article 15 (1)“
Art 15(1): every person has the right "not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc."
Rec 41: "any person must be able to exercise the right of access to data relating to
him which are being processed, in order to verify in particular the accuracy of the
data and the lawfulness of the processing“
..” this right must not adversely affect trade secrets or intellectual property and in
particular the copyright protecting the software”
17. Draft DP Regulation (Jan 16)
New Art 15: Rights of access
Right to obtain where personal data is being processed..
“(h) the existence of automated decision making including profiling [see art
20] .. And at least in those cases, meaningful information about the logic
involved, as well as the significance and envisaged consequences of such
processing..”
*Rec 51: “This right should not adversely affect the rights and freedoms of
others, including trade secrets or intellectual property…However, the result
of these considerations should not be that all information is refused to the
data subject…”
18. New regulators?
UK parliament Sci/Tech Committee report on robots and AI, 2016
“Our inquiry has illuminated many of the key ethical issues requiring serious consideration—
verification and validation, decision-making transparency, minimising bias, increasing
accountability, privacy and safety.”
“We recommend that a standing Commission on Artificial Intelligence be established, based at
the Alan Turing Institute, to examine the social, ethical and legal implications of recent and
potential developments in AI. It should focus on establishing principles to govern the
development and application of AI techniques, as well as advising the Government of any
regulation required on limits to its progression. It will need to be closely coordinated with the
work of the Council of Data Ethics which the Government is currently setting up following the
recommendation made in our Big Data Dilemma report.”
19. EU parliament draft report on robotics, 2016
“Calls for the creation of a European Agency for robotics and artificial intelligence in order to
provide the technical, ethical and regulatory expertise needed to support the relevant public
actors, at both EU and Member State level, in their efforts to ensure a timely and well-informed
response to the new opportunities and challenges arising from the technological development of
robotics;
a system of registration of advanced robots should be introduced, based on the criteria
established for the classification of robots. The system of registration and the register should be
Union-wide, covering the internal market, and should be managed by an EU Agency for Robotics
and Artificial Intelligence
20. “Code”/ self regulatory remedies?
“Those clamoring for Facebook to fix its fake news
problem should be careful what they wish for. They
might find in a few years that the fake news is gone—
but the filter bubbles, the perverse incentives, and
Facebook’s pretense to algorithmic neutrality remain.”
Slate Nov 16, 2016