EXECUTIVE SUMMARY

At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation are needed to ensure those interventions are effective?

Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise are concentrated in the U.S., this report focuses primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.