Americans’ attitudes to AI

Published 11 January 2019

The impact of artificial intelligence technology on society is likely to be large. While the technology industry and governments currently dominate policy conversations on AI, the authors of a new report expect the public to become more influential over time. Understanding the public’s views on artificial intelligence will, therefore, be vital to future AI governance.

A report published by the Center for the Governance of AI (GovAI), housed in the Future of Humanity Institute at the University of Oxford, surveys Americans’ attitudes on artificial intelligence.

Oxford says that the survey, carried out by Baobao Zhang and Allan Dafoe, is one of the most comprehensive surveys of the American public’s opinions on artificial intelligence to date, drawing on responses from 2,000 participants gathered through the survey firm YouGov.

Key findings from the report include:

·  Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41 percent) somewhat or strongly supports the development of AI, while a smaller minority (22 percent) somewhat or strongly opposes it.

·  Among 13 AI governance challenges, Americans prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake and harmful content online, preventing AI cyber attacks, and protecting data privacy. Respondents rated every challenge as “important” and as more than 50 percent likely to affect large numbers of people in the US within the next 10 years.

·  Americans have discernibly different levels of trust in different organizations to develop AI in the best interests of the public. The most trusted are university researchers and the U.S. military; the least trusted is Facebook. There was no actor for which the average respondent expressed “a fair amount of confidence”.

·  The median respondent predicts that there is a 54 percent chance that high-level machine intelligence will be developed by 2028. The report defines high-level machine intelligence as the point at which machines can perform almost all tasks that are economically relevant today better than the median human (today) could at each task.

Allan Dafoe, commenting on the report, said: “Our results show that the public regards as important the whole space of AI governance issues, including privacy, fairness, autonomous weapons, unemployment, and other extreme risks that may arise from advanced AI. Further, the public’s support for the development of AI cannot be taken for granted. There is no organization that is highly trusted to develop AI in the public interest, though some are trusted much more than others. In order to ensure that the substantial benefits from AI are realized and broadly distributed, it is important that we work to understand and address these concerns.”