Artificial Intelligence has changed the world and the law must change with it
From Skynet to the singularity, for better or worse, artificial intelligence (AI) is the future – but AI and people don’t compete on a level playing field, least of all in the eyes of the law.
For example, from a safety perspective, AI could be the best choice for driving a vehicle, but laws prohibit driverless vehicles. At the same time, a person may be better at packing boxes at a warehouse, but a business may automate because AI receives preferential tax treatment.
AI may be better at helping businesses to innovate, but businesses may not want to use AI if doing so restricts future intellectual property rights. These are some of the circumstances where the law treats AI and people differently even when they are doing the same things, which raises vital questions about how to regulate AI’s use and applications.
In The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press, 2020), Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA, argues that we would all be better off if the law did less to discriminate between people and AI.
This work should be read by anyone interested in the rapidly evolving relationship between AI and the law, and you don’t need to be an expert in either to understand it. Abbott breaks everything down into layman’s terms, which matters because, ultimately, these issues affect all of us one way or another.
“AI is stepping into the shoes of people and doing the sorts of things that only people used to do. In doing so, AI is often subject to different legal rules – and this creates problems because it tends to unintentionally favour people or machines,” Abbott explains. “We would have better outcomes applying similar legal rules to behaviour by both people and AI.”
The Reasonable Robot argues that we’d be better off if the law did less to distinguish between people and AI:
- AI is held to different legal standards than we are, even when doing the same things.
- AI is taxed differently when automating jobs—this subsidizes employers to replace people with machines.
- AI is restricted from generating new creative and inventive works—this discourages people from using AI in innovation even when doing so would be more efficient.
- AI is treated more harshly when causing accidents—this threatens human safety.
“AI is going to challenge legal standards in lots of ways people haven’t even thought about, and legal standards will also shape the way AI develops,” Abbott adds. “We need the right laws, and many people and perspectives providing input, to ensure AI is fully developed for social good.”
According to Lord Tim Clement-Jones, Chair of the House of Lords Artificial Intelligence Select Committee and Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence: “The Reasonable Robot provides highly original insights into one of the most important conversations of our time. Ryan Abbott brings a unique and sometimes controversial perspective to artificial intelligence as a physician, attorney, and eminent academic, but manages to present the subject in an accessible and unintimidating manner. This book is both enlightening about the future of law and artificial intelligence as well as a great read.”
Ryan Abbott, MD, JD, MTOM, PhD, is Professor of Law and Health Sciences at the School of Law, University of Surrey, and Adjunct Assistant Professor of Medicine at UCLA. A physician and patent attorney, Abbott’s research on law and technology has helped shape the international dialogue on these topics. He has served as an expert for the World Health Organization, the World Intellectual Property Organization, the European Commission, and the Parliament of the United Kingdom. Abbott also spearheaded the first patent applications to disclose inventions made autonomously by an AI. In 2019, he was named one of the 50 most influential people in intellectual property by Managing IP magazine.