William Regli, PhD
University of Maryland, College Park
Will artificial intelligence (AI) technology empower a magnificent future, or will the risks of AI manifest in ways that threaten human existence? Wall Street and Silicon Valley (home to many “AI boomers”) largely believe that AI will make us productive, rich, and prosperous, portending super-charged productivity and applications that eliminate worry and tedium. Others (the “AI doomers”) argue that unfettered AI will supersede humans and pose risks to human civilization. Between these two extremes, many people worry that AI will exacerbate global climate change through its power consumption, displace human workers, and enable new dystopias.
As with all new technologies, there are benefits and there are risks. Some risks we already understand; others are new, a product of the technology itself. Risks must ultimately be recognized and managed to maximize a technology’s benefits. We do this in everyday life, for example, with transportation: we have safety standards and rules of the road; we mandate seat belts and certify airworthiness. And yet we still accept the costs that risks impose (accidents, insurance, infrastructure). With information technologies, we now confront cyber threats, and we have learned to establish processes and procedures to mitigate cyber-enabled harms such as phishing, denial-of-service attacks, and financial fraud. As risks arise for individuals, organizations, and society, we have developed methods to manage them in each case, whether through education, training, or engineering solutions.
When it comes to AI, thinking about risk requires answering two questions. First, what is AI? Opinions vary widely about what constitutes AI. Second, if we can agree on what AI is, what new threats does it pose, and how should we categorize them? Existential risk scenarios receive inordinate attention because of their dramatic nature, but this focus can crowd out more realistic assessments of likely impacts and the harms they might cause. We are the proverbial frog in gently warming water: fixating on existential risk is itself a risk, because it can lead us to overlook the more mundane issues that are gradually raising the temperature. Here, we highlight several practical threats posed by AI technology as examples of a process for assessing realistic risks.