HKS experts discuss how to harness, and how to rein in, artificial intelligence

What worries us about AI?

Sheila Jasanoff

ARTIFICIAL INTELLIGENCE IS THE NEW DARLING of the policy world. At Davos, it was the buzzy trend of the year, as Fareed Zakaria reported in the Washington Post in January 2023. It is a technology that seems poised to change everything. It will transform our work habits, communication patterns, consumption practices, environmental footprints, and transport systems. Built into a new generation of chatbots, AI will remake how journalists write, lawyers argue, and students respond to essay questions in exams. In the hands of rogues and terrorists, it may spread misinformation, sabotage critical infrastructure, and undermine democracy. All these expectations, both promising and perilous, call for governance, and that makes AI a critically important subject for public policy.

This past spring, the Future of Life Institute, an organization dedicated to steering transformative technologies, issued an open call for all AI labs to declare a moratorium of at least six months on the training of AI systems more powerful than GPT-4. Predictably, critics attacked the proposal as unworkable, unenforceable, and likely to hinder beneficial technology development in an intensely competitive international arena. Is a moratorium the right solution? And have critics grasped the right end of the stick in their arguments against it? My answer to both questions is no.

The rise of the digital economy over the past 30 years has shown that rapid access to information is not the only good that societies need or want. The shiny dream of Silicon Valley is tarnished today by stories of fraud and hype, rising inequality, alienation, and misinformation—in short, a reality that does not comport well with the visions of liberation fervently preached by early apostles of the digital age. So what is to be done?

Given the diversity of AI applications and their rapid development, it is clear that America’s usual approach to regulating technology, which the moratorium critics support, will fall short. Typically, U.S. entrepreneurs are relatively free to design and develop new technological systems unless they are shown to pose plausible threats to human health, safety, or well-being. Until the risks become palpable, self-regulation is the order of the day. Many believe that this laissez-faire approach leads to more-efficient outcomes, with less chance of nipping breakthrough technologies in the bud through premature, possibly unenforceable controls. But what works for relatively self-contained technologies, such as vaccines and self-driving cars, is less well suited to the hydra-headed monster that AI is shaping up to be.

Nor is a six-month moratorium the right answer. The pause is not the problem. What, after all, is a six-month delay in the grand march of technological development? The important issue is not whether a moratorium is appropriate but what should happen during such a pause, and here history offers less than satisfying lessons.

Moratoriums have been under discussion in American technology policy since the famous voluntary restraint adopted in 1974 by molecular biologists developing genetic-engineering techniques using recombinant DNA (rDNA). Widely hailed as a success, that moratorium gave scientists both moral and technical standing to assert that they, and they alone, had the authority to regulate their own research. Subsequent breakthroughs in many technological areas, such as genome editing with CRISPR-Cas9 technology, have elicited similar calls for pauses, but with the thought that responsible scientists would be the ones who built frameworks of self-regulation during such periods of restraint.

The troubled history of genetic engineering, especially as applied to bioengineered crops, suggests that scientists of the rDNA era construed the regulatory challenge too narrowly. It turned out that the risks people cared about with rDNA research did not relate only to accidental releases. They also involved people’s visceral sense of what was normal, what they were prepared to eat, and what kinds of agriculture seemed natural.

AI offers an even more elusive regulatory target. What worries us about AI? Is it the “A” for “artificial,” because a machine that is learning to think and act like a human blurs a line around human agency that has been fundamental to centuries of ethical thought? Or is it the “I” for “intelligence,” because we do not know whether machinic intelligence will combine analytic speed and a voracious appetite for information with judgment or compassion? Who, after all, would have imagined that the lip-reading computer HAL in Arthur C. Clarke’s 2001: A Space Odyssey would outfox and kill its human controllers?

Signing a call for a six-month moratorium may feel good because it takes a stand on an issue of emerging concern. But making a difference in how we deploy AI calls for a deeper, more prolonged engagement, one that arouses a society’s ethical and political intelligence. We need to bring AI back onto the agenda of deliberative democracy. That project will take more than six months, but it will be wholly worth it.

Sheila Jasanoff, the Pforzheimer Professor of Science and Technology Studies, studies the role of science and technology in policy, law, and politics.


