How an “AI-tocracy” emerges

Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that aid their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country.

But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and, in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings.

What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”

The scholars call this scenario an “AI-tocracy,” describing the connected cycle in which increased deployment of the AI-driven technology quells dissent while also boosting the country’s innovation capacity.

The open-access paper, also titled “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics.

To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalogue instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020.
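As a rough illustration of how such an unrest catalogue could be assembled, the sketch below filters a GDELT-style event table for protest-coded events located in China between 2014 and 2020. This is not the authors’ actual pipeline; the column names follow GDELT’s published event schema, and treating CAMEO root code 14 (“protest”) as the relevant event class is an assumption made here.

```python
# Illustrative sketch only, not the authors' pipeline: filter a GDELT-style
# event table for protest events geolocated to China between 2014 and 2020.
# Column names follow GDELT's event schema; using CAMEO root code "14"
# (protest) as the relevant event class is an assumption made here.
import pandas as pd

def count_unrest_events(events: pd.DataFrame) -> int:
    """Count protest-coded events located in China, 2014-2020."""
    dates = pd.to_datetime(events["SQLDATE"].astype(str), format="%Y%m%d")
    mask = (
        (events["EventRootCode"].astype(str) == "14")       # CAMEO root code: protest
        & (events["ActionGeo_CountryCode"] == "CH")          # FIPS country code for China
        & (dates >= pd.Timestamp("2014-01-01"))
        & (dates <= pd.Timestamp("2020-12-31"))
    )
    return int(mask.sum())

# Tiny synthetic example rather than real GDELT rows:
demo = pd.DataFrame({
    "SQLDATE": [20150301, 20180715, 20120101],
    "EventRootCode": ["14", "14", "10"],
    "ActionGeo_CountryCode": ["CH", "CH", "CH"],
})
print(count_unrest_events(demo))  # -> 2
```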

The researchers then examined records of nearly 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public security tools, such as high-resolution video cameras, jumped significantly in the quarter following an episode of public unrest in that area.
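A stylized version of that comparison, and not the paper’s actual specification, would be a prefecture-by-quarter panel regression of new facial-recognition procurement on unrest in the previous quarter, with prefecture and quarter fixed effects. The sketch below runs such a regression on synthetic data; all variable names and magnitudes are invented for illustration.

```python
# Stylized sketch on synthetic data, not the paper's specification: does a
# prefecture's facial-recognition procurement rise in the quarter after unrest?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pref, n_qtr = 50, 24                      # hypothetical panel: 50 prefectures x 24 quarters
panel = pd.DataFrame({
    "prefecture": np.repeat(np.arange(n_pref), n_qtr),
    "quarter": np.tile(np.arange(n_qtr), n_pref),
})
panel["unrest"] = rng.poisson(1.0, size=len(panel))
panel["unrest_lag1"] = panel.groupby("prefecture")["unrest"].shift(1)
# Fabricated outcome: contracts respond to last quarter's unrest plus noise.
panel["fr_contracts"] = 0.5 * panel["unrest_lag1"].fillna(0) + rng.poisson(2.0, size=len(panel))

df = panel.dropna(subset=["unrest_lag1"])   # drop each prefecture's first quarter
model = smf.ols(
    "fr_contracts ~ unrest_lag1 + C(prefecture) + C(quarter)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["prefecture"]})
print(model.params["unrest_lag1"])          # positive: procurement rises after unrest
```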

Given that Chinese government officials were clearly responding to public dissent activities by ramping up their use of facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?

The scholars believe that it did, although, as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. As a method of getting at that question, however, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest. But in prefectures that had already invested heavily in facial-recognition technology, such weather conditions are less conducive to unrest compared to prefectures that had not made the same investments.
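One way to picture that comparison, offered here only as a hedged sketch rather than the authors’ estimating equation, is an interaction regression: unrest in a prefecture-quarter is regressed on an unrest-conducive weather shock and on that shock interacted with an indicator for heavy prior facial-recognition procurement, alongside fixed effects. A negative interaction coefficient would correspond to the pattern described above; the data and variable names below are synthetic.

```python
# Hedged sketch on synthetic data, not the authors' estimating equation: does an
# unrest-conducive weather shock raise unrest less in prefectures that had
# already procured facial-recognition AI heavily?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pref, n_qtr = 50, 24
df = pd.DataFrame({
    "prefecture": np.repeat(np.arange(n_pref), n_qtr),
    "quarter": np.tile(np.arange(n_qtr), n_pref),
})
df["weather_shock"] = rng.normal(size=len(df))                   # unrest-conducive weather
df["high_fr_ai"] = (df["prefecture"] < n_pref // 2).astype(int)  # heavy prior FR-AI procurement (invented split)
# Fabricated outcome: weather raises unrest, but less where FR-AI is deployed.
df["unrest"] = (
    0.8 * df["weather_shock"]
    - 0.5 * df["weather_shock"] * df["high_fr_ai"]
    + rng.normal(size=len(df))
)

# The level effect of high_fr_ai is absorbed by the prefecture fixed effects,
# so only its interaction with the weather shock enters the formula.
model = smf.ols(
    "unrest ~ weather_shock + weather_shock:high_fr_ai + C(prefecture) + C(quarter)",
    data=df,
).fit()
print(model.params["weather_shock:high_fr_ai"])  # negative: weather bites less where FR-AI is in place
```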

In so doing, the researchers also accounted for issues such as whether greater relative wealth levels in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. Still, the scholars reached the same conclusion: Facial-recognition technology was being deployed in response to past protests, and then reducing further protest levels.

“It suggests that the technology is effective in chilling unrest,” Beraja says.

Finally, the research team studied the effects of increased AI demand on China’s technology sector and found that the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For example, firms that are granted procurement contracts for facial-recognition technologies subsequently produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand.
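As a back-of-the-envelope illustration of that before-and-after comparison, the snippet below computes the percentage change in software products released in the two years after a contract relative to the two years before; the firm names and counts are made up and are not the study’s data.

```python
# Back-of-the-envelope illustration with made-up counts, not the study's data:
# change in software products released in the two years after a facial-recognition
# procurement contract relative to the two years before.
products_before = {"firm_a": 10, "firm_b": 4, "firm_c": 22}   # hypothetical 2-year counts pre-contract
products_after = {"firm_a": 15, "firm_b": 6, "firm_c": 33}    # hypothetical 2-year counts post-contract

total_before = sum(products_before.values())
total_after = sum(products_after.values())
pct_change = 100 * (total_after - total_before) / total_before
print(f"{pct_change:.0f}% more software products after the contract")  # 50% on these invented counts
```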

“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Such data, from China’s Ministry of Industry and Information Technology, also indicate that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.

Adding it all up, the case of China shows how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.

“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says.

The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have.

“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds.

Other experts in the societal applications of AI say the paper makes a valuable contribution to the field.

“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”

For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial-recognition technologies around the world, highlighting a mechanism through which government repression could grow globally.

Support for the research was provided, in part, by the U.S. National Science Foundation Graduate Research Fellowship Program; the Harvard Data Science Initiative; and the British Academy’s Global Professorships program.
