
Humans may be more likely to believe disinformation generated by AI


Disinformation generated by AI may be more convincing than disinformation written by humans, a new study suggests.

The research found that people were 3% less likely to spot false tweets generated by AI than those written by humans.

That credibility gap, while small, is concerning given that the problem of AI-generated disinformation seems poised to grow significantly, says Giovanni Spitale, the researcher at the University of Zurich who led the study, which appeared in Science Advances today.

“The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares,” he says. He believes that if the team repeated the study with the latest large language model from OpenAI, GPT-4, the difference would be even larger, given how much more powerful GPT-4 is.

To test our susceptibility to different types of text, the researchers chose common disinformation topics, including climate change and covid. Then they asked OpenAI’s large language model GPT-3 to generate 10 true tweets and 10 false ones, and collected a random sample of both true and false tweets from Twitter.

Next, they recruited 697 people to complete an online quiz judging whether tweets were generated by AI or collected from Twitter, and whether they were accurate or contained disinformation. They found that participants were 3% less likely to believe human-written false tweets than AI-written ones.

The researchers are unsure why people may be more likely to believe tweets written by AI. But the way in which GPT-3 orders information could have something to do with it, according to Spitale.

“GPT-3’s text tends to be a bit more structured compared with organic [human-written] text,” he says. “But it’s also condensed, so it’s easier to process.”

The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to produce false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns. The weapons to fight the problem, AI text-detection tools, are still in the early stages of development, and many are not entirely accurate.

OpenAI is aware that its AI tools could be weaponized to produce large-scale disinformation campaigns. Although this violates its policies, it released a report in January warning that it is “all but impossible to ensure that large language models are never used to generate disinformation.” OpenAI did not immediately respond to a request for comment.

However, the company has also urged caution when it comes to overestimating the impact of disinformation campaigns. Further research is needed to determine the populations at greatest risk from AI-generated inauthentic content, as well as the relationship between AI model size and the overall performance or persuasiveness of its output, the authors of OpenAI’s report say.

It’s too early to panic, says Jon Roozenbeek, a postdoctoral researcher who studies misinformation in the department of psychology at the University of Cambridge and who was not involved in the study.

Although distributing disinformation online may be easier and cheaper with AI than with human-staffed troll farms, moderation on tech platforms and automated detection systems are still obstacles to its spread, he says.

“Just because AI makes it easier to write a tweet that may be slightly more persuasive than whatever some poor sap in some factory in St. Petersburg came up with, it doesn’t necessarily mean that all of a sudden everyone is ripe to be manipulated,” he adds.
