As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education, and science. But, as with any new technology, it is worth considering how they might be misused. Against the backdrop of recurring online influence operations—covert or deceptive efforts to influence the opinions of a target audience—the paper asks:
How might language models change influence operations, and what steps might be taken to mitigate this threat?
Our work brought together different backgrounds and expertise—researchers grounded in the tactics, techniques, and procedures of online disinformation campaigns, along with machine learning experts in generative artificial intelligence—to base our analysis on trends in both domains.
We believe it is critical to analyze the threat of AI-enabled influence operations and to outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers who are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.