Who Will Protect Us from AI-Generated Disinformation?

Generative AI has gone from zero to 100 in under a year. While still early, it has shown its potential to transform business. That much we can all agree on. Where we diverge is on how to contain the hazards it poses.

To be clear, I'm pro-innovation and far from a fearmonger. But the recent uptick in misinformation, much of it aimed at polarizing people around the controversial issues of the moment, has made it clear that, if left unchecked, gen AI could wreak havoc on societies.

We’ve seen this movie before with social media, but it took years and hard lessons for us to wake up to its flaws. We’ve (presumably) learned something. The question today is who will help stem the tide of reality distortion from gen AI, and how?

Predictably, governments are starting to act. Europe is leading the charge, as it increasingly has on regulating tech. The US is right behind, with President Biden issuing an executive order this past October.

But it will take a global village acting together to “keep gen AI honest.” And before government can help, it needs to understand the limitations of the available approaches.

The identity problem has gotten much worse

In this new world, truth becomes the needle in a haystack of opinions masquerading as facts. Knowing where content comes from matters more than ever.

And it’s not as simple as decreeing that every social media account must be identity-verified. There is fierce opposition to that, and in some cases anonymity is justifiably needed to protect account holders. Furthermore, many consumers of the worst content don’t care whether it is credible, or where it came from.

Despite those caveats, the potential role of identity in dealing with gen AI is underappreciated. Skeptics, hear me out.

Let’s imagine that regulation or social conscience causes platforms to give every account holder these choices:

  1. Verify their identity or not, and
  2. Publicly reveal their verified identity, or simply be labeled “ID Verified”

Then the social media audience can better judge who is credible. Equally important, if not more so, identity supports accountability. Platforms can decide what actions to take against serial “disinformers” and repeat abusers of AI-generated content, even if they pop up under different account names.
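As a sketch of how those two choices might translate into a post label, here is a minimal example. The account fields and label strings are hypothetical illustrations, not any platform’s actual policy or API:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    id_verified: bool       # passed the platform's identity check
    reveal_identity: bool   # opted to display their verified name publicly
    verified_name: str = ""

def post_byline(account: Account) -> str:
    """Return the byline shown on a post under the hypothetical policy."""
    if not account.id_verified:
        # Choice 1 declined: the audience sees the account is unverified.
        return f"{account.handle} (Unverified)"
    if account.reveal_identity:
        # Choice 2, option A: verified identity shown in full.
        return f"{account.handle} ({account.verified_name}, ID Verified)"
    # Choice 2, option B: verified but pseudonymous.
    return f"{account.handle} (ID Verified)"
```

The point of the sketch is that even the pseudonymous branch still carries an accountability signal: the platform knows who is behind the handle, even if readers do not.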

With gen AI raising the stakes, I believe that identity, knowing exactly who posted what, is critical. Some will oppose it, and identity is not a comprehensive answer. Indeed, no solution will satisfy all stakeholders. But if regulation compels the platforms to offer identity verification to all accounts, I’m convinced the impact will be a big positive.

The moderation conundrum

Content moderation, both automated and human, is the last line of defense against undesirable content. Human moderation is a rough job, carrying the risk of psychological harm from exposure to the worst humanity has to offer. It’s also expensive and often accused of the very biased censorship the platforms strive to cut down on.

Automated moderation scales beyond human capability to handle the torrents of new content, but it fails to understand context (memes being a common example) and cultural nuances. Both types of moderation are critical and necessary, but they’re only part of the answer.

The oft-heard, conventional prescription for controlling gen AI is: “Collaboration between tech leaders, government, and civil society is needed.” Sure, but what specifically?

Governments, for their part, can push social and media platforms to offer identity verification and prominently display it on all posts. Regulators can also pave the way to credibility metrics that actually help gauge whether a source is believable. Collaboration is necessary to develop universal standards that give specific guidance and direction so the private sector doesn’t have to guess.

Finally, should it be illegal to create malicious AI output? Laws banning content intended for criminal use could reduce the volume of toxic content and lighten the load on moderators. I don’t see regulation and legislation as capable of defeating disinformation, but they are essential in confronting the threat.

The sunny side of the street: innovation

The promise of innovation makes me an optimist here. We can’t expect politicians or platform owners to fully protect us against AI-generated deception. They leave a big gap, and that is precisely what will spur the invention of new technology to authenticate content and detect fakery.

Since we now know the downsides of social media, we’ve been quick to recognize that generative AI could become a huge net-negative for humanity, with its ability to polarize and mislead.

Optimistically, I see benefits in multi-pronged approaches where control methods work together: first at the source, limiting the creation of content designed for illegal use; then, prior to publication, verifying the identity of those who decline anonymity; next, clear labeling to indicate credibility ratings and the poster’s identity or lack thereof; finally, automated and human moderation to filter out some of the worst. I’d expect new authentication technology to come online soon.
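The layered approach above can be pictured as a simple pipeline, where each stage either passes a post along or stops it. This is an illustrative sketch only; the stage names, fields, and thresholds are assumptions for demonstration, not a real moderation system:

```python
def source_filter(post):
    # Stage 1: at the source, block content flagged as created for illegal use.
    return None if post.get("illegal_use") else post

def identity_stage(post):
    # Stage 2: before publication, record whether the poster verified identity.
    post["label"] = "ID Verified" if post.get("verified") else "Unverified"
    return post

def moderation_stage(post):
    # Stage 3: automated/human moderation filters out the worst of what remains
    # (a toxicity score above 0.9 is an arbitrary illustrative cutoff).
    return None if post.get("toxicity", 0.0) > 0.9 else post

def pipeline(post):
    """Run a post through all control layers; None means it was rejected."""
    for stage in (source_filter, identity_stage, moderation_stage):
        post = stage(post)
        if post is None:
            return None
    return post
```

The design point is that no single stage has to be perfect: content that slips past the source filter can still be labeled at the identity stage and caught by moderation at the end.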

Add it all up, and we’ll have a much better, though never perfect, solution. Meanwhile, we should build up our own skills for figuring out what’s real, who’s telling the truth, and who’s trying to fool us.
