Bans on deepfakes take us only so far—here’s what we really need

There was some really encouraging news in the fight against deepfakes. A few weeks ago the US Federal Trade Commission announced it’s finalizing rules banning the use of deepfakes that impersonate people. Leading AI startups and big tech companies also unveiled their voluntary commitments to combating the deceptive use of AI in 2024 elections. And last Friday, a bunch of civil society groups, including the Future of Life Institute, SAG-AFTRA, and Encode Justice, came out with a new campaign calling for a ban on deepfakes.

These initiatives are a great start and raise public awareness—but the devil will be in the details. Existing rules in the UK and some US states already ban the creation and/or dissemination of deepfakes. The FTC rules would make it illegal for AI platforms to create content that impersonates people and would allow the agency to force scammers to return the money they made from such scams.

But there’s a big elephant in the room: outright bans may not even be technically feasible. There is no button someone can flick on and off, says Daniel Leufer, a senior policy analyst at the digital rights organization Access Now.

That’s because the genie is out of the bottle.

Big Tech gets a lot of heat for the harm deepfakes cause, but to their credit, these companies do try to use their content moderation systems to detect and block attempts to generate, say, deepfake porn. (That’s not to say they’re perfect. The deepfake porn targeting Taylor Swift reportedly came from a Microsoft system.)

The bigger problem is that most of the harmful deepfakes come from open-source systems or systems built by state actors, and they are disseminated on end-to-end encrypted platforms such as Telegram, where they cannot be traced.

Regulation really needs to tackle every actor in the deepfake pipeline, says Leufer. That would mean holding companies big and small accountable for allowing not only the creation of deepfakes but also their spread. So “model marketplaces,” such as Hugging Face or GitHub, may need to be included in talks about regulation to slow the spread of deepfakes.

These model marketplaces make it easy to access open-source models such as Stable Diffusion, which people can use to build their own deepfake apps. These platforms are already taking action. Hugging Face and GitHub have put in place measures that add friction to the processes people use to access tools and make harmful content. Hugging Face is also a vocal proponent of OpenRAIL licenses, which make users commit to using the models in a certain way. The company also allows people to automatically integrate provenance data that meets high technical standards into their workflow.
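To give a rough sense of what that friction can look like in practice, here is a minimal, hypothetical sketch using the huggingface_hub Python client. The repo name is a placeholder, and exactly which models are gated (and on what terms) varies; the point is simply that downloading a gated model fails unless you have logged in and accepted the license terms on the model page first.

```python
from huggingface_hub import login, snapshot_download

# Hypothetical example: "some-org/some-gated-model" is only a placeholder
# for whichever gated model a user is trying to fetch.
login()  # prompts for a personal access token tied to your account

# Fails with an access error unless the account has accepted the model's
# license terms on the Hub — one example of the "friction" described above.
local_dir = snapshot_download(repo_id="some-org/some-gated-model")
print(local_dir)
```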

Other popular solutions include better watermarking and content provenance techniques, which can help with detection. But these detection tools are no silver bullet.

Rules that require all AI-generated content to be watermarked are impossible to enforce, and it’s also highly possible that watermarks could end up doing the opposite of what they’re supposed to do, Leufer says. For one thing, in open-source systems, watermarking and provenance techniques can be removed by bad actors. That’s because everyone has access to the model’s source code, so specific users can simply remove any techniques they don’t want.
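To see why removal is so trivial, here is a minimal sketch of the kind of watermarking step open-source image generators ship with. Stable Diffusion’s reference scripts, for instance, stamp outputs using the invisible-watermark library; the file names and watermark text below are placeholders of my own, not the real pipeline code. Because the whole step lives in code the user controls, deleting the encoding call is all it takes to produce unmarked images.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

# Embed an invisible watermark into a generated image (placeholder file names).
image = cv2.imread("generated.png")        # BGR image as produced by the pipeline
wm_text = "ExampleWatermark"               # placeholder payload

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", wm_text.encode("utf-8"))
marked = encoder.encode(image, "dwtDct")   # frequency-domain watermark
cv2.imwrite("generated_marked.png", marked)

# Anyone downstream can check for the watermark...
decoder = WatermarkDecoder("bytes", len(wm_text) * 8)  # payload length in bits
recovered = decoder.decode(marked, "dwtDct")
print(recovered.decode("utf-8"))

# ...but because this step sits in user-editable code, a bad actor can simply
# delete the encode() call and distribute unwatermarked images instead.
```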

If only the biggest companies or most popular proprietary platforms offer watermarks on their AI-generated content, then the absence of a watermark could come to suggest that content is not AI generated, says Leufer.

“Enforcing watermarking on all of the content that you can enforce it on would actually lend credibility to the most harmful stuff that’s coming from the systems that we can’t intervene in,” he says.

I asked Leufer if there are any promising approaches he sees out there that give him hope. He paused to think and finally suggested looking at the bigger picture. Deepfakes are just another symptom of the problems we’ve had with information and disinformation on social media, he said: “This could be the thing that tips the scales to really do something about regulating these platforms and drives a push to really allow for public understanding and transparency.”


Now read the rest of The Algorithm

Deeper Learning

Watch this robot as it learns to stitch up wounds

An AI-trained surgical robot that can make a few stitches on its own is a small step toward systems that can aid surgeons with such repetitive tasks. A video taken by researchers at the University of California, Berkeley, shows the two-armed robot completing six stitches in a row on a simple wound in imitation skin, passing the needle through the tissue and from one robotic arm to the other while maintaining tension on the thread.

A helping hand: Though many doctors today get help from robots for procedures ranging from hernia repairs to coronary bypasses, those are used to assist surgeons, not replace them. This new research marks progress toward robots that can operate more autonomously on very intricate, complicated tasks like suturing. The lessons learned in its development may be useful in other fields of robotics. Read more from James O’Donnell here.

Bits and Bytes

Wikimedia’s CTO: In the age of AI, human contributors still matter
Selena Deckelmann argues that in this era of machine-generated content, Wikipedia becomes even more valuable. (MIT Technology Review)

Air Canada has to honor a refund policy its chatbot made up
The airline was forced to give a customer a partial refund after its customer service chatbot inaccurately explained the company’s bereavement travel policy. Expect more cases like this as long as the tech sector sells chatbots that still make things up and have security flaws. (Wired)

Reddit has a new AI training deal to sell user content
The company has struck a $60 million deal to give an unnamed AI company access to the user-created content on its platform. OpenAI and Apple have reportedly been knocking on publishers’ doors trying to strike similar deals. Reddit’s human-written content is a gold mine for AI companies looking for high-quality training data for their language models. (Bloomberg)

Google pauses Gemini’s ability to generate AI images of people after diversity errors
It’s no surprise that AI models are biased. I’ve written about how they’re outright racist. But Google’s effort to make its model more inclusive backfired after the model flat-out refused to generate images of white people. (The Verge)

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users
Last week, a bug made the popular chatbot produce bizarre and random responses to user queries. (Ars Technica)
