
This new tool could protect your pictures from AI manipulation

Remember that selfie you posted last week? There’s currently nothing stopping someone from taking it and editing it with powerful generative AI systems. Even worse, thanks to the sophistication of those systems, it might be impossible to prove that the resulting image is fake.

The good news is that a new tool, created by researchers at MIT, could prevent this.

The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but that prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been “immunized” by PhotoGuard, the result will look unrealistic or warped.

Right now, “anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us,” says Hadi Salman, a PhD researcher at MIT who contributed to the research, which was presented at the International Conference on Machine Learning this week.

PhotoGuard is “an attempt to solve the problem of our images being manipulated maliciously by these models,” says Salman. The tool could, for example, help prevent women’s selfies from being turned into nonconsensual deepfake pornography.

The need to find ways to detect and stop AI-powered manipulation has never been more urgent, because generative AI tools have made it quicker and easier to do than ever before. In a voluntary pledge with the White House, leading AI companies such as OpenAI, Google, and Meta committed to developing such methods in an effort to prevent fraud and deception. PhotoGuard is a complementary technique to another of these methods, watermarking: it aims to stop people from using AI tools to tamper with images in the first place, whereas watermarking uses similar invisible signals to allow people to detect AI-generated content once it has been created.

The MIT team used two different techniques to stop images from being edited using the open-source image generation model Stable Diffusion. 

The first technique is called an encoder attack. PhotoGuard adds imperceptible signals to the image so that the AI model interprets it as something else. For example, these signals could cause the AI to categorize an image of, say, Trevor Noah as a block of pure gray. As a result, any attempt to use Stable Diffusion to edit Noah into other situations would look unconvincing.
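
To make the idea concrete, here is a minimal sketch of what an encoder-style attack can look like in PyTorch. It assumes the Stable Diffusion VAE encoder from the Hugging Face diffusers library; the model name, perturbation budget, step size, and iteration count are illustrative placeholders rather than PhotoGuard’s actual settings. The goal is simply to nudge the photo, within an invisibly small pixel budget, until its latent representation matches that of a plain gray image.

```python
# Illustrative encoder-attack sketch (not the authors' code). Requires
# `torch` and `diffusers`; all hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to(device)
vae.requires_grad_(False)  # the model is frozen; only the perturbation is optimized

def immunize_encoder(image: torch.Tensor, eps=0.05, step=0.01, iters=200):
    """Add an imperceptible perturbation so the VAE encoder maps `image`
    close to the latent of a plain gray image.
    `image` is a float tensor in [0, 1] with shape (1, 3, H, W)."""
    gray = torch.full_like(image, 0.5)                      # target: pure gray
    with torch.no_grad():
        target_latent = vae.encode(gray * 2 - 1).latent_dist.mean
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode((image + delta) * 2 - 1).latent_dist.mean
        loss = F.mse_loss(latent, target_latent)            # pull the latent toward gray
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()               # projected gradient step
            delta.clamp_(-eps, eps)                         # stay within the invisible budget
            delta.add_(image).clamp_(0, 1).sub_(image)      # keep pixel values valid
            delta.grad.zero_()
    return (image + delta).detach()

# Hypothetical usage: immunized = immunize_encoder(photo_tensor.to(device))
```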

The second, more effective technique is called a diffusion attack. It disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they’re processed by the model. By adding these signals to an image of Trevor Noah, the team managed to manipulate the diffusion model into ignoring its prompt and generating the image the researchers wanted. As a result, any AI-edited images of Noah would just look gray.
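
The diffusion attack is costlier because it requires backpropagating through the generation process itself. The sketch below, again only illustrative and not the authors’ implementation, shows the general shape under simplifying assumptions: a stripped-down, differentiable image-to-image loop with only a handful of denoising steps and no classifier-free guidance, with the perturbation optimized so that the edited output lands near a gray target. The model name, step counts, and budgets are assumptions, not the paper’s values.

```python
# Illustrative diffusion-attack sketch (not the authors' code). It optimizes
# the perturbation against a short, differentiable img2img loop.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionImg2ImgPipeline, DDIMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
for m in (pipe.vae, pipe.unet, pipe.text_encoder):
    m.requires_grad_(False)  # models stay frozen; only the perturbation is trained

def differentiable_edit(image, prompt_embeds, num_steps=4, strength=0.5):
    """A stripped-down img2img loop (few steps, no classifier-free guidance)
    so gradients can flow from the edited output back to the input pixels."""
    pipe.scheduler.set_timesteps(num_steps, device=device)
    timesteps = pipe.scheduler.timesteps[int(num_steps * (1 - strength)):]
    scale = pipe.vae.config.scaling_factor
    latents = pipe.vae.encode(image * 2 - 1).latent_dist.mean * scale
    latents = pipe.scheduler.add_noise(latents, torch.randn_like(latents), timesteps[:1])
    for t in timesteps:
        noise_pred = pipe.unet(latents, t, encoder_hidden_states=prompt_embeds).sample
        latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample
    return (pipe.vae.decode(latents / scale).sample + 1) / 2   # back to [0, 1]

def immunize_diffusion(image, prompt="a photo", eps=0.05, step=0.01, iters=50):
    """Perturb `image` (float tensor in [0, 1], shape (1, 3, H, W)) so that
    editing it with the loop above produces something close to plain gray."""
    tokens = pipe.tokenizer(prompt, padding="max_length",
                            max_length=pipe.tokenizer.model_max_length,
                            return_tensors="pt").input_ids.to(device)
    prompt_embeds = pipe.text_encoder(tokens)[0]
    gray = torch.full_like(image, 0.5)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = differentiable_edit(image + delta, prompt_embeds)
        loss = F.mse_loss(edited, gray)                     # push the *output* toward gray
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.add_(image).clamp_(0, 1).sub_(image)
            delta.grad.zero_()
    return (image + delta).detach()
```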

The work is “a good combination of a tangible need for something with what can be done right now,” says Ben Zhao, a computer science professor at the University of Chicago, who developed a similar protective method called Glaze that artists can use to prevent their work from being scraped into AI models.

Tools like PhotoGuard change the economics and incentives for attackers by making it harder to use AI in malicious ways, says Emily Wenger, a research scientist at Meta, who also worked on Glaze and has developed methods to prevent facial recognition.

“The higher the bar is, the fewer the people willing or able to overcome it,” Wenger says.

One challenge will be to see how this technique transfers to other models out there, Zhao says. The researchers have published a demo online that lets people immunize their own photos, but for now it works reliably only on Stable Diffusion.

And while PhotoGuard may make it harder to tamper with new pictures, it doesn’t provide complete protection against deepfakes, because users’ old images may still be available for misuse, and there are other ways to produce deepfakes, says Valeriia Cherepanova, a PhD researcher at the University of Maryland who has developed techniques to protect social media users from facial recognition.

In theory, people could apply this protective shield to their images before they upload them online, says Aleksander Madry, a professor at MIT who contributed to the research. But a more practical approach would be for tech companies to add it automatically to the images people upload to their platforms, he adds.

It’s an arms race, however. While they’ve pledged to improve protective methods, tech companies are still developing new, better AI models at breakneck speed, and new models might be able to override any new protections.

The best scenario, Salman says, would be if the companies developing AI models also provided a way for people to immunize their images that works with every updated AI model.

Trying to protect images from AI manipulation at the source is a much more viable option than trying to use unreliable methods to detect AI tampering after the fact, says Henry Ajder, an expert on generative AI and deepfakes.

Any social media platform or AI company “should be thinking about protecting users from being targeted by [nonconsensual] pornography or their faces being cloned to create defamatory content,” he says.
