Anonymization is a critical problem in the context of face recognition and identification algorithms. As these technologies are increasingly productized, ethical concerns have emerged about people's privacy and security. The ability to recognize and identify individuals from their facial features raises questions about consent, control over personal data, and potential misuse. Current tagging systems in social networks fail to adequately address the problem of unwanted or unapproved faces appearing in photos.
Controversies and ethical concerns have marred the state of the art in face recognition and identification. Earlier systems lacked proper generalization and accuracy guarantees, leading to unintended consequences. Counter-manipulation techniques such as blurring and masking have been employed to throw off face recognition, but they alter the image content and are easily detectable. Adversarial generation and obfuscation methods have also been developed, yet face recognition algorithms continue to improve their resistance to such attacks.
In this context, a new article recently published by a research team from Binghamton University proposes a privacy-enhancing system that leverages deepfakes to mislead face recognition systems without breaking image continuity. They introduce the concept of "My Face My Choice" (MFMC), where individuals control which photos they appear in, with their faces replaced by dissimilar deepfakes for unauthorized viewers.
The proposed method, MFMC, aims to create deepfake versions of photos containing multiple people, based on complex access rights granted by the individuals in the image. The system operates within a social photo-sharing network where access rights are defined per face rather than per image. When a picture is uploaded, friends of the uploader can be tagged, while the remaining faces are replaced with deepfakes. These deepfakes are carefully chosen according to several metrics, ensuring they are quantitatively dissimilar to the original faces while maintaining contextual and visual continuity. The authors conduct extensive evaluations across datasets, deepfake generators, and face recognition approaches to verify the effectiveness and quality of the proposed system. MFMC represents a significant advance in using face embeddings to create useful deepfakes as a defense against face recognition algorithms.
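The core selection step, picking a replacement face that is quantitatively dissimilar to the original, can be sketched with face embeddings. The sketch below is illustrative, not the paper's code: it assumes embeddings come from some face encoder (e.g., ArcFace or FaceNet) and uses cosine distance as the dissimilarity metric; the toy 3-D vectors stand in for real encoder outputs.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance in [0, 2]; larger means more dissimilar."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def select_dissimilar_target(source_emb, candidate_embs):
    """Index of the candidate embedding farthest from the source face."""
    dists = [cosine_distance(source_emb, c) for c in candidate_embs]
    return int(np.argmax(dists))

# Toy 3-D embeddings standing in for real face-encoder outputs.
source = np.array([1.0, 0.0, 0.0])
candidates = np.array([
    [0.9, 0.1, 0.0],   # nearly identical identity
    [-1.0, 0.0, 0.0],  # maximally dissimilar
    [0.0, 1.0, 0.0],   # orthogonal
])
best = select_dissimilar_target(source, candidates)
print(best)  # -> 1
```

Choosing the farthest embedding is what makes the swapped face hard for a recognizer to link back to the real identity, while the deepfake generator is responsible for keeping pose, lighting, and expression consistent with the scene.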
The article spells out the requirements for a deepfake generator that can transfer the identity of a synthetic target face onto an original source face while preserving facial and environmental attributes. The authors integrate multiple deepfake generators, such as Nirkin et al., FTGAN, FSGAN, and SimSwap, into their framework. They also introduce three access models: Disclosure by Proxy, Disclosure by Explicit Authorization, and Access Rule Based Disclosure, which balance social media participation against individual privacy.
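The three access models amount to a per-face decision: does this viewer see the real face or a deepfake? A minimal sketch of that decision logic is shown below; the class names, fields, and model identifiers are hypothetical stand-ins, not the paper's actual data structures.

```python
# Hypothetical sketch of the three MFMC access models described above.
# All names here are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import Callable, Optional, Set

@dataclass
class FacePolicy:
    owner: str
    model: str                                          # "proxy" | "explicit" | "rule"
    authorized: Set[str] = field(default_factory=set)   # used by "explicit"
    rule: Optional[Callable[[str], bool]] = None        # used by "rule"

def can_see_real_face(policy: FacePolicy, viewer: str,
                      uploader_friends: Set[str]) -> bool:
    """Decide, per face, whether a viewer sees the real face or a deepfake."""
    if viewer == policy.owner:
        return True
    if policy.model == "proxy":
        # Disclosure by Proxy: trust the uploader's friend list.
        return viewer in uploader_friends
    if policy.model == "explicit":
        # Disclosure by Explicit Authorization: only viewers named by the owner.
        return viewer in policy.authorized
    if policy.model == "rule":
        # Access Rule Based Disclosure: arbitrary per-owner predicate.
        return policy.rule is not None and policy.rule(viewer)
    return False

friends = {"bob", "carol"}
proxy_policy = FacePolicy(owner="alice", model="proxy")
print(can_see_real_face(proxy_policy, "bob", friends))      # -> True
print(can_see_real_face(proxy_policy, "mallory", friends))  # -> False
```

Because the policy attaches to each face rather than to the image, the same uploaded photo can render differently for every viewer, with unauthorized viewers receiving the deepfaked version.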
The evaluation of the MFMC system measures the reduction in face recognition accuracy across seven state-of-the-art face recognition systems and compares the results with existing privacy-preserving face alteration methods such as CIAGAN and DeepPrivacy. The evaluation demonstrates MFMC's effectiveness in reducing face recognition accuracy and highlights its advantages over other methods in system design, productization, and evaluation against face recognition systems.
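The evaluation protocol can be sketched in miniature: run the same recognizer on original and deepfaked photos and report the accuracy drop. The snippet below is a toy illustration under that assumption; the identity labels are fabricated stand-ins, not the paper's data.

```python
# Toy sketch of the accuracy-reduction metric: compare a recognizer's
# accuracy on original photos versus MFMC-style deepfaked photos.
def recognition_accuracy(predicted, true):
    """Fraction of faces the recognizer identifies correctly."""
    correct = sum(p == t for p, t in zip(predicted, true))
    return correct / len(true)

true_ids        = ["a", "b", "c", "d"]
preds_original  = ["a", "b", "c", "d"]   # recognizer succeeds on originals
preds_deepfaked = ["x", "b", "y", "z"]   # mostly fails after face swaps

acc_before = recognition_accuracy(preds_original, true_ids)
acc_after  = recognition_accuracy(preds_deepfaked, true_ids)
print(f"accuracy drop: {acc_before - acc_after:.2f}")  # -> accuracy drop: 0.75
```

A larger drop means the deepfaked faces are harder to link back to the real identities, which is exactly the privacy goal being measured.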
In conclusion, the article presents the MFMC system as a novel approach to the privacy concerns raised by face recognition and identification algorithms. By leveraging deepfakes and access rights granted by individuals, MFMC lets users control which photos they appear in, replacing their faces with dissimilar deepfakes for unauthorized viewers. The evaluation demonstrates MFMC's effectiveness in reducing face recognition accuracy, surpassing existing privacy-preserving face alteration methods. This research is a significant step toward enhancing privacy in the era of face recognition technology and opens possibilities for further advances in the field.
Check Out the Paper.
Mahmoud is a PhD researcher in machine learning. He also holds a bachelor's degree in physical science and a master's degree in telecommunications and networking systems. His current research concerns computer vision, stock market prediction, and deep learning. He has produced several scientific articles on person re-identification and on the robustness and stability of deep networks.