This AI Paper Studies the Impact of Anonymization for Training Computer Vision Models with a Focus on Autonomous Vehicles Datasets

Image anonymization is the practice of modifying or removing sensitive information from images to protect privacy. While necessary for complying with privacy regulations, anonymization often reduces data quality, which hampers computer vision development. Several challenges exist, such as data degradation, balancing privacy and utility, creating efficient algorithms, and navigating ethical and legal issues. An appropriate trade-off must be found that protects privacy while still enabling computer vision research and applications.

Previous approaches to image anonymization include traditional methods such as blurring, masking, encryption, and clustering. Recent work focuses on realistic anonymization, using generative models to replace identities. However, many methods lack formal guarantees of anonymity, and other cues in the image can still reveal identity. Only a limited number of studies have explored the impact on computer vision models, with varying effects depending on the task. Public anonymized datasets are scarce.

In recent research, researchers from the Norwegian University of Science and Technology have directed their attention toward key computer vision tasks in the context of autonomous vehicles, specifically instance segmentation and human pose estimation. They have evaluated the performance of full-body and face anonymization models implemented in DeepPrivacy2, aiming to compare the effectiveness of realistic anonymization approaches against conventional methods.


The article proposes the following steps to assess the impact of anonymization (a minimal sketch of the pipeline appears after the list):

  • Anonymizing common computer vision datasets.
  • Training various models using anonymized data.
  • Evaluating the models on the original validation datasets.
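
Below is a minimal sketch of this three-step protocol in Python. The anonymize_split, train_model, and evaluate helpers, as well as the COCO-style directory layout, are hypothetical placeholders for whichever anonymization tool, training pipeline, and evaluator a given project actually uses; they are not functions from the paper.

from pathlib import Path

# Hypothetical helpers: in practice these would wrap an anonymization tool
# (e.g., a DeepPrivacy2-style model), the training loop of the segmentation
# or pose-estimation model, and a COCO-style evaluator, respectively.
def anonymize_split(src_dir: Path, dst_dir: Path, method: str) -> None:
    ...

def train_model(train_dir: Path, work_dir: Path):
    ...

def evaluate(checkpoint, val_dir: Path):
    ...

ORIGINAL_TRAIN = Path("data/coco/train2017")
ORIGINAL_VAL = Path("data/coco/val2017")

results = {}
for method in ("mask-out", "blur", "realistic"):
    anon_train = Path(f"data/coco_anonymized/{method}/train2017")
    anonymize_split(ORIGINAL_TRAIN, anon_train, method)           # 1. anonymize the training images
    checkpoint = train_model(anon_train, Path(f"runs/{method}"))  # 2. train a model on the anonymized data
    results[method] = evaluate(checkpoint, ORIGINAL_VAL)          # 3. evaluate on the original validation set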

The authors propose three full-body and face anonymization techniques: blurring, mask-out, and realistic anonymization. They define the anonymization region based on instance segmentation annotations. Traditional methods include masking out and Gaussian blur, while realistic anonymization uses pre-trained models from DeepPrivacy2. The authors also address global context issues in full-body synthesis through histogram equalization and latent optimization.
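As a rough illustration of the two traditional baselines, the sketch below applies mask-out and Gaussian blur only inside a segmentation mask, using NumPy and OpenCV. The fill value, blur strength, and toy rectangular mask are illustrative assumptions, not the paper's exact settings.

import cv2
import numpy as np

def mask_out(image: np.ndarray, region: np.ndarray) -> np.ndarray:
    # Replace every pixel inside the anonymization region with a constant fill.
    out = image.copy()
    out[region.astype(bool)] = 127  # mid-gray fill, chosen only for illustration
    return out

def blur_region(image: np.ndarray, region: np.ndarray, sigma: float = 9.0) -> np.ndarray:
    # Blur the whole image once, then copy the blurred pixels back only
    # inside the region given by the segmentation mask.
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    out = image.copy()
    m = region.astype(bool)
    out[m] = blurred[m]
    return out

# Toy example: random pixels stand in for an image, and a rectangle stands in
# for an instance-segmentation mask covering one person.
img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
person_mask = np.zeros((256, 256), dtype=np.uint8)
person_mask[64:192, 96:160] = 1

masked = mask_out(img, person_mask)
blurred = blur_region(img, person_mask)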

The authors conducted experiments to evaluate models trained on anonymized data using three datasets: COCO Pose Estimation, Cityscapes Instance Segmentation, and BDD100K Instance Segmentation. Face anonymization techniques showed no significant performance difference on the Cityscapes and BDD100K datasets. However, for COCO pose estimation, both mask-out and blurring led to a significant drop in performance due to the correlation between blurring/masking artifacts and the human body. Full-body anonymization, whether traditional or realistic, resulted in a decline in performance compared with the original datasets. Realistic anonymization performed better but still degraded results due to keypoint detection errors, synthesis limitations, and global context mismatch. The authors also explored the impact of model size and found that larger models performed worse for face anonymization on the COCO dataset. For full-body anonymization, both standard and multi-modal truncation methods improved performance.

To conclude, the study investigated the impact of anonymization on training computer vision models using autonomous vehicle datasets. Face anonymization had minimal effects on instance segmentation, while full-body anonymization significantly impaired performance. Realistic anonymization was superior to traditional methods but not a complete substitute for real data. The need for privacy protection that does not compromise model performance was highlighted. The study was limited by its reliance on existing annotations and on particular model architectures, calling for further research to improve anonymization techniques and address synthesis limitations. Challenges in synthesizing human figures for anonymization in autonomous vehicle scenes were also highlighted.

Check Out The Paper. Don’t forget to join our 25k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com




Mahmoud is a PhD researcher in machine learning. He also holds a bachelor’s degree in physical science and a master’s degree in telecommunications and networking systems. His current areas of research concern computer vision, stock market prediction and deep learning. He has produced several scientific articles about person re-identification and the study of the robustness and stability of deep networks.

