Saturday, December 4, 2021

Jota’s ML Advent Calendar – 04/December

 Today’s post is a short note on adversarial attacks in Computer Vision, prompted by a new dataset and article on arXiv from researchers at Scale AI, the Allen Institute for AI, and ML Collective: “Natural Adversarial Objects” ([2111.04204v1] Natural Adversarial Objects (arxiv.org)). The word “Natural” comes from the fact that these 7,934 photos were not artificially or intentionally crafted to cause detection failures; they were selected because they are mislabeled by seven object detection models (including YOLOv3). And “Objects” in the title signals that the analysis focuses on object detection scenarios – not image classification.

The authors then measured mean average precision (mAP) on this dataset versus the MSCOCO dataset, and the difference in performance is huge (e.g., 74.5% worse)!
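For anyone curious about how such a comparison is typically run, here is a minimal sketch of computing COCO-style mAP with pycocotools, assuming you already have ground-truth annotations and your model’s detections exported as COCO-format JSON files (the file names below are placeholders, not the ones from the paper):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Load ground-truth annotations and the model's detections (COCO JSON format)
    coco_gt = COCO("nao_annotations.json")               # placeholder file name
    coco_dt = coco_gt.loadRes("model_detections.json")   # placeholder file name

    # Run the standard COCO bounding-box evaluation
    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()

    # evaluator.stats[0] is mAP @ IoU=0.50:0.95 - compare this value across datasets
    print("mAP:", evaluator.stats[0])

Running the same evaluation twice – once with the MSCOCO annotations/detections and once with the NAO ones – gives the kind of side-by-side mAP comparison the authors report.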

The NAO dataset is available here, for anyone interested: Natural Adversarial Objects - Google Drive.

And continuing onto a related topic, what the above also shows is that when evaluating trained models it isn’t enough to check how good a single aggregate metric is (like mAP, F1, or AUC); it’s also important to look at how the errors are distributed. And to do exactly that (surprise!) Microsoft actually has a Python package: the Error Analysis component of the Responsible AI Widgets, which includes capabilities for visual analysis and exploration of a model’s errors. More information and sample notebooks are available here: https://github.com/microsoft/responsible-ai-toolbox/blob/main/docs/erroranalysis-dashboard-README.md .
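As a rough idea of what using it looks like, here is a minimal sketch for a scikit-learn classifier, assuming the raiwidgets package is installed (pip install raiwidgets); the exact argument names may vary between versions, so check the README linked above:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from raiwidgets import ErrorAnalysisDashboard

    # Train a simple model on a toy dataset
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Launch the Error Analysis dashboard to explore where the model fails
    # (tree map / heat map of error rates across cohorts of the test data)
    ErrorAnalysisDashboard(
        model=model,
        dataset=X_test,
        true_y=y_test,
        features=data.feature_names)

The dashboard then lets you slice the test data into cohorts and see visually which regions of the feature space concentrate the errors, instead of relying on a single overall number.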
