Daily Abstract Digest

[24.07.30 / CVPR'24] DiG-IN: Diffusion Guidance for Investigating Networks - Uncovering Classifier Differences, Neuron Visualizations, and Visual Counterfactual Explanations

Emos Yalp 2024. 7. 30. 11:31

https://openaccess.thecvf.com/content/CVPR2024/papers/Augustin_DiG-IN_Diffusion_Guidance_for_Investigating_Networks_-_Uncovering_Classifier_Differences_CVPR_2024_paper.pdf

Abstract

While deep learning has led to huge progress in complex image classification tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call into question how reliably these classifiers work in the wild. Furthermore, for safety-critical tasks the black-box nature of their decisions is problematic, and explanations or at least methods which make decisions plausible are needed urgently. In this paper, we address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation. We analyze the decisions of image classifiers by visual counterfactual explanations (VCEs), detection of systematic mistakes by analyzing images where classifiers maximally disagree, and visualization of neurons and spurious features. In this way, we validate existing observations, e.g. the shape bias of adversarially robust models, as well as novel failure modes, e.g. systematic errors of zero-shot CLIP classifiers. Moreover, our VCEs outperform previous work while being more versatile.

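The core of the approach is to steer image generation with gradients of a classifier-derived objective. The sketch below is my own illustration, not the authors' code: it shows only the objective/gradient step, using plain pixel-space gradient ascent with a torchvision classifier, whereas DiG-IN applies this kind of guidance to the latents of a diffusion model during generation. The target class, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch (assumption: not the authors' implementation) of optimizing a
# classifier-derived objective by gradient ascent. DiG-IN performs this guidance
# inside a diffusion model's generation process; here we only illustrate the
# objective and gradient step on raw pixels.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

device = "cuda" if torch.cuda.is_available() else "cpu"
classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()

# Start from noise (in the paper, generation starts from diffusion latents).
x = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=1e-2)

target_class = 207   # illustrative ImageNet class index
steps = 100          # illustrative step count

for step in range(steps):
    optimizer.zero_grad()
    # Classifier-derived objective: log-probability of the target class.
    inp = TF.normalize(x.clamp(0, 1),
                       mean=[0.485, 0.456, 0.406],
                       std=[0.229, 0.224, 0.225])
    log_probs = classifier(inp).log_softmax(dim=-1)
    loss = -log_probs[0, target_class]
    loss.backward()
    optimizer.step()
```

The same objective template covers the paper's use cases: a visual counterfactual explanation additionally penalizes the distance to a starting image, classifier-disagreement mining contrasts the predictions of two classifiers on the same generated image, and neuron visualization maximizes a chosen neuron's activation instead of a class logit.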

  • Task: Network Investigation (interpretation & explanation)
  • Problem Definition: unexpected failure modes in the wild, black-box decision making
  • Approach: Diffusion guidance with classifier-derived objectives & neuron visualization