Konstantin Kirchheim: Self-Assessment of Visual Recognition Systems based on Attribution. Otto-von-Guericke-University Magdeburg, 2019.

Abstract

Convolutional Neural Networks (CNNs) achieve state-of-the-art results in various visual recognition tasks such as object classification and object detection. While CNNs perform surprisingly well, it is difficult to retrace why they arrive at a certain prediction. Additionally, they have been shown to be prone to certain errors. As CNNs are increasingly deployed in physical systems, for example in self-driving vehicles, undetected errors could have catastrophic consequences. Approaches to prevent this include attribution-based explanation methods, which facilitate an understanding of the system's decisions in hindsight, as well as the detection of recognition errors at runtime, called self-assessment. Some state-of-the-art self-assessment approaches aim to detect anomalies in the activation patterns of neurons in a CNN.
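
For illustration, one common attribution method is vanilla gradient saliency: the gradient of the predicted class score with respect to the input pixels. The following is a minimal sketch of that idea only; the model (an untrained torchvision ResNet-18) and the specific attribution method are illustrative assumptions, not taken from the thesis.

    import torch
    import torchvision.models as models

    # Untrained weights for the sketch; in practice a trained model is used.
    model = models.resnet18().eval()

    def gradient_attribution(model, x):
        # "Vanilla gradient" saliency: attribution of the top class score
        # with respect to the input pixels.
        x = x.clone().requires_grad_(True)
        scores = model(x)                        # (1, num_classes) logits
        top_class = scores.argmax(dim=1).item()
        scores[0, top_class].backward()          # d(score) / d(input)
        return x.grad.abs()                      # saliency map, same shape as x

    x = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
    attribution = gradient_attribution(model, x)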
This work explores the use of attribution-based explanations for the self-assessment of CNNs. We build multiple self-assessment models and evaluate their performance in various settings. In our experiments, we find that while self-assessment based on attribution alone does not outperform self-assessment based on neural activity, it always surpasses random guessing. Furthermore, we find that self-assessment models that use neural activation patterns as well as neural attribution can in some cases outperform models that do not consider attribution patterns. Thus, we conclude that it might be possible to improve self-assessment models by including the model's explanation in the assessment process.
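
As a hedged sketch of the self-assessment idea described above: features derived from neural activations and attribution maps could be concatenated and passed to an off-the-shelf anomaly detector. The random stand-in features and the IsolationForest detector below are illustrative assumptions, not the thesis's actual design.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-ins for features extracted from the CNN on known-correct
    # predictions, e.g. pooled penultimate-layer activations and pooled
    # attribution maps.
    train_activations = rng.normal(size=(500, 64))
    train_attributions = rng.normal(size=(500, 16))
    train_features = np.concatenate([train_activations, train_attributions], axis=1)

    # Fit an anomaly detector on the joint activation/attribution patterns.
    detector = IsolationForest(random_state=0).fit(train_features)

    # At runtime, score the features of a new prediction; predict() returns
    # 1 for inliers and -1 for anomalies, i.e. predictions not to be trusted.
    new_features = rng.normal(size=(1, 80))
    trusted = detector.predict(new_features)[0] == 1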

BibTeX (Download)

@mastersthesis{Kirchheim2019,
title = {Self-Assessment of Visual Recognition Systems based on Attribution},
author = {Konstantin Kirchheim},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/01/MA_2019_KonstantinKirchheim.pdf},
year  = {2019},
date = {2019-12-09},
school = {Otto-von-Guericke-University Magdeburg},
abstract = {Convolutional Neural Networks (CNNs) achieve state-of-the-art results in various visual recognition tasks such as object classification and object detection. While CNNs perform surprisingly well, it is difficult to retrace why they arrive at a certain prediction. Additionally, they have been shown to be prone to certain errors. As CNNs are increasingly deployed in physical systems, for example in self-driving vehicles, undetected errors could have catastrophic consequences. Approaches to prevent this include attribution-based explanation methods, which facilitate an understanding of the system's decisions in hindsight, as well as the detection of recognition errors at runtime, called self-assessment. Some state-of-the-art self-assessment approaches aim to detect anomalies in the activation patterns of neurons in a CNN.
This work explores the use of attribution-based explanations for the self-assessment of CNNs. We build multiple self-assessment models and evaluate their performance in various settings. In our experiments, we find that while self-assessment based on attribution alone does not outperform self-assessment based on neural activity, it always surpasses random guessing. Furthermore, we find that self-assessment models that use neural activation patterns as well as neural attribution can in some cases outperform models that do not consider attribution patterns. Thus, we conclude that it might be possible to improve self-assessment models by including the model's explanation in the assessment process.},
keywords = {Novelty Detection, Open Set Recognition, Out-of-Distribution Detection, Outlier Detection, Recognition, Self-Assessment},
pubstate = {published},
tppubtype = {mastersthesis}
}