000 02135naa a2200289 a 4500
003 AR-LpUFIB
005 20250311170534.0
008 230201s2024 xx o 000 0 spa d
024 8 _aDIF-M8955
_b9183
_zDIF008217
040 _aAR-LpUFIB
_bspa
_cAR-LpUFIB
100 1 _aStanchi, Oscar
245 1 0 _aQuantitative evaluation of white & black box interpretability methods for image classification
300 _a1 file (1.37 MB)
500 _aPDF file format. -- This document is intellectual production of the Facultad de Informática - UNLP (BIPA Collection/Biblioteca)
520 _aThe field of interpretability in Deep Learning faces significant challenges due to the lack of standard metrics for systematically evaluating and comparing interpretability methods. The absence of quantifiable measures impedes practitioners' ability to select the most suitable methods and models for their specific tasks. To address this issue, we propose the Pixel Erosion and Dilation Score, a novel metric designed to assess the robustness of model explanations. Our approach applies iterative erosion and dilation processes to the heatmaps generated by various interpretability methods, using them to progressively hide and reveal the regions of an image that are important to the network, which allows for a coherent and interpretable evaluation of model decision-making processes. We conduct quantitative ablation tests using our metric on the ImageNet dataset with both VGG16 and ResNet18 models. The results show that the new measure provides a numerical and intuitive means of comparing interpretability methods and models, facilitating more informed decision-making for practitioners.
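520 8 _aThe following is a minimal sketch of the masking procedure the abstract describes, not the paper's actual definition of the Pixel Erosion and Dilation Score: model_fn is a hypothetical classifier callable returning class probabilities, and the binarization threshold and step count are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import binary_erosion, binary_dilation

    def erosion_dilation_curves(image, heatmap, model_fn, target_class,
                                threshold=0.5, steps=10):
        # Binarize the explanation heatmap into a mask of "important" pixels.
        mask = heatmap >= threshold
        eroded, dilated = mask.copy(), mask.copy()
        hide_scores, show_scores = [], []
        for _ in range(steps):
            eroded = binary_erosion(eroded)     # shrink the kept region (hide more)
            dilated = binary_dilation(dilated)  # grow the kept region (reveal more)
            # Zero out everything outside the current mask before querying the model.
            hide_scores.append(model_fn(image * eroded[..., None])[target_class])
            show_scores.append(model_fn(image * dilated[..., None])[target_class])
        # How quickly confidence degrades or recovers across steps characterizes
        # the robustness of the explanation for this image and class.
        return np.array(hide_scores), np.array(show_scores)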
534 _aCongreso Argentino de Ciencias de la Computación (30th : 2024 : La Plata, Argentina)
650 4 _aCOMPUTER VISION
653 _adeep learning
700 1 _aRonchetti, Franco
700 1 _aDal Bianco, Pedro A.
700 1 _aRíos, Gastón Gustavo
700 1 _aHasperué, Waldo
700 1 _aPuig, Domenec
700 1 _aRashwan, Hatem
856 4 0 _uhttp://sedici.unlp.edu.ar/handle/10915/176288
942 _cCP
999 _c57986
_d57986