Fidelity-Focused Variant of a Rule Extraction Algorithm in Artificial Neural Networks
Keywords:
Fidelity, Artificial Neural Networks, Explainable Artificial Intelligence, XAI

Abstract
Neural networks can achieve high accuracy in classification tasks, but their lack of explainability is a clear drawback and is the reason they are often called "black boxes". In this paper, we present a modification of the RxREN algorithm that addresses the explainability of neural networks by generating accurate and easily interpretable rules. The goal of this modification is to understand the decision process of the neural network, and to that end the relationship between the level of abstraction of the rules and their fidelity is analyzed. An algorithm with three configurations was applied to two different problems (Iris, WBC), studying how the abstraction level of the rules affects their fidelity while searching for accurate rules. In conclusion, this study aims to significantly improve the fidelity of the rules generated by the algorithm, allowing users to better understand the classification process, and to highlight the importance of considering the level of abstraction when extracting interpretable and faithful rules.
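As a rough illustration of the fidelity measure discussed in the abstract, the sketch below trains a small network on Iris and fits a shallow decision tree to the network's own predictions, using the tree depth as a stand-in for the abstraction level. This is not the authors' RxREN variant; the surrogate tree, the chosen depths, and all hyperparameters are assumptions made purely for illustration.

```python
# Illustrative sketch (not the RxREN variant from the paper): measuring the
# fidelity of a simple rule surrogate against a trained neural network on Iris.
# A shallow decision tree stands in for the extracted rule set; max_depth is
# used here as a proxy for the abstraction level.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose behaviour the rules should reproduce.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

for depth in (1, 2, 3):  # lower depth = higher abstraction, fewer/simpler rules
    # The surrogate rules are fit to the network's labels, not the ground truth.
    rules = DecisionTreeClassifier(max_depth=depth, random_state=0)
    rules.fit(X_train, net.predict(X_train))

    fidelity = accuracy_score(net.predict(X_test), rules.predict(X_test))  # agreement with the network
    accuracy = accuracy_score(y_test, rules.predict(X_test))               # agreement with the true labels
    print(f"depth={depth}  fidelity={fidelity:.2f}  accuracy={accuracy:.2f}")
```

The key distinction the sketch makes is that fidelity is measured against the network's predictions, while accuracy is measured against the ground-truth labels; varying the depth shows how a more abstract rule set can trade fidelity for simplicity.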
License
Copyright (c) 2023 Martín Moschettoni, Milagros Aylén Jacinto, Gabriela Pérez, Claudia Pons

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Under these terms, the material may be shared (copied and redistributed in any medium or format) and adapted (remixed, transformed, and built upon to create another work), provided that a) the authorship and the original source of publication (journal and URL of the work) are cited, b) it is not used for commercial purposes, and c) the same license terms are maintained.