TSProto : fusing deep feature extraction with interpretable glass-box surrogate model for explainable time-series classification
explainable artificial intelligence
time-series
neurosymbolic
deep neural networks
Deep neural networks (DNNs) are highly effective at extracting features from complex data types, such as images and text, but often function as black-box models, making interpretation difficult. We propose TSProto, a model-agnostic approach that goes beyond standard XAI methods focused on feature importance by clustering important segments into conceptual prototypes: high-level, human-interpretable units. This approach not only enhances transparency but also avoids issues seen with surrogate models, such as the Rashomon effect, enabling more direct insights into DNN behavior. Our method involves two phases: (1) using feature-attribution tools (e.g., SHAP, LIME) to highlight regions the model considers important, and (2) fusing these regions with contextual information into prototypes that form meaningful concepts. These concepts are then integrated into an interpretable decision tree, making DNNs more accessible for expert analysis. We benchmark our solution on 61 publicly available datasets, where it outperforms other state-of-the-art prototype-based methods and glass-box models by an average of 10% in the F1 metric. Additionally, we demonstrate its practical applicability in a real-life anomaly-detection case. A user evaluation conducted with 17 experts recruited from leading European research teams and industrial partners also indicates a positive reception among experts in XAI and industry. Our implementation is available as an open-source Python package on GitHub and PyPI.
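The two-phase pipeline described in the abstract can be sketched roughly as follows. This is an illustrative toy example, not TSProto's actual API: the attribution scores are random stand-ins for SHAP/LIME output, and the helper functions, thresholds, and cluster count are all hypothetical choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: 40 univariate series of length 100, binary labels,
# and per-timestep attribution scores (in TSProto these would come from
# an attribution tool such as SHAP or LIME applied to the trained DNN).
X = rng.normal(size=(40, 100))
y = rng.integers(0, 2, size=40)
attributions = np.abs(rng.normal(size=X.shape))

def important_segments(series, attr, threshold, min_len=3):
    """Phase 1 (sketch): contiguous runs of timesteps whose attribution
    exceeds the threshold, kept only if long enough to be meaningful."""
    mask = attr > threshold
    segments, start = [], None
    for t, m in enumerate(mask):
        if m and start is None:
            start = t
        elif not m and start is not None:
            if t - start >= min_len:
                segments.append(series[start:t])
            start = None
    if start is not None and len(series) - start >= min_len:
        segments.append(series[start:])
    return segments

def summarize(seg):
    # Fixed-length summary of a variable-length segment (hypothetical choice).
    return [seg.mean(), seg.std(), seg.min(), seg.max(), len(seg)]

thr = np.quantile(attributions, 0.5)
seg_feats, owners = [], []
for i in range(len(X)):
    for seg in important_segments(X[i], attributions[i], thr):
        seg_feats.append(summarize(seg))
        owners.append(i)

# Phase 2 (sketch): cluster segments into conceptual prototypes, encode
# each series by which prototypes it contains, and fit a glass-box
# decision tree on that prototype-presence encoding.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(seg_feats)
presence = np.zeros((len(X), k))
for owner, label in zip(owners, km.labels_):
    presence[owner, label] = 1

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(presence, y)
print(presence.shape)  # (40, 4)
```

The decision tree then splits on binary "contains prototype j" features, which is what makes the surrogate readable: each path is a rule over the presence or absence of human-inspectable segment shapes.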
dc.affiliation | Wydział Fizyki, Astronomii i Informatyki Stosowanej : Instytut Informatyki Stosowanej
dc.contributor.author | Bobek, Szymon - 428058
dc.contributor.author | Nalepa, Grzegorz - 200414
dc.date.accessioned | 2025-06-13T14:41:41Z
dc.date.available | 2025-06-13T14:41:41Z
dc.date.createdat | 2025-06-10T08:42:49Z
dc.date.issued | 2025
dc.date.openaccess | 0
dc.description.accesstime | at the time of publication
dc.description.version | final publisher version
dc.description.volume | 124
dc.identifier.articleid | 103357
dc.identifier.doi | 10.1016/j.inffus.2025.103357
dc.identifier.issn | 1566-2535
dc.identifier.project | DRC IA
dc.identifier.uri | https://ruj.uj.edu.pl/handle/item/553325
dc.language | eng
dc.language.container | eng
dc.rights | License granted. Attribution 4.0 International
dc.rights.licence | CC-BY
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/legalcode.pl
dc.share.type | other
dc.subject.en | explainable artificial intelligence
dc.subject.en | time-series
dc.subject.en | neurosymbolic
dc.subject.en | deep neural networks
dc.subtype | Article
dc.title | TSProto : fusing deep feature extraction with interpretable glass-box surrogate model for explainable time-series classification
dc.title.journal | Information Fusion
dc.type | JournalArticle
dspace.entity.type | Publication