TSProto : fusing deep feature extraction with interpretable glass-box surrogate model for explainable time-series classification

2025
journal article
dc.abstract.en
Deep neural networks (DNNs) are highly effective at extracting features from complex data types, such as images and text, but often function as black-box models, making interpretation difficult. We propose TSProto – a model-agnostic approach that goes beyond standard XAI methods focused on feature importance, clustering important segments into conceptual prototypes—high-level, human-interpretable units. This approach not only enhances transparency but also avoids issues seen with surrogate models, such as the Rashomon effect, enabling more direct insights into DNN behavior. Our method involves two phases: (1) using feature attribution tools (e.g., SHAP, LIME) to highlight regions of model importance, and (2) fusion of these regions into prototypes with contextual information to form meaningful concepts. These concepts then integrate into an interpretable decision tree, making DNNs more accessible for expert analysis. We benchmark our solution on 61 publicly available datasets, where it outperforms other state-of-the-art prototype-based methods and glassbox models by an average of 10% in the F1 metric. Additionally, we demonstrate its practical applicability in a real-life anomaly detection case. The results from the user evaluation, conducted with 17 experts recruited from leading European research teams and industrial partners, also indicate a positive reception among experts in XAI and the industry. Our implementation is available as an open-source Python package on GitHub and PyPi.
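The two-phase pipeline outlined in the abstract — attribution-guided extraction of important segments, clustering those segments into conceptual prototypes, and fitting an interpretable decision tree on the resulting concept features — can be sketched roughly as below. This is a minimal illustration, not the TSProto API: the attribution stand-in (deviation from the series mean, in place of SHAP/LIME over a DNN), the window size, and the cluster count are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy time series: class 1 contains a bump, class 0 is pure noise.
def make_series(has_bump):
    x = rng.normal(0.0, 0.1, 50)
    if has_bump:
        start = rng.integers(10, 35)
        x[start:start + 8] += 1.0
    return x

X = np.array([make_series(i % 2 == 1) for i in range(40)])
y = np.array([i % 2 for i in range(40)])

# Phase 1 (stand-in): attribution = |deviation from the series mean|.
# In the paper this would come from SHAP/LIME applied to the DNN.
attr = np.abs(X - X.mean(axis=1, keepdims=True))

# Extract a fixed-length window around each series' attribution peak.
win = 8
centers = attr.argmax(axis=1).clip(win // 2, X.shape[1] - win // 2)
segments = np.array([x[c - win // 2:c + win // 2] for x, c in zip(X, centers)])

# Phase 2: cluster the important segments into conceptual prototypes,
# describe each series by its distance to every prototype centroid,
# and fit an interpretable decision tree on those concept features.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(segments)
concept_features = km.transform(segments)  # distances to prototypes
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(concept_features, y)
print("train accuracy:", tree.score(concept_features, y))
```

The final tree operates only on "how close is this series to prototype k" features, which is what makes the surrogate readable to an expert; the actual method additionally attaches contextual information to each prototype.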
dc.affiliation
Wydział Fizyki, Astronomii i Informatyki Stosowanej : Instytut Informatyki Stosowanej
dc.contributor.author
Bobek, Szymon - 428058
dc.contributor.author
Nalepa, Grzegorz - 200414
dc.date.accessioned
2025-06-13T14:41:41Z
dc.date.available
2025-06-13T14:41:41Z
dc.date.createdat
2025-06-10T08:42:49Z
dc.date.issued
2025
dc.date.openaccess
0
dc.description.accesstime
at the time of publication
dc.description.version
final publisher version
dc.description.volume
124
dc.identifier.articleid
103357
dc.identifier.doi
10.1016/j.inffus.2025.103357
dc.identifier.issn
1566-2535
dc.identifier.project
DRC IA
dc.identifier.uri
https://ruj.uj.edu.pl/handle/item/553325
dc.language
eng
dc.language.container
eng
dc.rights
I grant the licence. Attribution 4.0 International
dc.rights.licence
CC-BY
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/legalcode.pl
dc.share.type
other
dc.subject.en
explainable artificial intelligence
dc.subject.en
time-series
dc.subject.en
neurosymbolic
dc.subject.en
deep neural networks
dc.subtype
Article
dc.title
TSProto : fusing deep feature extraction with interpretable glass-box surrogate model for explainable time-series classification
dc.title.journal
Information Fusion
dc.type
JournalArticle
dspace.entity.type
Publication