Looking for the right paths to use XAI in the judiciary : which branches of law need inherently interpretable machine learning models and why?

2024
journal article
conference proceedings
dc.abstract.en
In a legal context, it is often particularly important to be able to trace the reasons why a decision was made; therefore, there may be an intuition that explainability is extremely important in judicial support systems. However, the standard of explainability (understandability, transparency) that machine learning technologies used to assist judges should meet has not yet been described. Defining this standard is even more complicated because, when considering it, it is necessary to take into account not only the specifics of the legal context in general but also of individual branches of law (for example, criminal or civil law). In this paper, I consider which branches of law, due to their specificity, seem to require the use of the most algorithmically transparent - and thus inherently interpretable - methods. Juxtaposing three general levels of explainability (white boxes, black boxes with post hoc explainers, and full black boxes) with legal values, I consider the paths that the development of machine learning models supporting judicial reasoning should follow in order to be tailored to the specifics of each legal field.
dc.affiliation
Wydział Prawa i Administracji : Zakład Socjologii Prawa
dc.conference
The 2nd World Conference on eXplainable Artificial Intelligence
dc.conference.city
Valletta
dc.conference.country
Malta
dc.conference.datefinish
2024-07-19
dc.conference.datestart
2024-07-17
dc.conference.series
World Conference on eXplainable Artificial Intelligence
dc.conference.seriesshortcut
xAI
dc.conference.seriesweblink
https://xaiworldconference.com/2024/
dc.conference.shortcut
xAI 2024
dc.conference.weblink
https://xaiworldconference.com/2024/
dc.contributor.author
Porębski, Andrzej - 371234
dc.date.accession
2024-10-24
dc.date.accessioned
2024-10-29T11:14:55Z
dc.date.available
2024-10-29T11:14:55Z
dc.date.issued
2024
dc.date.openaccess
0
dc.description.accesstime
at the time of publication
dc.description.additional
English abstract p. 129. Bibliography pp. 135-136.
dc.description.conftype
international
dc.description.physical
129-136
dc.description.version
final publisher's version
dc.description.volume
3793
dc.identifier.issn
1613-0073
dc.identifier.project
2022/45/N/HS5/00871, Narodowe Centrum Nauki
dc.identifier.uri
https://ruj.uj.edu.pl/handle/item/458267
dc.identifier.weblink
https://ceur-ws.org/Vol-3793/paper_17.pdf
dc.identifier.weblink
https://ceur-ws.org/Vol-3793/
dc.language
eng
dc.language.container
eng
dc.rights
License granted. Attribution 4.0 International
dc.rights.licence
CC-BY
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/legalcode.pl
dc.share.type
open journal
dc.source.integrator
false
dc.subject.en
right to a fair trial
dc.subject.en
XAI in the judiciary
dc.subject.en
inherent interpretability
dc.subject.en
decision support systems
dc.subject.en
AI & law
dc.subtype
ConferenceProceedings
dc.title
Looking for the right paths to use XAI in the judiciary : which branches of law need inherently interpretable machine learning models and why?
dc.title.journal
CEUR Workshop Proceedings
dc.title.volume
Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium
dc.type
JournalArticle
dspace.entity.type
Publication