Looking for the right paths to use XAI in the judiciary : which branches of law need inherently interpretable machine learning models and why?
| Field | Value | Language |
| --- | --- | --- |
| dc.abstract.en | In a legal context, it is often particularly important to be able to trace the reasons why a decision was made; therefore, there may be an intuition that explainability is extremely important in judicial support systems. However, the standard of explainability (understandability, transparency) that machine learning technologies used to assist judges should meet has not yet been described. Defining this standard is even more complicated because, when considering it, it is necessary to take into account not only the specifics of the legal context in general but also of individual branches of law (for example, criminal or civil law). In this paper, I consider which branches of law, due to their specificity, seem to require the use of the most algorithmically transparent - and thus inherently interpretable - methods. Juxtaposing three general levels of explainability (white boxes, black boxes with post hoc explainers, and full black boxes) with legal values, I consider the paths that the development of machine learning models supporting judicial reasoning should follow in order to be tailored to the specifics of each legal field. | |
| dc.affiliation | Wydział Prawa i Administracji : Zakład Socjologii Prawa | |
| dc.conference | The 2nd World Conference on eXplainable Artificial Intelligence | |
| dc.conference.city | Valletta | |
| dc.conference.country | Malta | |
| dc.conference.datefinish | 2024-07-19 | |
| dc.conference.datestart | 2024-07-17 | |
| dc.conference.series | World Conference on eXplainable Artificial Intelligence | |
| dc.conference.seriesshortcut | xAI | |
| dc.conference.seriesweblink | https://xaiworldconference.com/2024/ | |
| dc.conference.shortcut | xAI 2024 | |
| dc.conference.weblink | https://xaiworldconference.com/2024/ | |
| dc.contributor.author | Porębski, Andrzej - 371234 | |
| dc.date.accession | 2024-10-24 | |
| dc.date.accessioned | 2024-10-29T11:14:55Z | |
| dc.date.available | 2024-10-29T11:14:55Z | |
| dc.date.issued | 2024 | |
| dc.date.openaccess | 0 | |
| dc.description.accesstime | at the time of publication | |
| dc.description.additional | English abstract p. 129. Bibliography pp. 135-136 | |
| dc.description.conftype | international | |
| dc.description.physical | 129-136 | |
| dc.description.version | final publisher's version | |
| dc.description.volume | 3793 | |
| dc.identifier.issn | 1613-0073 | |
| dc.identifier.project | 2022/45/N/HS5/00871, Narodowe Centrum Nauki | |
| dc.identifier.uri | https://ruj.uj.edu.pl/handle/item/458267 | |
| dc.identifier.weblink | https://ceur-ws.org/Vol-3793/paper_17.pdf | |
| dc.identifier.weblink | https://ceur-ws.org/Vol-3793/ | |
| dc.language | eng | |
| dc.language.container | eng | |
| dc.rights | Licence granted. Attribution 4.0 International | |
| dc.rights.licence | CC-BY | |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/legalcode.pl | |
| dc.share.type | open journal | |
| dc.source.integrator | false | |
| dc.subject.en | right to a fair trial | |
| dc.subject.en | XAI in the judiciary | |
| dc.subject.en | inherent interpretability | |
| dc.subject.en | decision support systems | |
| dc.subject.en | AI & law | |
| dc.subtype | ConferenceProceedings | |
| dc.title | Looking for the right paths to use XAI in the judiciary : which branches of law need inherently interpretable machine learning models and why? | |
| dc.title.journal | CEUR Workshop Proceedings | |
| dc.title.volume | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium | |
| dc.type | JournalArticle | |
| dspace.entity.type | Publication | en |