dc.abstract.en | Continual Learning aims to bring machine learning into a more realistic scenario, where tasks are learned sequentially and the i.i.d. assumption is not preserved. Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks. To reduce this performance gap, we investigate whether biologically inspired Hebbian learning is useful for tackling continual challenges. In particular, we highlight a realistic and often overlooked unsupervised setting, where the learner has to build representations without any supervision. By combining sparse neural networks with the Hebbian learning principle, we build a simple yet effective alternative (HebbCL) to typical neural network models trained via gradient descent. Due to Hebbian learning, the network has easily interpretable weights, which might be essential in critical applications such as security or healthcare. We demonstrate the efficacy of HebbCL in an unsupervised learning setting applied to the MNIST and Omniglot datasets. We also adapt the algorithm to the supervised scenario and obtain promising results in class-incremental learning. | pl |
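The abstract refers to the Hebbian learning principle ("units that fire together wire together") as the basis of HebbCL. As a minimal illustrative sketch only, not the paper's actual algorithm, a plain Hebbian weight update with row normalization (to keep weights bounded, since the unnormalized rule diverges) might look like this; the function name and learning rate are assumptions for illustration:

```python
import numpy as np

def hebbian_update(W, x, lr=0.01):
    """One Hebbian step: strengthen weights between co-active units.

    W: (n_out, n_in) weight matrix; x: (n_in,) input vector.
    """
    y = W @ x                       # post-synaptic activations
    dW = lr * np.outer(y, x)        # Hebb rule: delta_w_ij ~ y_i * x_j
    W = W + dW
    # Normalize each row so weight magnitudes stay bounded.
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
W = hebbian_update(W, x)
```

Because the update uses only local pre- and post-synaptic activity (no backpropagated error signal), each learned weight row can be inspected directly, which is the interpretability property the abstract highlights.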
dc.affiliation | Wydział Matematyki i Informatyki : Instytut Informatyki i Matematyki Komputerowej | pl |
dc.affiliation | Szkoła Doktorska Nauk Ścisłych i Przyrodniczych | pl |
dc.conference | 56th Annual Hawaii International Conference on System Sciences | pl |
dc.conference.city | Maui | |
dc.conference.country | Hawaii, United States | |
dc.conference.datefinish | 2023-01-06 | |
dc.conference.datestart | 2023-01-03 | |
dc.conference.shortcut | HICSS | |
dc.contributor.author | Morawiecki, Pawel | pl |
dc.contributor.author | Krutsylo, Andrii | pl |
dc.contributor.author | Wołczyk, Maciej - 247731 | pl |
dc.contributor.author | Śmieja, Marek - 135996 | pl |
dc.contributor.editor | Bui, Tung X. | pl |
dc.date.accession | 2023-01-23 | pl |
dc.date.accessioned | 2023-02-20T15:54:42Z | |
dc.date.available | 2023-02-20T15:54:42Z | |
dc.date.issued | 2023 | pl |
dc.date.openaccess | 1 | |
dc.description.accesstime | after publication | |
dc.description.conftype | international | pl |
dc.description.physical | 1259-1268 | pl |
dc.description.version | final publisher version | |
dc.identifier.bookweblink | https://hdl.handle.net/10125/102785 | pl |
dc.identifier.eissn | 2572-6862 | |
dc.identifier.isbn | 978-0-9981331-6-4 | pl |
dc.identifier.uri | https://ruj.uj.edu.pl/xmlui/handle/item/308035 | |
dc.identifier.weblink | https://hdl.handle.net/10125/102785 | pl |
dc.language | eng | pl |
dc.language.container | eng | pl |
dc.pubinfo | Honolulu : HI | pl |
dc.rights | I grant a license. Attribution-NonCommercial-NoDerivatives 4.0 International | * |
dc.rights.licence | CC-BY-NC-ND | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.pl | * |
dc.share.type | open repository | |
dc.subject.en | interpretable neural network | pl |
dc.subject.en | continual learning | pl |
dc.subject.en | unsupervised learning | pl |
dc.subject.en | hebbian learning | pl |
dc.subtype | ConferenceProceedings | pl |
dc.title | Hebbian continual representation learning | pl |
dc.title.container | Proceedings of the 56th Annual Hawaii International Conference on System Sciences, HICSS 2023, Hyatt Regency Maui, Hawaii, USA, 3-6 January 2023, Maui, Hawaii | pl |
dc.type | BookSection | pl |
dspace.entity.type | Publication |