On the convergence rate issues of general Markov search for global minimum

2017
journal article
cris.lastimport.wos: 2024-04-10T02:29:23Z
dc.abstract.en: This paper focuses on the convergence rate problem of general Markov search for global minimum. Many existing methods are designed to overcome a very hard problem: how to efficiently localize and approximate the global minimum of a multimodal function f when the only available information consists of the f-values evaluated at the generated points. Because such methods use so little information about f, the following problem may occur: the closer the search gets to the optimum, the harder it becomes to generate a "better" state (in the sense of the cost function). This paper explores this issue on a theoretical basis. To that end, the concept of lazy convergence for a globally convergent method is introduced: a globally convergent method is called lazy if the probability of generating a better state from one step to the next goes to zero with time. Laziness causes very undesirable convergence properties. This paper shows when an optimization method has to be lazy, and the general results presented cover, in particular, the class of simulated annealing algorithms and monotone random search. Furthermore, some attention is given to accelerated random search and evolution strategies.
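As an illustrative sketch (not taken from the paper): pure random search, which samples every candidate independently of the current state, is a simple example of a lazy method in the abstract's sense — once the best-so-far value is near the minimum, the probability that a fresh sample improves on it shrinks toward zero. The hypothetical Python snippet below estimates that improvement probability empirically for f(x) = x² on [-1, 1]; the function name and parameters are this example's own, not the paper's.

```python
import random

def improvement_probability(step, trials=20000):
    """Estimate the probability that the step-th sample of pure
    random search improves on the best of the previous step - 1
    samples, for f(x) = x**2 on [-1, 1].

    For i.i.d. continuous samples this probability is exactly
    1/step (the step-th sample must be the minimum of all step
    samples), so the estimate should decay like 1/step."""
    improved = 0
    for _ in range(trials):
        # Best cost value seen over the first step - 1 samples.
        best = min(random.uniform(-1, 1) ** 2 for _ in range(step - 1))
        # One more independent sample; does it beat the record?
        candidate = random.uniform(-1, 1) ** 2
        if candidate < best:
            improved += 1
    return improved / trials

# The empirical improvement probability decays roughly like 1/step,
# i.e. it goes to zero with time -- the hallmark of a lazy method.
for step in (2, 10, 50):
    print(step, improvement_probability(step))
```

Note that this decay is a property of the search distribution, not of f: because each candidate ignores the current state, the method cannot concentrate its sampling near the optimum, which is exactly the kind of behavior the paper's general results characterize.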
dc.affiliation: Wydział Matematyki i Informatyki : Instytut Matematyki
dc.contributor.author: Tarłowski, Dawid - 107049
dc.date.accessioned: 2017-12-01T11:10:23Z
dc.date.available: 2017-12-01T11:10:23Z
dc.date.issued: 2017
dc.date.openaccess: 0
dc.description.accesstime: at the moment of publication
dc.description.number: 4
dc.description.physical: 869-888
dc.description.version: final publisher's version
dc.description.volume: 69
dc.identifier.doi: 10.1007/s10898-017-0544-7
dc.identifier.eissn: 1573-2916
dc.identifier.issn: 0925-5001
dc.identifier.project: DEC-2013/09/N/ST/04262
dc.identifier.uri: https://ruj.uj.edu.pl/xmlui/handle/item/47027
dc.language: eng
dc.language.container: eng
dc.rights: License granted. Attribution 3.0 Poland
dc.rights.licence: CC-BY
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/pl/legalcode
dc.share.type: other
dc.subject.en: global optimization
dc.subject.en: convergence rate
dc.subject.en: Markov search
dc.subject.en: simulated annealing
dc.subject.en: accelerated random search
dc.subject.en: self-adaptation
dc.subtype: Article
dc.title: On the convergence rate issues of general Markov search for global minimum
dc.title.journal: Journal of Global Optimization
dc.type: JournalArticle
dspace.entity.type: Publication