Michael Choi is currently an Assistant Professor at Yale-NUS College and, by courtesy, at the Department of Statistics and Data Science at the National University of Singapore (NUS). He is also affiliated with the Institute of Operations Research and Analytics (iORA). Prior to joining Yale-NUS, he was an Assistant Professor for three years in the School of Data Science at The Chinese University of Hong Kong, Shenzhen. He received his PhD in Operations Research from Cornell University in 2017, and his undergraduate degree in Actuarial Science (First Class Honours) from The University of Hong Kong in 2013.
His research interests centre around stochastic processes and their broad applications and intersections with other fields such as data science, with a particular focus on Markov chain theory and stochastic algorithms driven by Markov chains. He has published extensively in leading journals in his area, including Transactions of the American Mathematical Society, Stochastic Processes and their Applications, Combinatorics, Probability and Computing, and Electronic Communications in Probability.
Name of Speaker | Dr Michael Choi |
Schedule | Friday 24 September, 10am |
Link to Register | https://nus-sg.zoom.us/meeting/register/tZ0ofu6hqzkiGtFQQExJuuRXSZnKC2bA9CO_ |
Title | On the convergence of an improved and adaptive kinetic simulated annealing |
Abstract | Inspired by the work of [Fang et al., An improved annealing method and its large-time behaviour. Stochastic Process. Appl. (1997), Volume 71, Issue 1, pages 55-74], who propose an improved simulated annealing algorithm based on a variant of overdamped Langevin diffusion with a state-dependent diffusion coefficient, we cast this idea in the kinetic setting and develop an improved kinetic simulated annealing (IKSA) method for minimizing a target function U. To analyze its convergence, we utilize the framework recently introduced by [Monmarché, Hypocoercivity in metastable settings and kinetic simulated annealing. Probab. Theory Related Fields (2018), Volume 172, pages 1215-1248] for kinetic simulated annealing (KSA). The core idea of IKSA rests on introducing a parameter c > inf U, which de facto modifies the optimization landscape and clips the critical height of IKSA at a maximum of c – inf U. Consequently, IKSA enjoys improved convergence with a faster logarithmic cooling schedule than KSA. To tune the parameter c, we propose an adaptive method, which we call IAKSA, that utilizes the running minimum generated by the algorithm on the fly, thus avoiding the need to adjust c manually for better performance. We present positive numerical results on standard global optimization benchmark functions that verify the improved convergence of IAKSA over other Langevin-based annealing methods. |
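For readers unfamiliar with the setting of the talk, the following is a toy sketch (not the speaker's IKSA/IAKSA algorithm) of plain kinetic, i.e. underdamped, Langevin annealing on a one-dimensional double well, using the logarithmic cooling schedule characteristic of KSA and tracking the running minimum of U along the trajectory, which is the on-the-fly quantity that IAKSA feeds into its adaptive parameter c. The potential, step size, friction, and schedule constants are all illustrative assumptions.

```python
import numpy as np

def kinetic_annealing(steps=20000, dt=0.01, gamma=1.0, seed=0):
    """Toy kinetic (underdamped) Langevin annealing on U(x) = (x^2 - 1)^2.

    Semi-implicit Euler discretisation of
        dX_t = V_t dt
        dV_t = -grad U(X_t) dt - gamma V_t dt + sqrt(2 gamma T_t) dB_t
    with logarithmic cooling T_k = T0 / log(k + 2).
    """
    rng = np.random.default_rng(seed)
    U = lambda x: (x**2 - 1.0) ** 2            # double well; inf U = 0 at x = +/-1
    gradU = lambda x: 4.0 * x * (x**2 - 1.0)
    T0 = 1.0
    x, v = 3.0, 0.0                            # start away from both wells
    running_min = U(x)                         # on-the-fly estimate of inf U
    for k in range(steps):
        T = T0 / np.log(k + 2.0)               # logarithmic cooling schedule
        noise = np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
        v += (-gradU(x) - gamma * v) * dt + noise
        x += v * dt
        running_min = min(running_min, U(x))   # what IAKSA uses to update c
    return x, running_min
```

With the fixed seed the chain settles near one of the wells and the running minimum approaches 0, the true infimum of U; the point of IAKSA, as the abstract describes, is that this running minimum can stand in for the unknown inf U when tuning c, so no manual adjustment is needed.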