
IORA Seminar Series – Hoi-To Wai

May 30 @ 10:00 AM - 11:30 AM

Hoi-To Wai received his PhD degree in Electrical Engineering from Arizona State University (ASU) in Fall 2017, and his B.Eng. (with First Class Honors) and M.Phil. degrees in Electronic Engineering from The Chinese University of Hong Kong (CUHK) in 2010 and 2012, respectively. He is an Assistant Professor in the Department of Systems Engineering & Engineering Management at CUHK. He has held research positions at ASU, UC Davis, Telecom ParisTech, Ecole Polytechnique, and LIDS at MIT. Hoi-To’s research interests are in the broad area of signal processing, machine learning, and distributed optimization, with applications to network science. His dissertation received the 2017 Dean’s Dissertation Award from the Ira A. Fulton Schools of Engineering at ASU, and he is a recipient of a Best Student Paper Award at ICASSP 2018.

Name of Speaker: Hoi-To Wai
Schedule: 30 May 2022, 10am – 11.30am
Venue (face-to-face): I4-01-03 Seminar Room (next to the level 1 café)
Link to Register (Online): https://nus-sg.zoom.us/meeting/register/tZMsf-mpqj0tH9e76YMDTvL9xA1B20JT9uAD
Title of Talk: Stochastic Approximation Schemes with Decision Dependent Data
Abstract: Stochastic approximation (SA) is a key method that forms the backbone of many online algorithms relying on streaming data, with applications to reinforcement and statistical learning. This talk considers a setting in which the streaming data is not i.i.d., but is correlated and decision dependent. First, we analyze a general SA scheme that indirectly minimizes a smooth but possibly non-convex objective function. We consider an update procedure whose drift term depends on a decision-dependent Markov chain and whose mean field is not necessarily a gradient map, leading to asymptotic bias in the one-step updates. We analyze the expected non-asymptotic convergence rate for such a general scheme and illustrate this setting with the policy-gradient method for average-reward maximization. Second, we consider extensions of the SA scheme and its analysis. For bi-level optimization via two-timescale SA, we present a non-asymptotic complexity analysis and demonstrate an application to natural actor-critic. For performative prediction with stateful users, we illustrate that the SGD algorithm in strategic classification can be interpreted as an SA scheme with decision-dependent data, and we present recent results on its expected convergence rate towards a performatively stable solution.
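
To make the decision-dependent ingredient concrete, below is a minimal Python sketch (not from the talk) of an SA/SGD loop in which each streaming sample is drawn from a distribution that depends on the current decision, in the spirit of performative prediction. The problem dimension, shift strength eps, regularization lam, and step-size schedule are illustrative assumptions, not values from the speaker's work.

```python
import numpy as np

# Hypothetical sketch: SGD as a stochastic approximation scheme where the
# data distribution "reacts" to the deployed decision theta.

rng = np.random.default_rng(0)

d = 5                      # decision dimension (assumption)
theta_base = np.ones(d)    # base regression target (assumption)
eps = 0.3                  # strength of the decision-dependent shift (assumption)
lam = 0.1                  # ridge regularization (assumption)

def sample(theta):
    """Draw one (x, y) pair whose distribution depends on theta:
    y = x^T (theta_base + eps * theta) + noise."""
    x = rng.normal(size=d)
    y = x @ (theta_base + eps * theta) + 0.1 * rng.normal()
    return x, y

def sa_step(theta, step):
    """One SA/SGD step on the ridge-regression loss, using a sample drawn
    from the theta-dependent distribution (the decision-dependent drift)."""
    x, y = sample(theta)
    grad = (x @ theta - y) * x + lam * theta
    return theta - step * grad

theta = rng.normal(size=d)
for k in range(1, 20001):
    theta = sa_step(theta, step=1.0 / (lam * k + 100.0))  # diminishing step size

# For a mild enough shift (small eps), the iterates settle near a
# performatively stable point rather than the minimizer of any fixed loss.
print("final decision:", np.round(theta, 3))
```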
