BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IORA - Institute of Operations Research and Analytics - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://iora.nus.edu.sg
X-WR-CALDESC:Events for IORA - Institute of Operations Research and Analytics
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Asia/Singapore
BEGIN:STANDARD
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
TZNAME:+08
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Asia/Singapore:20260410T100000
DTEND;TZID=Asia/Singapore:20260410T113000
DTSTAMP:20260427T102158Z
CREATED:20260401T024941Z
LAST-MODIFIED:20260401T024941Z
UID:27574-1775815200-1775820600@iora.nus.edu.sg
SUMMARY:DAO-ISEM-IORA Seminar Series: Park Sinchaisri
DESCRIPTION:Name of Speaker\n\n\nPark Sinchaisri \n\n\n\n\nSchedule \n\n\n10 Apr 2026\, 10am – 11.30am \n (60 min talk + 30 min Q&A)\n\n\n\nVenue \n\n\nBIZ1 0204\n\n\n\nLink to register \n(via Zoom)\n\nhttps://nus-sg.zoom.us/meeting/register/oo0ElW4xSIu9BcdsAyKQ2A\n\n\n\n\nTitle\n\n\nAlgorithmic Advice\, Human Compliance\, and Learning\n\n\n\n\nAbstract \n\n\nProblem definition: Organizations increasingly deploy algorithmic tools to support complex operational decisions\, raising a practical design question: how should these tools be built when designers care not only about immediate performance\, but also about preserving and building human skill that remains valuable when advice is unavailable\, imperfect\, or requires genuine oversight? We study how the precision of algorithmic advice shapes this trade-off. Methodology/results: We develop a stylized model of advice-taking and learning. The model characterizes a reward-learning frontier: precise\, action-level advice is easier to implement and improves payoffs while available through higher compliance\, whereas broad\, strategic advice requires interpretation\, induces greater exploration\, and generates knowledge that is portable\, even when decision environments differ. We test the model’s predictions in two online experiments in an electric-vehicle routing and charging task\, representing typical characteristics of sequential decision tasks. Consistent with the theory\, precise numerical advice delivers the strongest gains during the advice phase\, whereas broader advice can yield more robust performance after advice is removed\, specifically if the new environment differs substantially\, but not completely. 
We use inverse reinforcement learning to recover interpretable latent objective components from action traces\, distinguishing transient compliance from persistent internalization. Managerial implications: Our results provide design guidance for advice systems that balance short-run operational efficiency with the development of long-run human capability. They also help validate inverse reinforcement learning as an effective tool for estimating human behaviors in complex sequential tasks.\n\n\n\n\nAbout the Speaker\n\n\nPark Sinchaisri is an Assistant Professor of Operations and IT Management at the Haas School of Business\, University of California\, Berkeley. His research draws on operations management\, economics\, machine learning\, and behavioral science to study human decision-making in complex environments\, design human-AI systems that improve decision-making\, and develop strategies for managing the future of work. His work has been published in Management Science and Manufacturing & Service Operations Management\, and has also appeared in leading human-computer interaction venues including CSCW. He received his PhD in Operations\, Information and Decisions and an AM in Statistics from the Wharton School of the University of Pennsylvania\, an SM in Computational Science and Engineering from MIT\, and an ScB in Computer Engineering and Applied Mathematics-Economics from Brown University. Originally from Bangkok\, Thailand\, he hopes his research can help address urban challenges and improve outcomes for marginalized workers.
URL:https://iora.nus.edu.sg/events/dao-isem-iora-seminar-series-park-sinchaisri/
CATEGORIES:IORA Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Singapore:20260417T100000
DTEND;TZID=Asia/Singapore:20260417T113000
DTSTAMP:20260427T102158Z
CREATED:20260421T132906Z
LAST-MODIFIED:20260421T132906Z
UID:27587-1776420000-1776425400@iora.nus.edu.sg
SUMMARY:DAO-ISEM-IORA Seminar Series: Kwok-Hao Lee
DESCRIPTION:Name of Speaker\n\n\nKwok-Hao Lee\n\n\n\n\nSchedule \n\n\n17 Apr 2026\, 10am – 11.30am \n (60 min talk + 30 min Q&A)\n\n\n\nVenue \n\n\nHSS 3-1\n\n\n\nLink to register \n(via Zoom)\n\nhttps://nus-sg.zoom.us/meeting/register/-DSUpgQWTeqmXSIHbsTGyA\n\n\n\n\nTitle\n\n\nTwo-Sided Markets Shaped By Platform-Guided Search\n\n\n\n\nAbstract \n\n\nWe investigate concerns that vertically integrated platforms like Amazon steer demand towards their own offers via algorithmic prominence\, potentially harming consumers. On Amazon\, for each product\, the Buybox prominence algorithm selects one seller to feature\, influencing which offers consumers consider. Using novel Amazon sales and Buybox (prominence) data\, we estimate a structural model capturing the effects of such algorithmic prominence on consumer choices\, seller pricing\, and entry. We find that the platform can indeed steer demand as 95% of consumers consider only the Buybox offer. The Buybox is highly price-elastic (−21)\, but skews towards Amazon’s own offers\, which are featured as frequently as observably similar offers priced 5% cheaper. Still\, as consumers prefer these offers\, this skew does not amount to self-preferencing in the sense of harming consumers: consumer surplus is roughly maximized at the estimated Amazon Buybox advantage\, which balances higher prices against showing consumers their preferred offers.\n\n\n\n\nAbout the Speaker\n\n\nLee Kwok Hao is an industrial organisation economist working at the intersection of digital markets and the smart city. He uses administrative and platform data to study how algorithms and policy rules govern search\, matching\, pricing\, and allocation\, with a focus on transportation systems and public housing. 
As an Assistant Professor (Presidential Young Professor) at the Department of Strategy & Policy at the National University of Singapore (NUS) Business School\, Kwok Hao has been a recipient of the Social Science and Humanities Research Fellowship under the Social Science Research Council of Singapore. Previously\, Kwok Hao was a Presidential Fellow at the NUS Business School\, during which time he completed a postdoctoral stint at the Cowles Foundation at Yale University. He obtained his PhD from Princeton University after formative years at the University of Chicago and Washington University in St. Louis.
URL:https://iora.nus.edu.sg/events/dao-isem-iora-seminar-series-kwok-hao-lee/
CATEGORIES:IORA Seminar Series
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Asia/Singapore:20260424T100000
DTEND;TZID=Asia/Singapore:20260424T113000
DTSTAMP:20260427T102158Z
CREATED:20260421T133016Z
LAST-MODIFIED:20260421T133016Z
UID:27589-1777024800-1777030200@iora.nus.edu.sg
SUMMARY:DAO-ISEM-IORA Seminar Series: Yanwei Jia
DESCRIPTION:Name of Speaker\n\n\nYanwei Jia\n\n\n\n\nSchedule \n\n\n24 Apr 2026\, 10am – 11.30am \n (60 min talk + 30 min Q&A)\n\n\n\nVenue \n\n\nBIZ1-0302\n\n\n\nLink to register \n(via Zoom)\n\nhttps://nus-sg.zoom.us/meeting/register/QpOphmoURVCrbzjftZht4g\n\n\n\n\nTitle\n\n\nWhen to Quit a Venture: Normative Theory and Structural Identification of Decoupled Belief and Decision\n\n\n\n\nAbstract \n\n\nUnderstanding how agents learn and make decisions under uncertainty is a fundamental question in many fields\, with applications including real options\, R&D\, and entrepreneurial ventures. The conventional approach formulates this learning process as an optimal stopping problem within a Bayesian framework\, assuming agents possess the cognitive sophistication to continuously update their beliefs based on statistical principles\, thereby rigidly locking their decisions to these updated beliefs and forcing a strict\, deterministic threshold rule. This paper develops a continuous-time reinforcement learning framework for sequential experimentation that formally separates beliefs from actions. By decoupling the evaluation and policy processes\, we provide a unifying framework that yields both normative benchmarks and flexible positive dynamics. Normatively\, using the workhorse Gaussian bandit model\, we prove that by properly tuning learning rates\, the framework achieves a logarithmic regret bound\, matching the efficiency of Bayesian rationality. Positively\, the decoupled policy generates distinct and testable predictions\, such as experience-driven\, path-dependent quitting dynamics\, even when the belief is consistent with its Bayesian counterpart. Crucially\, we prove the structural identifiability of these hidden learning dynamics. 
By utilizing the method of simulated moments\, we demonstrate how this framework can be structurally estimated directly from censored observational field data and extended to general jump-diffusion bandits.\n\n\n\n\nAbout the Speaker\n\n\nYanwei Jia is an assistant professor in the Department of Systems Engineering and Engineering Management at The Chinese University of Hong Kong. He obtained his Ph.D. degree from the National University of Singapore in 2020\, and his B.Sc. from Tsinghua University in 2016. Prior to joining CUHK in 2023\, he was an associate research scientist and adjunct assistant professor in the Department of Industrial Engineering and Operations Research at Columbia University. His research interests fall broadly within financial decision-making problems\, and he uses structural approaches to study decision-making and information-aggregation mechanisms.
URL:https://iora.nus.edu.sg/events/dao-isem-iora-seminar-series-yanwei-jia/
CATEGORIES:IORA Seminar Series
END:VEVENT
END:VCALENDAR