DAO-ISEM-IORA Seminar Series – Francis de Véricourt

February 6 @ 10:00 AM - 11:30 AM
Name of Speaker
Francis de Véricourt
Schedule 

6 Feb 2026, 10am – 11.30am

 (60 min talk + 30 min Q&A)

Venue 
HSS 4-2

Link to register

(via Zoom)

Title
Beyond the Black Box: Unraveling the Role of Explainability in Human-AI Collaboration
Abstract 
Explainable Artificial Intelligence (AI) models have been proposed to mitigate overreliance and underreliance on AI, both of which reduce the effectiveness of human-AI collaborative tools. Yet empirical evidence is mixed, and the impact of explainable AI on a decision-maker's (DM's) cognitive load and fatigue is often ignored. This paper offers a theoretical perspective on these issues. We develop an analytical model that incorporates the defining features of human and machine intelligence, capturing the limited but flexible nature of human cognition together with imperfect machine recommendations. Crucially, we represent how AI-based explanations influence the DM's belief in the algorithm's predictive quality. Our results indicate that explainable AI has varying effects depending on the level of explainability it provides. While low explainability levels have no impact on decision accuracy and reliance levels, they lessen the cognitive burden of the DM. In contrast, higher explainability levels enhance accuracy by reducing overreliance, but at the expense of increased underreliance. Further, the relative impact of explainability (compared with a black-box system) is higher when the DM is more cognitively constrained, the decision task is sufficiently complex, or the stakes are lower. Importantly, higher explainability levels can escalate the DM's cognitive burden, and hence overall processing time and fatigue, precisely when explanations are most needed, i.e., when the DM is pressed for time to complete a complex task and doubts the machine's quality. Our study elicits comprehensive effects of explainability on decision outcomes and cognitive effort, enhancing our understanding of how to design effective human-AI systems in diverse decision-making environments.
About the Speaker
Francis de Véricourt is Professor of Management Science and the founding Academic Director of the Institute for Deep Tech Innovation (DEEP) at ESMT Berlin. He also holds the Joachim Faber Chair in Business and Technology, and is the co-author of Framers, a Penguin Random House book listed among the Financial Times' Best Books. He has lived and worked in France, the USA, Germany, and Singapore.
Francis was the first Associate Dean of Research and holder of the President's Chair at ESMT Berlin. He has held faculty positions at Duke University and INSEAD, where he was the Paul Dubrule Chaired Professor in Sustainable Development, and was a post-doctoral researcher at the Massachusetts Institute of Technology (MIT). His general research interest is in the area of decision science, analytics, and operations, with applications in health care, sustainability, and human-AI interaction. He is the author of numerous academic articles in prominent management, analytics, and economics journals such as Management Science, Operations Research, and American Economic Review. He has received several outstanding research awards and is currently an Area Editor at Operations Research.
Francis has been the recipient of many teaching awards for delivering classes to MBA and Executive MBA students at ESMT and INSEAD. He has extensive experience in executive education and corporate learning solutions, and is a regular speaker in academic and industry forums.