Dr. Swati Gupta is an Assistant Professor and Fouts Family Early Career Professor in the H. Milton Stewart School of Industrial & Systems Engineering, and School of Computer Science (by courtesy), at Georgia Institute of Technology. She received a Ph.D. in Operations Research from MIT in 2017 and a joint Bachelor's and Master's degree in Computer Science from IIT Delhi in 2011. Dr. Gupta's research interests are in optimization, machine learning, and algorithmic fairness. Her work spans various application domains such as revenue management, energy, and quantum computation. She received the NSF CISE Research Initiation Initiative (CRII) Award in 2019. She also held the prestigious Simons-Berkeley Research Fellowship in 2017-2018, during which she was selected as the Microsoft Research Fellow in 2018. Dr. Gupta received the Google Women in Engineering Award in India in 2011. Dr. Gupta's research is partially funded by the NSF and DARPA.
| Name of Speaker | Dr. Swati Gupta |
| Schedule | Friday, 26 March 2021, 10:00 am |
| Link | https://nus-sg.zoom.us/j/84784213927?pwd=eGs5U0Mwcm9XY1htcGlNQ3J5aEhCdz09 |
| Meeting ID | 847 8421 3927 |
| Password | 329485 |
| Title | Mitigating the Impact of Bias in Selection Algorithms |
| Abstract | The introduction of automation into the hiring process has put a spotlight on a persistent problem: discrimination in hiring on the basis of protected-class status. Left unchecked, algorithmic applicant screening can exacerbate pre-existing societal inequalities and even introduce new sources of bias; if designed with bias mitigation in mind, however, automated methods have the potential to produce fairer decisions than non-automated methods. In this work, we focus on selection algorithms used in the hiring process (e.g., resume-filtering algorithms) that are given access to a “biased evaluation metric”. That is, we assume that the method for numerically scoring applications is inaccurate in a way that adversely impacts certain demographic groups.
We analyze classical online secretary algorithms under two models of bias, or inaccuracy, in evaluations: (i) we assume that candidates belong to disjoint groups (e.g., by race, gender, nationality, or age), each with unknown true utility Z but “observed” utility Z/β for some unknown, group-dependent β; (ii) we propose a “poset” model of bias, wherein certain pairs of candidates can be declared incomparable. We show that in the biased setting, group-agnostic algorithms for the online secretary problem are suboptimal, often starving groups with β > 1 of jobs. We bring in techniques from the matroid-secretary literature and from order theory to develop group-aware algorithms that achieve certain “fair” properties while obtaining near-optimal competitive ratios for maximizing the true utility of hired candidates in a variety of adversarial and stochastic settings. Under U.S. anti-discrimination law, however, certain group-aware interventions can be construed as illegal; we will conclude the talk by partially addressing these tensions with the law and ways to argue for the legal feasibility of our proposed interventions. This talk is based on work with Jad Salem and Deven R. Desai. |
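To make bias model (i) concrete, below is a minimal simulation sketch in Python. It is not the algorithm from the talk: the function names, the group labels A and B, the β values, and the per-group-benchmark “group-aware” rule are all illustrative assumptions. It contrasts the classical group-agnostic 1/e stopping rule with a simple group-aware variant when group B's true utilities Z are observed as Z/β with β = 2.

```python
# Illustrative sketch of bias model (i): true utility Z is observed as Z/beta,
# with beta depending on the candidate's group. This is NOT the algorithm from
# the talk; all names, group labels, and beta values here are assumptions.
import math
import random

def classical_secretary(observed):
    """Group-agnostic 1/e rule: watch the first n/e candidates, then hire
    the first later candidate whose observed score beats that benchmark."""
    n = len(observed)
    cutoff = max(1, int(n / math.e))
    benchmark = max(observed[:cutoff])
    for i in range(cutoff, n):
        if observed[i] > benchmark:
            return i
    return n - 1  # forced to hire the last candidate

def group_aware_secretary(observed, groups):
    """Group-aware variant: keep one benchmark per group, so each candidate
    is compared only against earlier candidates from their own group."""
    n = len(observed)
    cutoff = max(1, int(n / math.e))
    bench = {}
    for i in range(cutoff):
        bench[groups[i]] = max(bench.get(groups[i], -math.inf), observed[i])
    for i in range(cutoff, n):
        if observed[i] > bench.get(groups[i], -math.inf):
            return i
    return n - 1

random.seed(0)
betas = {"A": 1.0, "B": 2.0}  # group B's scores are halved by the biased metric
hires = {"agnostic": {"A": 0, "B": 0}, "aware": {"A": 0, "B": 0}}
for _ in range(10_000):
    groups = [random.choice("AB") for _ in range(50)]
    true_z = [random.random() for _ in groups]
    observed = [z / betas[g] for z, g in zip(true_z, groups)]
    hires["agnostic"][groups[classical_secretary(observed)]] += 1
    hires["aware"][groups[group_aware_secretary(observed, groups)]] += 1
print(hires)  # the group-agnostic rule almost never hires from group B
```

In this toy setting the group-agnostic rule almost never hires from group B, mirroring the starvation effect described in the abstract, while per-group benchmarks restore roughly balanced hiring. Note that such an intervention uses group membership explicitly, which is exactly the legal tension the talk's conclusion addresses.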