Dimitris Bertsimas

Associate Dean of Business Analytics, Boeing Professor of Operations Research, and Faculty Director of the Master of Business Analytics at MIT

Dimitris Bertsimas is currently the Boeing Professor of Operations Research and the Associate Dean of Business Analytics at the MIT Sloan School of Management. He received his SM and PhD in Applied Mathematics and Operations Research from MIT in 1987 and 1988, respectively, and has been on the MIT faculty since 1988. His research interests include optimization, machine learning, and their applications in health care. He has co-authored more than 250 scientific papers and five graduate-level textbooks. He is the Editor-in-Chief of the INFORMS Journal on Optimization and a former Department Editor in Optimization for Management Science and in Financial Engineering for Operations Research. He has supervised 79 doctoral students and is currently supervising 25 more. He has been a member of the National Academy of Engineering since 2005, is an INFORMS Fellow, and has received numerous research and teaching awards, including the John von Neumann Theory Prize for fundamental, sustained contributions to the theory of operations research and the management sciences and the INFORMS President's Award recognizing important contributions to the welfare of society, both in 2019.

The DELPHI model for predicting the spread of COVID-19 and its impact on Johnson & Johnson's development of a single-dose vaccine

The COVID-19 pandemic has created unprecedented challenges worldwide. Strained healthcare providers make difficult decisions on patient triage, treatment and care management on a daily basis. Policy makers have imposed social distancing measures to slow the disease, at a steep economic price. We design analytical tools to support these decisions and combat the pandemic. Specifically, we developed the DELPHI epidemiological model to project the pandemic's spread under a variety of government policies. The model is used on the website https://www.covidanalytics.io/ to make daily predictions for more than 120 countries and all US states. The DELPHI model has guided Johnson & Johnson Pharmaceuticals in selecting trial locations for its COVID-19 vaccine in order to speed up the trial process and accelerate the end of the pandemic. The model enabled Johnson & Johnson to run the only Phase III trial to date that was directly tested against important variants in Brazil and South Africa. The global applicability of the DELPHI model has also helped Johnson & Johnson produce the most location-diverse COVID-19 trial to date. The model is continually included in the central CDC ensemble prediction model and has been utilized by many organizations worldwide to plan for the pandemic, including by Hartford Hospital to plan ICU capacity and by the Federal Reserve Bank of Philadelphia to assist in recommending future monetary stimulus. The innovations in the DELPHI model are also generally applicable to modeling future epidemics.

Prof. Dimitris begins his talk by sharing how he started researching COVID-19. Nearly 14 months ago, when lockdowns began, the world knew that COVID-19 was serious but did not yet suspect how severe it would become. Motivated by a desire to help society, Dimitris started working with several of his students to apply their expertise in analytics to battling COVID-19.

This talk is about their efforts. The DELPHI model (a reminder of his Greek origin: the name comes from the Oracle of Delphi, and the Delphi method is often used in financial applications) is used here to predict the spread of COVID-19 and to guide Johnson & Johnson on where to prioritise vaccine trials and on the allocation of vaccines to mass distribution centres. They have collaborated with the CDC and are helping the Government of India, and they have also developed clinical scores that are being used in South America, Europe and the United States.

DELPHI has been used to develop predictions for resource optimization, quarantine policies, and related decisions. There are several challenges in forecasting for the COVID-19 virus:

  • Data sparsity: Not enough data for a relatively new virus
  • Human interventions and the need to understand their effect
  • Several parameters which make the overall model quite sensitive

The first data came from publications that appeared early on, driven by the exposure in China. As the virus progressed and moved to Europe and the United States, more was learnt and the volume of available data kept increasing. The model they developed is a compartmental epidemiological model, which looks at the evolution of the virus in a population.

Epidemiology 101: a typical compartmental model has four parts, (S) Susceptible, (E) Exposed, (I) Infected and (R) Recovered. It is a system of four non-linear ordinary differential equations. The rate of change of the susceptible population decreases with α, the exposure rate, and the infection rate γ is a critical parameter.
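
To make the structure concrete, here is a minimal sketch of a standard four-compartment SEIR system integrated with SciPy; the parameter names and values (beta for transmission, sigma for incubation, gamma for recovery, a population of one million) are illustrative textbook choices, not the DELPHI parameters from the talk.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Right-hand side of a standard SEIR model (illustrative, not DELPHI itself)."""
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N              # susceptibles become exposed
    dE = beta * S * I / N - sigma * E   # exposed become infectious
    dI = sigma * E - gamma * I          # infectious recover
    dR = gamma * I
    return [dS, dE, dI, dR]

# Hypothetical initial conditions: 10 infections in a population of one million
y0 = [999_990, 0, 10, 0]
t = np.linspace(0, 180, 181)  # days
S, E, I, R = odeint(seir, y0, t, args=(0.3, 1 / 5.2, 1 / 10)).T
print(f"Peak infections: {I.max():,.0f} on day {int(t[I.argmax()])}")
```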

The way to work with these models is to provide some initial conditions and then make predictions. A key difficulty for COVID-19 is the under-detection of cases: even though infected cases are measured, the counts are typically lower than reality, so what is reported is not exactly the truth. The other difficulty is the state of the virus in individuals, i.e. whether they have completely recovered or not.

To account for under-detection, the four states of the SEIR model are augmented to 11 states, in which the undetected population has different characteristics. The dynamics are even more complicated. The infection rate is not a constant: it is affected by measures taken by the government. This is why a time-dependent parameter is introduced that modulates the infection rate, i.e. γ becomes γ(t). Effects like social distancing, stay-at-home orders and mask-wearing change the infection rate.

To model government interventions, they make use of a parameterised arctan function (a sketch follows the four phases below), which captures the following:

Phase 1: Initially the infection rate decreases only slowly, while awareness is low.

Phase 2: Sharp decline in infection rate as policies are taken to control the spread and the society becomes more aware.

Phase 3: The policy effect gets saturated.

Phase 4: Resurgence in cases due to the relaxation of social and governmental actions.
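
The sketch below shows one way such an arctan-shaped modulation of the infection rate could be parameterised so that it traces the four phases above; the functional form, parameter names (t0 for the inflection time, k for the speed of the policy response) and the resurgence bump are illustrative assumptions, not the exact DELPHI specification.

```python
import numpy as np

def infection_rate(t, gamma0, t0, k, resurgence_amp=0.0, t_relax=None, width=14.0):
    """Illustrative time-varying infection rate gamma(t).

    Phases 1-3: an arctan curve that starts flat, drops sharply around t0 as
    policies take effect, then saturates at a low level.
    Phase 4: an optional bump after t_relax mimicking a resurgence when
    measures are relaxed.
    """
    policy_effect = 0.5 + (1 / np.pi) * np.arctan(-(t - t0) / k)  # ~1 before t0, ~0 after
    gamma = gamma0 * policy_effect
    if t_relax is not None:
        gamma = gamma + resurgence_amp * gamma0 * np.exp(-((t - t_relax) / width) ** 2)
    return gamma

days = np.arange(0, 200)
rates = infection_rate(days, gamma0=0.3, t0=40, k=5, resurgence_amp=0.4, t_relax=150)
```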

The parameters of the model are estimated from the data, and their website updates the parameters and results for all countries worldwide on a daily basis. Currently the infected numbers are decreasing due to the vaccination drives. One can test the model on the website to understand the effect of various policies, such as restrictions on social gatherings and the closing of schools and businesses.

Audience: The curve that Dimitris has obtained is highly dependent on the culture of the region and how policies are implemented. Dimitris: Yes, the model is highly flexible. Asian countries like Singapore and Korea have had a very significant reduction in the infection rate. They wanted a flexible model that could capture different behaviours and cultures.

Audience: How do you verify the model?

Dimitris: Making predictions and comparing them with real data is the only way to verify the model. This model is one of the first four models adopted by the CDC, and it has made predictions worldwide that were quite accurate.

Around May, Dimitris was approached by Johnson & Johnson. They wanted to know where to prioritise clinical trials, and they wanted an early prediction, nearly three months ahead, because it takes time to set up infrastructure and medical booths on the ground.

A very important aspect to which Dimitris attributes the model's success is public feedback. Since the model was freely available to everyone, they received effective feedback and could improve their estimates.

Audience: (Jokingly) Does it take into account the differing policies in blue vs red states?

Dimitris: Red states in the US barely observed any social restrictions or government rules, but because the method is fairly adaptive and flexible, it performed well across America.

Audience: The performance of the model has improved over time; does that have to do with vaccination?

Dimitris: The model performs better because of public feedback and its exposure to the world.

Audience:  How do you take into account the rate of vaccination?

Dimitris: It is a parameter and it is inferred from the data. We do not hard-code it in the analytics. We let the data and model guide us.

Audience: Are all models based on differential equations?

Dimitris: Yes, it is a discretized multi-period model. Eventually it is a non-linear optimization model.

Different states did different things. For instance, Southern states like Florida weren't closing their economies, while others like New York had complete stay-at-home orders. A variety of policies is explored in the model.

Dimitris then moves on to the importance of acting fast. According to the model, if a lockdown is ordered a week earlier, the infection rate decreases by 80 percent. This is not true for all states, but for the states with the highest number of deaths the model bears this out. Asian countries fared well because they implemented lockdowns early on.

Applications of the model: The most important application was in vaccine development. The idea is to give part of the population a placebo and part the vaccine. For this, you want enough incidence in an area so that the group that took the vaccine fares measurably better than the group that took the placebo; it will not work if you go to places with low incidence. For instance, the Ebola vaccine trials did not succeed because they went to places where Ebola hadn't materialised.

The model recommended that the vaccine be tested in Brazil, South Africa and Peru, and this has been a success. Johnson & Johnson was the only company aware of the South African and Brazilian variants early on, because the incidence numbers from the model were very high there, so they had early evidence that the vaccine was reasonably addressing the disease. Dimitris calls this the biggest achievement of the model.

Audience:  Did you control for the testing rate? Testing rate is high in some countries.

Dimitris: The model captures it indirectly, because undetected cases are already accounted for. The infection rate varies with time and behaviour; the time-varying aspect of the infection rate is the most important component of the model.

Audience: If you are starting with the SEIR model, did you account for endogeneity in the model instead of just considering simple interactions between the S and I compartments?

Dimitris: One can do all sorts of sophisticated analysis, but time was the main constraint here. The purpose of the work was for the model to give appropriate signals within a short time frame.

Another application of the model: Hartford HealthCare, the largest health care system in Connecticut, used the model to determine the number of ICU beds required. Many deaths have occurred because of overburdened hospital capacity and unpreparedness; Hartford looked at the data and planned successfully.

Dimitris mentions yet another application of the model: they advised a large bank in South America on employee policies, i.e. the trade-off between having enough employees at work to serve customers and sticking to government restrictions.

A final application is locating mass vaccination sites. When vaccines were first approved, the logistics of their distribution was far from ideal. The model addresses the question of where to locate mass vaccination sites: location parameters were added to the model, and vaccine effectiveness and the vaccine budget were provided as inputs. An optimization model with discrete variables was built, with the dynamics modeled by DELPHI. The goal was to minimise the number of deaths and exposed people in each location, and also the distance between vaccination sites and population centres. Replicating this for India would be challenging due to the country's large population and vastness.
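
A stylised facility-location sketch of this kind of optimization is shown below, written with PuLP; the sites, regions, harm values, distances, weights and budget are all hypothetical, and the actual formulation in the talk couples the location decisions with DELPHI's epidemic dynamics rather than using fixed "harm" numbers.

```python
import pulp

# Hypothetical data: candidate sites, population regions, projected harm
# (deaths plus exposures) if a region is left unserved, and travel distances.
sites = ["s1", "s2", "s3"]
regions = ["r1", "r2", "r3", "r4"]
harm = {"r1": 120, "r2": 300, "r3": 80, "r4": 210}
dist = {("r1", "s1"): 5, ("r1", "s2"): 12, ("r1", "s3"): 20,
        ("r2", "s1"): 9, ("r2", "s2"): 4,  ("r2", "s3"): 15,
        ("r3", "s1"): 18, ("r3", "s2"): 7, ("r3", "s3"): 3,
        ("r4", "s1"): 6, ("r4", "s2"): 14, ("r4", "s3"): 8}
budget = 2          # number of sites that can be opened
dist_weight = 1.0   # trade-off between harm and travel distance

prob = pulp.LpProblem("vaccine_site_location", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign", [(r, s) for r in regions for s in sites], cat="Binary")

# Objective: harm in unserved regions plus distance travelled by served regions.
prob += (pulp.lpSum(harm[r] * (1 - pulp.lpSum(assign[(r, s)] for s in sites)) for r in regions)
         + dist_weight * pulp.lpSum(dist[(r, s)] * assign[(r, s)] for r in regions for s in sites))

for r in regions:
    prob += pulp.lpSum(assign[(r, s)] for s in sites) <= 1    # each region served at most once
    for s in sites:
        prob += assign[(r, s)] <= open_site[s]                # only assign to open sites
prob += pulp.lpSum(open_site[s] for s in sites) <= budget     # site budget

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("open sites:", [s for s in sites if open_site[s].value() == 1])
```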

While the DELPHI model analyses the humanitarian aspects of the disease, they have also developed a model that captures its economic aspects, not so much for COVID-19 in particular as for other pandemics. The purpose is to understand the economic toll, since lockdowns can have severe economic implications. It is a far more detailed analysis, based on machine learning rather than differential equations.

Dimitris shares a lesson he has learnt: "If you have the time, you can always build a sophisticated model, but the purpose was to make an approximation that works in real life. It was a combined effort and interaction of talented young researchers and the entire team."

Audience:  Is there any way to estimate the reliability of the model?

Dimitris: The incidence predictions were the best evidence. The model could understand the new variants, and its ability has been proven by real-world experience. The mean squared error relative to reality was small; that is our biggest success. This is a very different type of research, different from just applying methods to synthetic datasets.

Audience:  How do you incorporate robustness in the model?

Dimitris: We do it via the estimation of the parameters. We know the historical data up to today in all parts of the world, and we optimize in such a way that the parameters' deviation over time is not very significant; we do not allow large variations. This is not a traditional way of measuring robustness, but a practical way to control for it.
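
A minimal sketch of this idea is shown below: a time-varying growth rate is fitted to observed case counts with a quadratic penalty on day-to-day changes, so large swings in the parameters are discouraged. The loss function, penalty weight and data are illustrative assumptions rather than DELPHI's actual estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical daily case counts
cases = np.array([100, 130, 170, 210, 260, 300, 330, 350, 360, 365], dtype=float)
T = len(cases)
lam = 50.0  # penalty weight discouraging large day-to-day parameter changes

def loss(r):
    # r[t] is the log-growth rate between day t and day t+1
    log_pred = np.log(cases[0]) + np.concatenate(([0.0], np.cumsum(r)))
    fit = np.sum((np.log(cases) - log_pred) ** 2)   # data-fit term
    smooth = lam * np.sum(np.diff(r) ** 2)          # robustness: limit variation over time
    return fit + smooth

result = minimize(loss, np.zeros(T - 1), method="L-BFGS-B")
print(np.round(result.x, 3))  # smoothed daily growth rates
```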

Audience: Any advice for us?

Dimitris: Don’t be fearful. The best way to build models is when you have real life situations and real data.

Audience: How was your experience in managing this entire team on an exciting but risky topic?

Dimitris: It is mainly driven by a desire to contribute. Everybody wanted to help.

Dimitris concludes with a message for PhD students: real-life situations and datasets are the best way to do applied OR research, even though a thesis is usually academic.

Max Z.J. Shen

Vice-President and Pro-Vice-Chancellor (Research)
Chair Professor in Logistics and Supply Chain Management at The University of Hong Kong

Professor Shen obtained his PhD from Northwestern University, USA in 2000. He started his academic career as Assistant Professor at the University of Florida in the same year, and joined the University of California, Berkeley in 2004, where he rose through the academic ranks to become Chancellor’s Professor and Chair of the Department of Industrial Engineering and Operations Research and Professor of the Department of Civil and Environmental Engineering. He was also a Centre Director at the Tsinghua-Berkeley Institute in Shenzhen and an Honorary Professor at Tsinghua University, China. Professor Shen joined The University of Hong Kong in 2021. Internationally recognized as a top scholar in his field, Professor Shen is a Fellow of the Institute for Operations Research and the Management Sciences (INFORMS), the President-Elect of the Production and Operations Management Society (POMS), and a past President of the Society of Locational Analysis of INFORMS. His primary research interests are in the general area of integrated supply chain design and management, and practical mechanism design.

Analytics for Wildfire Management

With climate change, the already serious problem of forest fires is clearly becoming increasingly troublesome. This is happening in vast forest areas as well as in the transitional zone between urban and rural areas, with grave consequences to the population living in or near forests. Analytic tools have been developed to determine what resources need to be in place, such as airplanes, helicopters, crews, and equipment, to help suppress fire. Once a fire has started, simulation models of fire spread have been quite successful in predicting the direction of fire to support decisions on how to deploy such resources. Less emphasis has been given to preventive Landscape Design or Fuel Management, which leads to decisions on how to manage forests to minimize the impact of fires once they start. These decisions include harvesting, prescribed burnings, and others. This talk will present the different tools we have developed in this problem area. Our main effort is on the integration of prediction analysis of fire ignition and spread, and the decisions on landscape design. The techniques used include stochastic simulation, derivative-free optimization, machine learning, optimization, heuristics, and deep learning. We have applied these techniques in a preliminary way in Chile, Spain, and Canada with the aim of translating research efforts into practical applications.

Prof. Max Shen starts by saying that he would normally be working on supply chains, but he recently got into research on wildfires because he resides in California and has had many personal encounters with them. On doing a literature survey, he found that not many people in OR are working on this problem. His talk focuses on the prediction of wildfires, on modelling their propagation (for instance, its speed), on a decision support system to help prevent wildfires, and on how to minimise fire risk.

He first gives an introduction to wildfires by noting that, on Earth, something is always burning; the cause can be natural, like lightning, or man-made. Wildfires are nothing new. Although fires can be really bad for the environment and lead to large amounts of pollution, they are also essential for restoring the ecosystem by clearing away dead and dying growth. Plants have learnt to live with fire, or have co-evolved with it.

Wildfires form a feedback loop. With rising temperatures, forests are drier, which is sufficient to ignite and spread wildfires, and these in turn raise temperatures further; the loop needs to be broken somewhere. Wildfires are also highly disruptive to daily life: even in a first-world country, people worry about losing power or internet access because of them.

Shen asks the following questions. How do we mitigate the social damage caused by wildfires? Should we focus on suppression, or wait for a fire to happen and then minimise the damage? An active research area called forest fuel management is usually the way forward: given that a fire breaks out, this strategy ensures that the damage will not be large.

How can ML and analytics be used to study and predict wildfire occurrences? There are many traditional ML models that include numerous features, and there are deep learning models as well, which are not easy to explain to decision makers. We can also build decision-making models. OR is changing so much that we need to know how to work with data: in OR we usually work with deterministic models and sometimes come up with stochastic frameworks, but most of the time that is not enough. We need a new modeling framework that relies on domain knowledge and uses features that optimization/OM people can exploit to our advantage.

The risk of wildfires can be characterised based on landscape attributes: features related to human activity, the distance of a neighbourhood to the fire, the concentration and density of houses, the size of roads, traffic, and so on. There are many features in a typical model, so they reduce the set via a PI (predictor importance) method, a measure used to discard features that are not important. Their model shows that human activity is the most dominant factor. It also shows that areas within 2 km of roads have a very high risk, but the risk becomes small beyond 2 km; thus, when constructing highways, it is essential to have a buffer zone of at least 2 km. The risk of fire is not that high in native forest, but there are several man-made plantations. Max showed several partial dependence plots to illustrate the interactions between the most relevant variables. He also spoke about the wildland-urban interface and how important it is to limit its expansion in order to mitigate fire risk.
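
A small sketch of this workflow is given below, using permutation importance as a stand-in for the predictor-importance measure and a partial dependence plot for one feature; the feature names and the synthetic data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 2000
# Hypothetical landscape features
X = np.column_stack([
    rng.uniform(0, 10, n),   # distance_to_road (km)
    rng.uniform(0, 1, n),    # human_activity index
    rng.uniform(0, 500, n),  # housing_density (houses per km^2)
])
names = ["distance_to_road", "human_activity", "housing_density"]
# Synthetic fire-occurrence labels: human activity and road proximity raise the risk
p = 1 / (1 + np.exp(-(2 * X[:, 1] + 1.5 * (X[:, 0] < 2) - 1.5)))
y = rng.binomial(1, p)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Predictor importance: features with negligible importance can be dropped
pi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(names, pi.importances_mean):
    print(f"{name}: {score:.3f}")

# Partial dependence of predicted risk on distance to road
PartialDependenceDisplay.from_estimator(model, X, features=[0], feature_names=names)
```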

Audience: Have you performed any causal inference study?

Max: Yes, you will get to see it in the upcoming slides.

Audience: Do you look at the scale of the damage?

Max: Yes, we consider mainly huge fires.

How do we understand the role of the landscape in detecting fire? Satellite images are extracted and a deep learning model is used to learn from spatial patterns. Usually there are too many features in an image, so they focus only on some important ones, such as dense urban areas. They generate an attention map that concentrates on these relevant areas and on the interactions between different topologies and their connectivity. Their model is highly interpretable and can be applied to similar computer vision studies.

Simulating wildfires for decision making

Traditional simulators do not capture the stochasticity of fires. They have developed a simulator that can run various algorithms and evaluate wildfire risk under conditions of uncertainty, such as weather. It has been designed for decision makers.

Wildfire Decision support system

The idea behind it is to protect the forest if a fire breaks out. They model the forest as a network whose nodes are forest areas. The edges have several attributes, one being how quickly the fire can reach different points; this is not the shortest distance but the shortest fire travel time. They use this framework to quantify the importance of a node, called the downstream protection value, i.e. how much value is gained by protecting node i.

The objective here is to find a subset of connected nodes that maximises the total value, i.e. to protect the most important and valuable areas of the forest. A connected subnetwork is required because it is crucial that the firefighting team stay connected; they cannot jump from one fire site to another. This makes the problem harder to solve, and it can be seen as a variation of the maximum-weight connected subgraph problem.

To solve this problem, they use a hybrid approach: a greedy heuristic warm start followed by a MIP. The computation was very fast, and they were able to solve all instances within one hour.
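
A minimal sketch of a greedy warm start of this kind is shown below: a connected set of nodes is grown by repeatedly adding the adjacent node with the highest protection value. The graph, node values and budget are hypothetical, and the MIP refinement step used in the actual approach is omitted.

```python
import networkx as nx

def greedy_connected_protection(G, values, seed, budget):
    """Greedily grow a connected node set of size `budget`, maximizing total value."""
    chosen = {seed}
    while len(chosen) < budget:
        # candidates adjacent to the current connected set
        frontier = {nbr for n in chosen for nbr in G.neighbors(n)} - chosen
        if not frontier:
            break
        chosen.add(max(frontier, key=lambda n: values[n]))
    return chosen

# Hypothetical forest: a 4x4 grid of cells with toy downstream protection values
G = nx.grid_2d_graph(4, 4)
values = {node: (node[0] + 1) * (node[1] + 1) for node in G.nodes}
start = max(G.nodes, key=lambda n: values[n])
protected = greedy_connected_protection(G, values, seed=start, budget=5)
print(protected, "total value:", sum(values[n] for n in protected))
```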

After a fire breaks out, how do you quickly contain the fire?

Max considers wildfire suppression as a cooperative stochastic game in which multiple agents (here, firefighters) must be coordinated efficiently. The game is as follows: fire spreads according to wind, land topography, and so on, while firefighters (the agents) draw containment lines to minimise the spread. Each agent can perform six actions: move up, move down, move left, move right, stay, and harvest.

The big question is how to assign credit to the action taken by an agent, i.e. how to evaluate an agent's influence. A centralised approach may not be feasible because utility functions would need to be evaluated very quickly for many agents. Max's implementation of SubCOMA (counterfactual multi-agent) proves to be the best method in terms of convergence speed and stability.

Max concludes by saying that we should get our hands dirty and work on real datasets instead of only theoretical problems, use models for social good, and save lives.

Milind Tambe

Gordon McKay Professor of Computer Science and Director of the Center for Research in Computation and Society (CRCS) at Harvard University

Milind Tambe is Gordon McKay Professor of Computer Science and Director of the Center for Research in Computation and Society at Harvard University; concurrently, he is also Director of "AI for Social Good" at Google Research India. He is a recipient of the IJCAI John McCarthy Award, the ACM/SIGAI Autonomous Agents Research Award from AAMAS, the AAAI Robert S. Engelmore Memorial Lecture Award, the INFORMS Wagner Prize, the Rist Prize of the Military Operations Research Society, the Columbus Fellowship Foundation Homeland Security Award, over 25 best papers or honorable mentions at conferences such as AAMAS, AAAI and IJCAI, and meritorious commendations from agencies such as the US Coast Guard and the Los Angeles Airport. Prof. Tambe is a fellow of AAAI and ACM.

AI for social impact: Results and lessons from deployments for public health and conservation

With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. I focus on the problems of public health and conservation, and address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. I will present results from work around the globe in using AI for HIV prevention, Maternal and Child care interventions, TB prevention and COVID modeling, as well as for wildlife conservation. Achieving social impact in these domains often requires methodological advances. To that end, I will highlight key research advances in multiagent reasoning and learning, in particular in computational game theory, restless bandits and influence maximization in social networks. In pushing this research agenda, our ultimate goal is to facilitate local communities and non-profits to directly benefit from advances in AI tools and techniques.

Prof. Milind and his team have focused on AI and multiagent systems research for social impact, mainly in the areas of public health, conservation, and public safety and security. The central problem they try to solve is how to optimize the use of limited intervention resources.

He first gave a summary of lessons they have learned:

(1) achieving social impact and AI innovation should go hand-in-hand;

(2) partnerships with communities, non-profit organizations, and government organizations are crucial in AI for social impact;

(3) Rather than just focusing on improving algorithms, they look at the entire data-to-deployment pipeline, including data collection, predictive models, prescriptive algorithms and field tests.

 

To elaborate on these lessons, he first introduced their research on preventing HIV among youth experiencing homelessness. There are 6,000 youth sleeping on the streets of Los Angeles, and the rate of HIV in this population is 10 times that of the general population. Homeless shelters therefore try to spread information about HIV among these youth, but they cannot inform everyone, so they recruit peer leaders and educate them about HIV prevention, expecting the peer leaders to spread the information via face-to-face interactions in their social network. This is the problem of influence maximization in social networks. There are several challenges in solving it with the traditional cascade model. The first is uncertainty in the propagation probability: they assume it follows a known distribution with parameters in a certain range and solve the problem with robust optimization. The second is that there are not enough resources and space to recruit all peer leaders at the same time, so they have to recruit a few at a time, and uncertainty can also occur in attendance; they use a POMDP to address this issue. The last challenge is exploring the structure of the network, since obtaining the entire network is not scalable; they use a sampling algorithm for better scalability. A pilot test and a large-scale experiment show that their algorithm is far more efficient than the traditional approach.
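
A minimal sketch of greedy influence maximization under an independent cascade model is shown below; the synthetic network, propagation probability and budget are hypothetical, and the robust-optimization, POMDP and sampling components described above are omitted.

```python
import random
import networkx as nx

def simulate_cascade(G, seeds, p, rng):
    """One independent-cascade simulation; returns the set of influenced nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

def greedy_seeds(G, k, p=0.1, trials=100, seed=0):
    """Greedily pick k peer leaders that maximize the estimated spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in G.nodes:
            if cand in chosen:
                continue
            spread = sum(len(simulate_cascade(G, chosen + [cand], p, rng))
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best, best_spread = cand, spread
        chosen.append(best)
    return chosen

G = nx.erdos_renyi_graph(100, 0.05, seed=1)  # hypothetical youth social network
print(greedy_seeds(G, k=4))
```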

 

The second example is about health program adherence. In India, a woman dies in childbirth every 15 minutes, and 4 out of 10 children are too thin or too short. Many NGOs aim to address these challenges. Prof. Milind works with an NGO that makes a weekly 3-minute call to a new or expecting mother to improve the health of the mother and the baby. One big problem is that a significant fraction of women drop out of the program over time. They aim to predict, in advance, which beneficiaries are going to drop out, so that staff can intervene and persuade those women to stay in the program. Hence, they built a predictive model with high accuracy and then ran a controlled experiment; the results show that their model can increase the retention rate significantly. Another problem is that the predictive model labels many women as high-risk, while only some of them can be called in a given week, so the natural question is which women to call. A similar problem also appears in tuberculosis prevention. They use restless bandits to solve this problem: each arm in their model is a POMDP with a binary latent state corresponding to adhering or not adhering. To compute the model efficiently, they exploit the collapsing structure of the beliefs ("collapsing exploitation"). Numerical results show that their algorithm is much faster than the baseline method without loss of intervention benefit. In terms of new directions, Prof. Milind is looking at extending restless bandits to multiple action types, risk-aware restless bandits, and learning policies via index Q-learning.
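
The sketch below illustrates the setup on a toy example: each beneficiary is a two-state arm whose probability of adhering evolves under action-dependent transition matrices, and a simple myopic rule picks the k arms where a call helps most. The transition probabilities are hypothetical, and this myopic index is only a stand-in for the Whittle-index and collapsing-belief machinery used in the actual work.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, k = 20, 3  # beneficiaries and weekly call budget (hypothetical)

# P[s, s'] = probability of moving from state s to s' (0 = not adhering, 1 = adhering)
P_passive = np.array([[0.9, 0.1],
                      [0.3, 0.7]])
P_active = np.array([[0.5, 0.5],    # a call makes adherence more likely
                     [0.1, 0.9]])

belief = rng.uniform(0.2, 0.9, n_arms)  # current belief that each beneficiary is adhering

def next_belief(b, P):
    """Expected probability of adhering next week, given the current belief and dynamics."""
    return b * P[1, 1] + (1 - b) * P[0, 1]

# Myopic index: how much a call improves expected adherence next week
gain = next_belief(belief, P_active) - next_belief(belief, P_passive)
called = np.argsort(gain)[-k:]
print("call beneficiaries:", sorted(called.tolist()))
```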

 

The last example is an agent-based simulation model for COVID-19. A range of tests has entered the market with varying sensitivity and cost; when testing an entire student population, which one should you use? In the first stage, in simulation, they assume that both the less sensitive and the more sensitive test can be run on the entire student population with results returned instantaneously; in that case the highly sensitive test (i.e., the PCR test) does better. However, PCR results are not available instantaneously and may take up to 24 hours to come back. They find that the delay in isolating infected individuals negates the usefulness of the PCR test: the number of infections is much higher with a PCR test and a one-day delay. If, because of cost, the PCR test can only be run every 5 days instead of every 3 days, the more sensitive test again loses all its advantage. Hence, the conclusion is that rapid turnaround time and frequency of testing are more critical than sensitivity for COVID-19 surveillance.
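
A crude discrete-time simulation in the spirit of this comparison is sketched below: a frequent, less sensitive test with instant results versus a more sensitive test with a one-day reporting delay (and, optionally, a lower frequency). The population size, transmission rate, sensitivities, delays and frequencies are illustrative assumptions, not the parameters of the agent-based model in the talk.

```python
import numpy as np

def simulate(sensitivity, test_every, delay, days=120, n=10_000, beta=0.25,
             infectious_days=7, seed=0):
    """Crude SIR-style simulation with periodic testing and delayed isolation."""
    rng = np.random.default_rng(seed)
    S, I, total_inf = n - 20, 20, 20
    pending = []  # (day results return, positives to isolate)
    for day in range(days):
        new_inf = rng.binomial(S, min(1.0, beta * I / n))  # infections from free infectious people
        S, I, total_inf = S - new_inf, I + new_inf, total_inf + new_inf
        I -= rng.binomial(I, 1 / infectious_days)          # recoveries
        if day % test_every == 0:                           # testing day
            pending.append((day + delay, rng.binomial(I, sensitivity)))
        for _, pos in [p for p in pending if p[0] == day]:  # results back today -> isolate
            I -= min(pos, I)
        pending = [p for p in pending if p[0] != day]
    return total_inf

print("rapid, every 3 days, no delay:    ", simulate(sensitivity=0.80, test_every=3, delay=0))
print("PCR-like, every 3 days, 1-day lag:", simulate(sensitivity=0.98, test_every=3, delay=1))
print("PCR-like, every 5 days, 1-day lag:", simulate(sensitivity=0.98, test_every=5, delay=1))
```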

 

Subsequently, Prof. Milind talks about his research on conservation: protecting conservation areas (Green Security Games, IJCAI 2015). In Queen Elizabeth National Park there are many poachers, who put out snares or traps to poach animals; the goal is to help rangers find snares more effectively and save animals. One of the lessons in this work is to step out of the lab and into the field, in order to understand what the problem and the difficulties really are. The park is divided into 1 km x 1 km squares, some of which are more important to poachers, and the problem is treated as a game between rangers and poachers. They build a learning adversary-response model to predict the probability of a snare per 1 km grid square when there is uncertainty in observations. They developed the PAWS algorithm (AAMAS 2017) and ran a first pilot in the field, applied PAWS to predict high- vs low-risk areas in 3 national parks, 24 areas each, over 6 months (ECML PKDD 2017, ICDE 2020), and deployed PAWS in the Srepok wildlife sanctuary in Cambodia (ICDE 2020). With the help of PAWS, the number of snares rangers found per month increased from 101 in 2018 to 521 in 2019, and further to 1,000 snares found in March 2021. PAWS has now gone global with the SMART platform, protecting wildlife in 800 national parks around the globe.

 

There are 3 new directions in research on conservation.

 

Direction #1: integrating real-time "spot" information (IAAI 2018). Drones can be used to inform rangers, but also for deceptive signaling to indicate that a ranger is arriving. Hence, they designed methods that exploit the defender's informational advantage, since the defender knows both the pure and the mixed strategy (AAAI 2018, AAAI 2020, AAMAS 2021).

 

Direction #2: data-scarce parks. In data-rich parks, predictive models can be built to plan patrols, while in data-scarce parks, patrols are conducted to detect illegal activity and collect data that improves the predictive model, which means there is a trade-off between exploitation and exploration. They proposed the Lizard algorithm, which exploits decomposability, smoothness and monotonicity (AAAI 2021).

 

Direction #3: tailoring ML predictions to the ultimate objective.

 

In conclusion, the future of AI for social impact (AI4SG or AI4SI) has the following features:

1. Achieving social impact and AI innovation go hand in hand.

2. Empower non-profits to use AI tools; avoid being gatekeepers to AI4SI tech.

3. Data to deployment: not just improving algorithms, new AI4SI evaluation.

4. Important to step out of the lab and into the field.

5. Embrace interdisciplinary research: social work, conservation.

6. Lack of data is the norm, a feature rather than a bug, and part of the project strategy.

Marc Dragon, CPIM

Managing Director, Reefknot Investments

Marc is the Managing Director of Reefknot Investments, a Global Venture Capital firm, focusing on Supply Chain and Logistics Technology start-ups.  Reefknot, backed by Temasek and Kuehne + Nagel, strives to help drive Supply Chain and Logistics industry transformation by identifying and actively supporting high growth technology start-ups seeking to transform the industry.

Throughout his 20+ year career, Marc has been actively involved in the Supply Chain/Logistics and Technology space, Global and Asian Industry Think Tanks, as well as being on Boards and in Advisory functions in multiple Global and Asian growth companies.

Marc has also previously served as the CEO for an Asia Pacific Supply Chain Technology firm, in Executive/Directorship capacities in MNCs such as IBM and Deloitte Consulting, as well as founded/co-founded several successful start-ups and a Global Supply Chain/Logistics Technology Think Tank.

He is Certified in Production & Inventory Management (CPIM) from The Association for Operations Management (APICS), holds an IT Infrastructure Library (ITIL) Foundation Certification, and graduated with an Honours Degree in Electrical & Electronic Engineering.

Analytics and Data Science as applied in the Supply Chain and Logistics industries

How industry applies different data science and AI methods in the areas of:

  • Supply Chain Visibility
  • Trade Facilitation
  • eCommerce Fulfilment

Nicole Immorlica

Senior Principal Researcher at Microsoft Research

Nicole’s research lies broadly within the field of economics and computation. Using tools and modeling concepts from both theoretical computer science and economics, Nicole hopes to explain, predict, and shape behavioral patterns in various online and offline systems, markets, and games. Her areas of specialty include social networks and mechanism design.

Nicole received her Ph.D. from MIT in Cambridge, MA in 2005 and then completed three years of postdocs at both Microsoft Research in Redmond, WA and CWI in Amsterdam, Netherlands before accepting a job as an assistant professor at Northwestern University in Chicago, IL in 2008. She joined Microsoft Research in 2012.

Market Design for Carbon Control

Atmospheric levels of carbon dioxide are at record highs, contributing to a rapidly warming climate. Many governments and private corporations are committed to taking actions to reduce these levels. In this talk, we discuss market design approaches to facilitate these goals. We first discuss contracts for afforestation in developing nations. By planting trees on farmland, farmers can sequester carbon from the atmosphere which can then be sold to firms hoping to become carbon neutral. However, these trees are costly to grow to maturity, so contract design is required to make sure the incentives are aligned. We then discuss auctions for carbon licenses. This approach is taken by governments wishing to limit carbon output of their economies. Firms buy licenses to pollute in uniform price auctions. These auctions have perverse incentives for bidders especially in settings with few large firms. We study the welfare of their equilibria and simple design improvements that provably bound the welfare loss.

Nicole starts by saying that it is no secret that atmospheric carbon has risen drastically since the 1900s; currently, it has risen by over 9000 metric tons. There is a lot of evidence showing that this is indeed contributing to global warming: most years since the 80s have brought higher temperatures, and it is only getting worse. Microsoft aims to be carbon negative by 2030, which is what drove her to work in this area. She stresses that this is quite an ambitious goal, since there aren't enough carbon offsets available for Microsoft to buy.

Her research goal is to think of new market designs to make the above goal achievable. For this, she considers three pillars for reducing carbon:

  1. Emissions reduction: Plants invest in technologies to guarantee a cap on how much carbon they release.
  2. Remove carbon to biosphere: This can be done by planting trees; additionally, soil absorbs carbon from the atmosphere. The natural environment helps with this goal.
  3. Remove carbon to geosphere: It is not effective today but has a lot of promise. The idea is to insert carbon deep into the bedrock so that it stays there without entering the atmosphere.

She first discusses market approaches for the first two pillars.

Emission reduction

Regional organisations (e.g. RGGI, EU ETS) sell licenses to firms and cap the number of licenses available in a given period. These licenses are acquired by emitters to cover their emissions and are valid over a certain period. The question here is how to allocate these licenses.

Allocation is done by a uniform price auction. The goal is to reduce emissions in a way that not only maximises efficiency but also ensures that the permitted emissions bring a lot of value to society, i.e. there should be side benefits that encourage companies to engage in clean tech.

The biggest challenge in this area is whether the licenses go to the firms that can produce the most value, i.e. the efficiency of the initial allocation. Another challenge is the pre-market: before a license is auctioned off, firms decide which technology to use to produce goods, and the auction design has an effect on technology investment. Thirdly, there is the post-market: when the government issues licenses, firms may trade them amongst themselves, and again the design of the initial allocation affects such trades. Lastly, there is the issue of market cadence, in other words the number of licenses and how long they last.

Nicole moves on to the auction design for the initial allocation and illustrates her ideas graphically. On the x-axis is the number of licenses x and on the y-axis the production value of the firms. A single firm's valuation V(x) increases to a maximum and then flattens out; it is concave, reflecting decreasing marginal value. She then illustrates social externalities graphically: Q(x), the pollution externality, is plotted against x, the number of licenses. Initially, when little carbon is released, the externality is not drastically bad, but then it increases sharply; this is a convex curve, reflecting increasing marginal cost.

There is a trade-off between higher production value and higher externality to society. The objective is to find the number of licenses that maximises the difference between the production value and the pollution externality.
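
In symbols, with V and Q as just defined, the planner's problem is

max over x ≥ 0 of V(x) − Q(x), with V concave and Q convex,

and at an interior optimum x* the first-order condition V′(x*) = Q′(x*) says that licenses should be issued up to the point where the marginal production value equals the marginal pollution cost.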

When there are multiple firms, the question is how to allocate among them. The first license goes to the firm with the highest marginal value; once that firm's marginal value falls below another firm's, subsequent licenses go to the firm with the next-highest marginal value.

To complicate matters, there is economic uncertainty in this auction. Private information exists: firms know their values for any number of licenses, but society and governmental agencies do not. There is information asymmetry: firms know their production technology and how much pollution they would emit, and governmental agencies need to elicit these values from firms to decide how to allocate the licenses.

The auction format is modified as follows. They run a uniform price auction parameterised by a fixed number C of licenses for sale. Each emitter submits decreasing marginal bids; the top C marginal bids win, and winners pay the lowest winning bid per unit. The auction is subject to a price floor (even if the lowest winning bid is below this price, the price per unit is raised to the reserve price, i.e. the price floor) and a price ceiling (if the lowest winning bid is higher than the ceiling, firms can buy additional licenses at the ceiling price). The welfare is the difference between the firms' valuations and the social cost of pollution. For the remainder of the talk she ignores price ceilings; those interested can refer to her research paper.
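
Below is a minimal sketch of the allocation rule just described: the top C marginal bids win and all winners pay the lowest winning bid per unit, never below the price floor (here bids under the floor are simply not served, which is one reading of the price-floor rule; with the default floor of zero this does not matter). Price ceilings are ignored, as in the talk, and the bid figures match the example that follows.

```python
def uniform_price_auction(bids, C, price_floor=0.0):
    """bids: dict firm -> list of decreasing marginal bids.
    Returns (licenses per firm, price per license)."""
    all_bids = sorted(((b, firm) for firm, bs in bids.items() for b in bs), reverse=True)
    winners = [(b, firm) for b, firm in all_bids[:C] if b >= price_floor]
    if not winners:
        return {}, price_floor
    price = max(price_floor, min(b for b, _ in winners))  # lowest winning bid, floored
    allocation = {}
    for _, firm in winners:
        allocation[firm] = allocation.get(firm, 0) + 1
    return allocation, price

bids = {"blue": [45, 35], "red": [32, 10]}
print(uniform_price_auction(bids, C=3))                                # blue 2, red 1, price 32
print(uniform_price_auction({"blue": [45, 0], "red": [32, 10]}, C=3))  # demand reduction: price drops to 10
```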

She illustrates this with a small example. There are two firms, blue and red, and 3 licenses are to be sold. The blue firm bids ($45, $35) and the red firm bids ($32, $10). The first two licenses go to blue and the third goes to red, with each license priced at $32, the lowest winning bid.

Utility of blue firm = (45 + 35) − (32 ∗ 2) = 16

If the blue firm pretends to want just one license and lowers its second bid, say to 0, then the blue firm receives only one license and red gets the remaining two. The price per license is now $10, red's second bid being the lowest winning bid. In this case,

Utility of blue firm = (45 + 0) − (10 ∗ 1) = 35.

This shows that uniform price auctions are not incentive-compatible: the lowest winning bid sets the price, so the blue firm has a profitable deviation, namely reducing its demand. Although the blue firm benefits, this harms society; once pollution externalities are taken into account, the welfare can even become negative. Thus demand-reduction strategies are harmful.

To circumvent this problem, they find perturbations of the auction that make it robust to deviations: transform the parameters and set price floors appropriately so that the auction is safe against manipulation.

Main result: Suppose W is the expected welfare of the welfare-optimal uniform price auction under truth-telling. Then there is a uniform-price auction whose expected welfare at any equilibrium is within a constant factor α of W. Here α plays the role of a price of anarchy; note that, without costs, the optimal welfare may exceed W.

The price floor is a linear curve with slope p: any license that is sold must have a marginal value of at least p. Given the firms' value curves, the licenses valued below p will not be sold, because no firm will pay for them.

The price of anarchy in this setting is a constant when there is no cost of production and no price floor. With costs and with demand reduction by firms, we can have negative welfare at equilibrium. The driving idea is therefore to set a safe price floor, defined in terms of C, the cap on licenses. This makes the auction robust to manipulation relative to the welfare generated under truth-telling: manipulations will not decrease welfare by more than a constant factor. The reason comes from a rotation (transformation) argument.

Nicole then addresses the following question: does the non-strategic welfare of the uniform auction with the safe price give a good approximation to the uniform auction with any price? The answer is yes; she states the result as

max { welfare of the best safe-price auction, optimal welfare from any single agent } ≥ welfare of the best uniform-price auction.

Remove carbon to biosphere

There are several ways of achieving this goal, but her interest is in achieving it via afforestation. She shows a picture of a coffee farm in Uganda; the idea is that such farms benefit a lot from planting indigenous trees. The trees provide natural habitats and promote biodiversity, they act as natural pest barriers, and the shade they produce enhances farm productivity and the soil.

The barriers to achieving this in developing nations include a huge knowledge gap and cash constraints. Farmers have liquidity constraints and poor access to financial investments, and indigenous planting is slow to generate benefits and risky because of natural disasters.

The question to be addressed is how to help NGOs design contracts that maximise the long-term survival rate of trees while minimising the total payment to farmers. Even assuming perfect monitoring, the challenge is that the tree-growing effort is not observed (a moral hazard); additionally, the cost of effort is heterogeneous, as some farmers might be better at it than others.

They model this via a Markov chain whose states are the maturity levels of the tree, s = 0, 1, …, M. The transition is a function of an unobserved effort α ∈ {0, 1} and a risk q, the probability of the tree surviving. The state moves from s to s + 1 when α = 1 and the tree survives, and drops to 0 when α = 0, or when α = 1 and the tree dies. Once a tree survives to maturity, no further effort is needed from the agent. The cost of effort is c ~ F, where F is i.i.d. on [c_l, c_h]. The agents (farmers) discount at rate δ, but the principal does not discount. The principal is the NGO, which chooses the payment schedule: it observes each tree and pays an amount based on the tree's age, which induces different amounts of farmer effort. The stationary distribution is endogenous to the prices set by the principal, and this determines the carbon captured.

The objective is to minimise the expected payment Σ_i α_i p_i, where p_i is the payment for reaching state i and α_i is the probability of that state, subject to the payments being non-negative and to the effort in each state maximising discounted utility, i.e. agents always exert effort (the incentive constraints).

The incentive constraint is an interesting feature of the model. The discounted cost as a function of the strategy is concave and increasing, since exerting effort means incurring the cost over more periods of time. Once the tree reaches maturity, the payments offset some of that cost and the farmer no longer has to exert effort to keep the tree alive.

She then looks at the impact of different payment strategies, comparing the discounted value of payments to a farmer who always exerts effort. One strategy is to pay only when the tree reaches maturity, but this requires high payments in the final stage. Another option is to give constant payments in each state. The optimal solution shown in her research is to frontload payments as much as possible to incentivise the high-cost types and then coast, curtailing late-stage payments to prevent low-cost types from restarting trees to accrue cash benefits.

Remove carbon to geosphere

This approach currently has no incentives behind it, but it is a long-term solution. The biosphere approach is high-risk and a shorter-term solution, so how does one set up a market that trades the two off nicely?

Audience: Do these licenses work in a global market setting? What about the risks when this is implemented in Europe and the U.S.? Nicole: Typically firms push their production to China, which has lower carbon controls, so it is very important to have international agreements; but most licenses are issued by localised agencies.

Audience: In the uniform auction, it seems that there is an incentive to inflate societal costs, do we assume societal costs are also truthful?

Nicole: We assume that the government is running this auction and knows the cost to society, and that firms can observe it, so it is common knowledge; but again there are politics involved. Audience: Have you considered that the land could be used for something else?

Nicole: You can incorporate that into the model by shifting the cost down. The tricky thing to incorporate is the value farmers get from selling the firewood when they cut the trees down; it is messy to incorporate value and cost in one transaction.

Audience: Can you comment on the market design? Most of these mechanisms are one-shot, but in situations like financial derivatives that are temporal, will it work there?

Nicole: Production is often seasonal and licenses are also seasonal, not really sure how to deal with that at the moment.

Audience: What about the negative impact of incentives design? For instance, when Microsoft implements these programs, how do you ensure that the incentives developed are properly aligned?

Nicole: Microsoft puts a lot of resources into investigating accreditation agencies and won't buy just any credits; they only buy credits from agencies they have a lot of faith in. The firm believes in the geosphere solution, but personally she feels that the externalities of the solution are beneficial, so we should think harder about how to overcome these incentive problems.

Audience: What about a setting where we don’t know who the players are, for instance, bitcoin miners, who consume lots of electricity?

Nicole: It will be very hard to enforce these licenses; you need identities for this. You could perhaps try to incentivize via proof of stake and the like, but beyond that, she doesn't really know.

Audience: Have you considered any other type of auctions?

Nicole: This particular paper focuses on uniform price auctions, but definitely you could try other optimal auctions.

Audience: How do you translate this into the field?

Nicole: There are lots of things to know; we keep trying to discover them in the countries we aim to launch in.

Matt Taddy

Vice President for Economic Technology and North America Chief Economist at Amazon.com

Matt Taddy is Vice President for Economic Technology and North America Chief Economist at Amazon.com. Previously, from 2008-2018 he was a Professor of Econometrics and Statistics at the University of Chicago Booth School of Business, where he developed their Data Science curriculum.

 He has also worked in a variety of industry positions including as a Principal Researcher at Microsoft and a research fellow at eBay, and his book “Business Data Science” was published by McGraw-Hill in 2019.