10:00 am - 1:00 pm
LINCS Seminars room
The recent growth of the collaborative economy, driven by peer-to-peer networks, has created the need for interfaces that connect different populations. Applications such as ride sharing, house rental, job search and dating websites link people together based on various preferences or compatibilities. This motivates the development of new algorithms that optimize these associations, also called matchings. Matching models have been studied at great length in the literature when part of the population is static. However, when customers arrive randomly over time, the problem is much more challenging and many questions remain open. This thesis tackles the matter through the lens of control in stochastic matching models.
These models are formalized as follows. Items of different classes arrive at the system according to a given probability distribution. Upon arrival, each item is matched with a compatible item according to a matching policy, and both items leave the system immediately. If no compatible item is present, the new arrival joins the queue of unmatched items of its class. Compatibilities between item classes are defined by a connected graph whose nodes represent the item classes and whose edges represent the compatibilities between them. Unmatched items incur a holding cost at each time step, and we model the problem as a Markov Decision Process.
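These dynamics can be sketched with a toy simulation. The compatibility graph, class names and arrival probabilities below are illustrative assumptions, and the greedy rule is just one possible matching policy, not the thesis's:

```python
import random
from collections import defaultdict

# Illustrative compatibility graph: nodes are item classes, edges the
# allowed matches (classes and probabilities are assumptions, not from
# the thesis).
COMPAT = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
ARRIVAL_PROBS = {"A": 0.5, "B": 0.3, "C": 0.2}

def step(queues, rng):
    """One time step: an item arrives and is matched greedily with a
    queued compatible item if one exists; otherwise it joins its queue."""
    cls = rng.choices(list(ARRIVAL_PROBS),
                      weights=list(ARRIVAL_PROBS.values()))[0]
    for other in sorted(COMPAT[cls]):
        if queues[other] > 0:
            queues[other] -= 1  # matched pair leaves the system immediately
            return
    queues[cls] += 1            # no compatible item: wait unmatched

rng = random.Random(42)
queues = defaultdict(int)
for _ in range(1000):
    step(queues, rng)
holding_cost = sum(queues.values())  # unit holding cost per unmatched item
```

One invariant of greedy matching is visible here: two compatible classes can never both have waiting items, since the later arrival would have been matched on the spot.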
In particular, we study the Bipartite Matching Model, which considers bipartite compatibility graphs and pairwise arrivals. We are interested in finding the optimal control for the discounted-cost and average-cost problems. For the N-shaped compatibility graph, we fully characterize the optimal matching policy. We also identify optimal matchings for more general bipartite graphs under additional assumptions on the costs.
We also analyze the performance of the General Matching Model, in which items arrive one by one and the compatibility graph is non-bipartite, under the First Come First Matched policy. We show that such a model may exhibit a non-intuitive behavior: increasing matching flexibility by adding edges to the compatibility graph may lead, when the system is close to its stability condition, to a larger average population in steady state.
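First Come First Matched itself is simple to sketch: the arriving item scans the waiting items from oldest to newest and matches the first compatible one. The triangle graph below is an illustrative non-bipartite example; the sketch shows the policy only, not the flexibility paradox:

```python
from collections import deque

# Illustrative non-bipartite compatibility graph (a triangle); the class
# names are assumptions for the sketch.
COMPAT = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}

def fcfm_average_population(arrivals):
    """Run First Come First Matched on a sequence of arriving classes and
    return the time-averaged number of unmatched items."""
    waiting = deque()   # (arrival_time, class) pairs, oldest first
    cumulative = 0
    for t, cls in enumerate(arrivals):
        for item in waiting:            # the oldest compatible item wins
            if item[1] in COMPAT[cls]:
                waiting.remove(item)    # matched pair leaves immediately
                break
        else:
            waiting.append((t, cls))
        cumulative += len(waiting)
    return cumulative / len(arrivals)
```

For example, `fcfm_average_population(["A", "B", "A", "C"])` matches each A on the following arrival, so the average unmatched population is 0.5.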
Finally, we use Reinforcement Learning to find the optimal threshold-type policy for Bipartite Matching Models. We develop an Actor-Critic algorithm, composed of a Temporal Difference critic and a Policy Gradient actor, that encodes this threshold-type structure and learns the optimal threshold within this set of policies.
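A minimal schematic of such an actor-critic, assuming a single-queue toy environment and a smoothed threshold actor P(match) = sigmoid(queue - theta), where the learned parameter theta plays the role of the threshold. The dynamics, cost and step sizes are illustrative assumptions, not the thesis's exact algorithm:

```python
import math
import random

def sigmoid(x):
    # numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def actor_prob(queue_len, theta):
    """Smoothed threshold actor: match with probability sigmoid(q - theta)."""
    return sigmoid(queue_len - theta)

def td_actor_critic_step(state, theta, values, alpha=0.01, beta=0.1, rng=random):
    """One TD(0) critic update and one policy-gradient actor update on a
    toy single-queue environment (assumed dynamics: matching removes an
    item, waiting admits one; holding cost = queue length)."""
    p = actor_prob(state, theta)
    match = rng.random() < p
    next_state = max(state - 1, 0) if match else state + 1
    cost = next_state
    # TD(0) error with reward = -cost (average-cost correction omitted
    # for brevity in this sketch)
    td_error = -cost + values.get(next_state, 0.0) - values.get(state, 0.0)
    values[state] = values.get(state, 0.0) + beta * td_error
    # d/dtheta log pi(a|s): -(1 - p) if we matched, +p if we waited
    grad_log = -(1.0 - p) if match else p
    theta += alpha * td_error * grad_log
    return next_state, theta

rng = random.Random(0)
theta, values, state = 0.0, {}, 3
for _ in range(500):
    state, theta = td_actor_critic_step(state, theta, values, rng=rng)
```

Restricting the actor to this one-parameter family is what makes the search tractable: the policy gradient only has to move a single threshold rather than a full state-action table.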
Keywords: Stochastic matching models, Markov decision processes, Optimal control, Braess paradox, Reinforcement learning
Link to attend: https://global.gotomeeting.com/join/212030933