Speaker:  Maxime Mouchet (LIP6)
Date:     02/02/2022
Time:     11:00 am - 12:00 pm
Location: Paris-Rennes Room (EIT Digital)
Abstract
The Hidden Markov Model (HMM) is a generative model in which the distribution of the observations depends on the state of a Markov chain. These models have been successfully applied to many problems, including part-of-speech tagging, handwriting recognition and time series clustering. A common issue is to infer the parameters of an HMM from a set of observations when the number of latent states is unknown. The traditional approach is to fit models of different dimensions and choose the one that offers a good compromise between the complexity of the model and its fit to the data, in order to prevent over-fitting.
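As an illustrative sketch of this model-selection approach (not taken from the talk), the snippet below computes the log-likelihood of a discrete-observation HMM with the scaled forward algorithm and scores a candidate model with BIC, which penalizes parameter count against fit. All names and the parameter-count formula are our own choices for a K-state, M-symbol HMM.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    # Scaled forward algorithm for a discrete-observation HMM.
    # pi: (K,)   initial state probabilities
    # A:  (K, K) transition matrix, A[i, j] = P(s_t = j | s_{t-1} = i)
    # B:  (K, M) emission matrix,   B[i, o] = P(obs_t = o | s_t = i)
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log(c)       # accumulate log P(obs) via scaling constants
        alpha = alpha / c
    return logp

def bic(loglik, n_params, n_obs):
    # Bayesian Information Criterion: lower is better.
    return n_params * np.log(n_obs) - 2.0 * loglik

# Example: score a toy 2-state, 2-symbol model on a short sequence.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.2, 0.8]])
obs = [0, 0, 1, 1, 0]
# Free parameters: (K-1) + K*(K-1) + K*(M-1) = 1 + 2 + 2 for K = M = 2.
score = bic(forward_loglik(pi, A, B, obs), 5, len(obs))
```

In practice one would fit models with K = 1, 2, 3, … by EM and keep the K with the lowest BIC; the point here is only the fit-versus-complexity trade-off that the HDP-HMM avoids.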
In this talk we will present the Infinite Hidden Markov Model, or HDP-HMM (Hierarchical Dirichlet Process Hidden Markov Model), which offers an alternative solution to this problem. The HDP-HMM generalizes the HMM by treating the number of states itself as an unknown quantity to be inferred through Bayesian inference. We will discuss the HDP-HMM and its variants, as well as an application to the clustering of network delay time series.
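The Dirichlet process underlying the HDP-HMM places a prior over a countably infinite set of states. A minimal sketch of its stick-breaking (GEM) construction, which shows how state weights can be generated without fixing their number in advance, is given below; the function name and concentration value are illustrative assumptions, not material from the talk.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    # First n_atoms weights of a Dirichlet-process draw via stick-breaking:
    # repeatedly break off a Beta(1, alpha) fraction of the remaining stick.
    # Smaller alpha concentrates mass on the first few states.
    fractions = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - fractions[:-1])))
    return fractions * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(2.0, 20, rng)  # truncated to 20 atoms for display
```

Only finitely many weights are non-negligible in any finite dataset, which is how inference in the HDP-HMM effectively selects the number of states from the data.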
References
[1] Beal, M. J., Ghahramani, Z., & Rasmussen, C. E. (2002). The infinite hidden Markov model. Advances in Neural Information Processing Systems 14, 577-584.
https://people.csail.mit.edu/jrennie/trg/papers/beal-ihmm-03.pdf
[2] Fox, E. B., Sudderth, E. B., Jordan, M. I., & Willsky, A. S. (2011). A sticky HDP-HMM with application to speaker diarization. The Annals of Applied Statistics, 5(2A), 1020-1056.
http://willsky.lids.mit.edu/publ_pdfs/204_pub_AAS.pdf