Expectation Maximization (EM)
- is an iterative method for finding maximum likelihood (MLE) or maximum a posteriori (MAP) estimates of the parameters 𝜃 of a statistical model, when the model depends on unobserved latent/hidden variables
- used in clustering (e.g., fitting Gaussian mixture models, where the cluster assignment of each point is the hidden variable)
EM Algorithm
INITIALIZATION STEP: initialize the parameters 𝜃 to any value(s)
EXPECTATION STEP: for each partially observed sample tuple, enumerate every possible completion of the hidden values. then weight each completion by its probability under the current parameters 𝜃 (this yields a larger, weighted "completed" data-set)
MAXIMIZATION STEP: estimate new values for the parameters 𝜃 from the completed, weighted data of Step 2. estimation is done by either:
- Maximum Likelihood Estimate (MLE) - maximize the likelihood function
- Maximum a Posteriori Estimate (MAP) - maximize the posterior
- based on Bayes' rule:
- 𝐏(𝜃|𝐷) ∝ 𝐏(𝐷|𝜃)·𝐏(𝜃)
- posterior ∝ likelihood·prior
- repeat from Step 2 (the Expectation Step) until convergence or the maximum number of iterations is reached
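The steps above can be sketched on the classic two-coin example: each sample records the number of heads in 10 tosses of one of two biased coins, and which coin was tossed is the hidden variable. The data, initial values, and helper names here are illustrative assumptions, not from the source; this is a minimal sketch, not a production implementation.

```python
import math

# Hypothetical data: heads observed in 10 tosses per trial; the coin used
# (A or B) in each trial is the unobserved/latent variable.
flips = [5, 9, 8, 4, 7]
n_tosses = 10

def binom_pmf(k, n, p):
    """Binomial probability of k heads in n tosses with head-probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# INITIALIZATION STEP: arbitrary starting guesses for the two head-probabilities.
theta_a, theta_b = 0.6, 0.5

for _ in range(50):
    # EXPECTATION STEP: for each sample, both completions (coin A, coin B)
    # are weighted by their probability under the current parameters 𝜃,
    # producing a larger, weighted completed data-set of head/tail counts.
    heads_a = tails_a = heads_b = tails_b = 0.0
    for k in flips:
        like_a = binom_pmf(k, n_tosses, theta_a)
        like_b = binom_pmf(k, n_tosses, theta_b)
        w_a = like_a / (like_a + like_b)  # weight of the "coin A" completion
        w_b = 1.0 - w_a
        heads_a += w_a * k
        tails_a += w_a * (n_tosses - k)
        heads_b += w_b * k
        tails_b += w_b * (n_tosses - k)
    # MAXIMIZATION STEP: MLE of each coin's bias on the weighted counts
    # (for a binomial, the MLE is just heads / total tosses).
    theta_a = heads_a / (heads_a + tails_a)
    theta_b = heads_b / (heads_b + tails_b)

print(round(theta_a, 2), round(theta_b, 2))
```

Because EM only converges to a local optimum, different initial guesses can converge to different (possibly label-swapped) estimates; here the coin initialized higher ends up explaining the heads-heavy trials.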
EM TODO
- EM In Practice - https://www.coursera.org/lecture/probabilistic-graphical-models-3-learning/em-in-practice-VAI6r
- EM Latent Variables - https://www.coursera.org/lecture/probabilistic-graphical-models-3-learning/latent-variables-iNq9y