In hidden Markov models, the system's behavior depends on latent (or hidden) variables. These models have many applications in contemporary AI. For now, focus on grasping the high-level themes and ideas; if the subject interests you, you can dive deeper into the technical details. The examples are particularly instructive.
A hidden Markov model (HMM) is a Markov model in which the observations depend on a latent (or "hidden") Markov process, referred to as X. An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about the state of X by observing Y. Because X is a Markov process, an HMM imposes the additional requirement that the outcome of Y at time t = t0 must be "influenced" exclusively by the outcome of X at t = t0, and that the outcomes of X and Y at times t < t0 must be conditionally independent of Y at t = t0 given X at time t = t0. Estimation of the parameters of an HMM can be performed using maximum likelihood; for linear chain HMMs, the Baum–Welch algorithm can be used to estimate the parameters.
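The structure described above can be made concrete with a small sketch. The following is a toy HMM with invented parameters (the weather states, activity observations, and all probabilities are illustrative assumptions, not from the source); it implements the forward algorithm, which computes the likelihood of an observation sequence by summing over all possible hidden state paths. This is the likelihood computation that underlies maximum-likelihood estimation, not the full Baum–Welch procedure.

```python
# Toy HMM sketch. All parameters below are hypothetical, chosen only to
# illustrate the definitions: hidden states X are weather conditions,
# observable outcomes Y are daily activities.
states = ["Rainy", "Sunny"]

pi = {"Rainy": 0.6, "Sunny": 0.4}                        # initial distribution over X
A = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},              # transition probabilities of X
     "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
B = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},  # emission probabilities:
     "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}  # Y depends only on current X

def forward(observations):
    """Forward algorithm: P(observation sequence), marginalizing over X.

    alpha[s] holds P(observations so far, current hidden state = s).
    """
    alpha = {s: pi[s] * B[s][observations[0]] for s in states}
    for o in observations[1:]:
        # Each step uses only the previous alpha: the Markov property of X.
        alpha = {s: B[s][o] * sum(alpha[r] * A[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

likelihood = forward(["walk", "shop", "clean"])
```

Note that the one-step update only ever consults the previous `alpha` values, which is exactly the conditional-independence requirement in the definition: Y at time t0 depends on X at t0 alone, and the past enters only through the hidden state distribution.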
Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition (such as speech, handwriting, and gesture recognition), part-of-speech tagging, musical score following, partial discharges, and bioinformatics.
Source: Wikipedia, https://en.wikipedia.org/wiki/Hidden_Markov_model
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.