A Markov chain is a sequence of random variables in which the next value depends only on the current one: $P(X_{t+1} \mid X_t, X_{t-1}, \ldots, X_1) = P(X_{t+1} \mid X_t)$.
Given a starting value (or state), the chain converges to a stationary distribution $\psi$ after some burn-in period of $m$ samples. We discard the first $m$ samples and use the remaining $n - m$ samples to estimate the expectation as follows.
\[E[f(X)] \approx \frac{1}{n - m} \sum_{t = m + 1}^{n} f(X_t)\]
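To make the burn-in estimate concrete, here is a minimal sketch using a hypothetical two-state chain (the transition probabilities and function names are my own illustration, not from the post). For this chain the stationary distribution is $(2/3, 1/3)$, so $E[X] = 1/3$, and the post-burn-in average should land close to that value.

```python
import random

def two_state_chain(n, p01=0.1, p10=0.2, seed=0):
    """Simulate a two-state Markov chain on {0, 1} with transition
    probabilities P(0->1)=p01 and P(1->0)=p10.  Its stationary
    distribution is (p10, p01) / (p01 + p10) = (2/3, 1/3) here."""
    rng = random.Random(seed)
    x, samples = 0, []
    for _ in range(n):
        if x == 0:
            x = 1 if rng.random() < p01 else 0
        else:
            x = 0 if rng.random() < p10 else 1
        samples.append(x)
    return samples

def expectation(f, samples, m):
    """Estimate E[f(X)] by averaging f over the samples after
    discarding the first m as burn-in."""
    kept = samples[m:]
    return sum(f(x) for x in kept) / len(kept)

samples = two_state_chain(n=200_000)
est = expectation(lambda x: x, samples, m=1_000)
# True stationary expectation E[X] is 1/3; est should be close.
```

Even though the chain starts deterministically in state 0, the dependence on the start washes out, which is why discarding the early samples gives a nearly unbiased average.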
We need the ability to construct chains whose stationary distribution is the distribution we are interested in, called the target distribution $\pi(x)$.
In short: how do we generate the next state (sample, random number) from the current one?