Before saying anything about the title, recall Bayes' rule
\[p(\theta |y) = \frac{{p(y|\theta )}p(\theta)}{{p(y)}}\]
The denominator $p(y)$ is called the marginal likelihood, and since it does not depend on $\theta$ it is treated as a constant, therefore
\[p(\theta |y) \propto p(y|\theta )p(\theta )\]
Here $p(\theta)$ is the prior, $p(y|\theta)$, which should be familiar by now, is the likelihood, and $p(\theta|y)$ is the posterior.
To get the posterior, we must calculate the product of the likelihood and the prior.
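Numerically, this product-then-rescale step is easy to see on a grid. Here is a minimal sketch (the two densities below are made up purely for illustration) showing that dividing by $p(y)$ only rescales the curve:

```python
import numpy as np

# A minimal numerical sketch (the densities here are made up for
# illustration): on a grid of theta values, the posterior is the pointwise
# product likelihood * prior, rescaled so that it integrates to one. The
# rescaling constant is exactly the marginal p(y).
theta = np.linspace(0.01, 10.0, 1000)       # grid over the parameter
likelihood = np.exp(-(theta - 4.0) ** 2)    # stand-in for p(y|theta) at the observed y
prior = np.exp(-theta / 2.0) / 2.0          # stand-in for p(theta): Exponential, scale 2

unnormalized = likelihood * prior           # p(y|theta) * p(theta)
p_y = unnormalized.sum() * (theta[1] - theta[0])  # Riemann-sum estimate of p(y)
posterior = unnormalized / p_y              # integrates to one; same shape as the product
```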
Now, we might collect some data $y$ and write down the likelihood as a function of $\theta$. What if the likelihood comes from a super complex model with ugly parameters? You will end up with an even uglier posterior.
Instead of doing that, we use a trick: look for a way to choose $p(\theta)$ so that $p(\theta|y)$ has the same functional form as $p(\theta)$. That is the idea of a conjugate prior.
For example (a not-so-ugly likelihood), if your likelihood is Poisson
\[p(y|\theta ) = \frac{{{e^{ - \theta }}{\theta ^y}}}{{y!}}\]
The nice conjugate prior is the Gamma, or $\theta \sim G(\alpha ,\beta )$ with shape $\alpha$ and scale $\beta$
\[p(\theta ) = \frac{{{\theta ^{\alpha - 1}}{e^{ - \theta /\beta }}}}{{\Gamma (\alpha ){\beta ^\alpha }}}\]
Therefore, dropping all factors that do not involve $\theta$,
\[p(\theta |y) \propto \left( {{e^{ - \theta }}{\theta ^y}} \right)\left( {{\theta ^{\alpha - 1}}{e^{ - \theta /\beta }}} \right)\]
\[ = {\theta ^{y + \alpha - 1}}{e^{ - \theta (1 + 1/\beta )}}\]
The posterior has the same functional form as the prior, just with updated parameters; in fact
\[\theta |y \sim G(y + \alpha ,\frac{1}{{1 + 1/\beta }})\]
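We can sanity-check this closed form numerically. Below is a minimal sketch using scipy, with made-up hyperparameters, comparing the conjugate Gamma posterior against a brute-force grid normalization of likelihood times prior:

```python
import numpy as np
from scipy.stats import gamma, poisson

# A minimal check of the conjugate update, assuming the scale
# parameterization used above (shape alpha, scale beta); the
# hyperparameter values are made up for illustration.
alpha, beta = 2.0, 3.0   # hypothetical Gamma prior hyperparameters
y = 5                    # a single observed count

# Closed-form posterior: Gamma(y + alpha, scale 1 / (1 + 1/beta))
posterior = gamma(a=y + alpha, scale=1.0 / (1.0 + 1.0 / beta))

# Brute force: normalize likelihood * prior on a grid of theta values.
theta = np.linspace(1e-6, 30.0, 3000)
unnormalized = poisson.pmf(y, theta) * gamma.pdf(theta, a=alpha, scale=beta)
grid_posterior = unnormalized / (unnormalized.sum() * (theta[1] - theta[0]))

# The two densities should agree up to grid error.
assert np.allclose(grid_posterior, posterior.pdf(theta), atol=1e-3)
```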