Wednesday, October 26, 2005

Maximum Likelihood

Let $ \mathbf{X}=(X_1,\ldots,X_n)$ be a random vector and

$\displaystyle \lbrace f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta}) : \boldsymbol{\theta} \in \Theta \rbrace$
a statistical model parametrized by $ \boldsymbol{\theta}=(\theta_1,\ldots,\theta_k)$, the parameter vector in the parameter space $ \Theta$. The likelihood function is a map $ L: \Theta \rightarrow [0,\infty)$ given by
$\displaystyle L(\boldsymbol{\theta}\mid\boldsymbol{x}) = f_{\mathbf{X}}(\boldsymbol{x}\mid\boldsymbol{\theta}).$
In other words, the likelihood function has the same functional form as the probability density function; only the emphasis changes, from $ \boldsymbol{x}$ to $ \boldsymbol{\theta}$. The pdf is a function of the $ x$'s with the parameters $ \theta$ held fixed, whereas $ L$ is a function of the $ \theta$'s with the $ x$'s held fixed.
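To make the role reversal concrete, here is a small sketch in Python, assuming an i.i.d. Bernoulli$(\theta)$ sample (the data and the function name are illustrative, not from the original post). The same expression serves as a pmf when the data vary with $ \theta$ fixed, and as a likelihood when $ \theta$ varies with the data fixed.

```python
def bernoulli_likelihood(theta, x):
    """L(theta | x) = prod_i theta^{x_i} (1 - theta)^{1 - x_i}
    for i.i.d. Bernoulli data x (a list of 0s and 1s)."""
    s = sum(x)          # number of successes
    n = len(x)          # sample size
    return theta ** s * (1 - theta) ** (n - s)

# pmf view: theta fixed at 0.6, evaluate at the observed data
x = [1, 0, 1, 1, 0]
print(bernoulli_likelihood(0.6, x))   # 0.6^3 * 0.4^2 = 0.03456

# likelihood view: data fixed, scan theta over the parameter space
for theta in (0.2, 0.4, 0.6, 0.8):
    print(theta, bernoulli_likelihood(theta, x))
```

Note that with continuous densities the analogous likelihood can exceed 1, which is why $ L$ maps into $ [0,\infty)$ rather than $ [0,1]$.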

When there is no confusion, $ L(\boldsymbol{\theta}\mid\boldsymbol{x})$ is abbreviated to be $ L(\boldsymbol{\theta})$.

A parameter vector $ \hat{\boldsymbol{\theta}}$ such that $ L(\hat{\boldsymbol{\theta}})\geq L(\boldsymbol{\theta})$ for all $ \boldsymbol{\theta}\in\Theta$ is called a maximum likelihood estimate, or MLE, of $ \boldsymbol{\theta}$.
