The maximum likelihood method is a method of estimating an unknown parameter by maximizing the likelihood function.
Consider a sequence of random variables {y1, y2, …}, not necessarily independent or identically distributed. Assume that hn(·, θ0) is the joint density of the random variables y = (y1, …, yn). Suppose that the form of this function is known except for the parameter θ0, which is to be estimated. It is assumed that θ0 ∈ Θ, where the set Θ of possible parameter values is a subset of a finite-dimensional Euclidean space.
For each fixed y, the real-valued function
Ln(θ) = Ln(θ, y) = hn(y, θ), θ ∈ Θ,
is called the likelihood function, and its logarithm ln Ln(θ) is called the log-likelihood function.
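As a concrete illustration (the exponential model here is an assumption, not part of the original text): if y1, …, yn are independent draws from the density h(y, θ) = θ e^(−θy), the joint density factorizes into a product, so the log-likelihood becomes the sum ln Ln(θ) = n ln θ − θ (y1 + … + yn). A minimal sketch of this computation:

    import math

    def log_likelihood(theta, y):
        # Log-likelihood for i.i.d. exponential observations, an
        # illustrative model assumed here: h(y, theta) = theta * exp(-theta * y).
        # ln Ln(theta) = n * ln(theta) - theta * sum(y)
        n = len(y)
        return n * math.log(theta) - theta * sum(y)

    # Example: evaluate the log-likelihood on a small sample
    y = [0.5, 1.2, 0.8]
    print(log_likelihood(1.0, y))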
For fixed y, any value θ̂n = θ̂n(y) such that
Ln(θ̂n) = max_{θ ∈ Θ} Ln(θ)
is called a maximum likelihood estimate of the parameter θ0. In general, there is no guarantee that such an estimate exists for every value of y, but when it does, the function y ↦ θ̂n(y) is called the maximum likelihood estimator of the unknown parameter θ0.
When the maximum is attained in the interior of the parameter space Θ, and Ln(θ) is differentiable with respect to θ, the vector of partial derivatives ∂ ln Ln(θ) / ∂θ equals zero at that point. Thus, the estimate is a solution of the vector equation:
∂ ln Ln(θ) / ∂θ = 0.
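For the exponential model assumed in the illustration above, this equation can be solved in closed form; a short derivation (under the same assumed model):

    \[
    \frac{\partial \ln L_n(\theta)}{\partial \theta}
      = \frac{n}{\theta} - \sum_{i=1}^{n} y_i = 0
      \quad\Longrightarrow\quad
      \hat{\theta}_n = \frac{n}{\sum_{i=1}^{n} y_i} = \frac{1}{\bar{y}},
    \]

so in this case the maximum likelihood estimate is the reciprocal of the sample mean.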
In practice, the maximum likelihood estimate is found as follows (a numerical sketch is given after the list):
1. Construct the likelihood function Ln(θ) from the known density hn(y, θ0) of the distribution of the random variable y.
2. Take the logarithm of the function Ln(θ) obtained in the first step.
3. Find the first derivative ∂ ln Ln(θ) / ∂θ and set it equal to zero. The solution of the resulting vector equation is the estimate θ̂n.
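When the equation in step 3 cannot be solved in closed form, the maximization is done numerically. A minimal sketch, assuming the exponential model from the illustration above and using scipy.optimize (the model, the data values, and the function name neg_log_likelihood are assumptions for illustration):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_likelihood(theta, y):
        # Negative log-likelihood for the assumed i.i.d. exponential model;
        # minimizing it is equivalent to maximizing ln Ln(theta).
        return -(len(y) * np.log(theta) - theta * np.sum(y))

    y = np.array([0.5, 1.2, 0.8, 2.1, 0.3])
    # Steps 1-2: the log-likelihood is encoded in neg_log_likelihood.
    # Step 3: maximize numerically instead of solving the score equation by hand.
    result = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 100.0),
                             args=(y,), method="bounded")
    print(result.x)            # numerical maximum likelihood estimate
    print(len(y) / y.sum())    # closed-form estimate n / sum(y) for comparison

The two printed values should agree, since for this model the score equation has the closed-form solution derived earlier.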