Maximum Likelihood Estimation

The maximum likelihood method estimates an unknown parameter by maximizing the likelihood function.

Consider a sequence of random variables {y1, y2, …}, not necessarily independent or identically distributed. Assume that hn(·, θ0) is the joint density of the random vector y = (y1, …, yn). Suppose that the form of this function is known except for the parameter θ0, which is to be estimated. It is assumed that θ0 ∈ Θ, where Θ, the set of admissible parameter values, is a subset of a finite-dimensional Euclidean space.

For each fixed y, the real-valued function

Ln(θ) = Ln(θ, y) = hn(y, θ), θ ∈ Θ,

is called the likelihood function, and its logarithm ln Ln(θ) is called the log-likelihood function.
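
For example, when the observations y1, …, yn are independent and identically distributed with a common density h(·, θ) (a special case, not required by the definition above), the joint density factorizes, so that

Ln(θ) = h(y1, θ) · … · h(yn, θ),   ln Ln(θ) = ln h(y1, θ) + … + ln h(yn, θ).

This is why the logarithm is usually the more convenient function to maximize: it turns the product into a sum.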

For fixed y, any value θ̂n = θ̂n(y) such that

Ln(θ̂n) = max{Ln(θ) : θ ∈ Θ}

is called a maximum likelihood estimate of the parameter θ0. Generally, there is no guarantee that such an estimate exists for all values of y, but when it does, the function θ̂n(y) is called the maximum likelihood estimator of the unknown parameter θ0.

When the maximum is attained in the interior of the parameter space Θ, and Ln(θ) is differentiable in θ, the vector of partial derivatives ∂ ln Ln(θ)/∂θ vanishes at that point. Thus, the estimate θ̂n is a solution of the vector equation

∂ ln Ln(θ)/∂θ = 0.
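
As a brief illustration (the model here is assumed for the example, not taken from the definitions above), let y1, …, yn be independent observations from the exponential density h(y, λ) = λ exp(−λy), λ > 0. Then

ln Ln(λ) = n ln λ − λ (y1 + … + yn),

and the equation ∂ ln Ln(λ)/∂λ = n/λ − (y1 + … + yn) = 0 yields the closed-form estimate λ̂n = n / (y1 + … + yn), the reciprocal of the sample mean.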

In practice, the maximum likelihood estimate is found by the following algorithm (a numerical sketch follows the list):

  1. Construct the likelihood function Ln(θ) from the known density hn(y, θ) of the distribution of the random vector y.

  2. Take the logarithm of the function Ln(θ) obtained in the first step.

  3. Find the first derivative ∂ ln Ln(θ)/∂θ and set it to zero. The solution of the resulting vector equation is the estimate vector θ̂n.
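
When the vector equation from step 3 has no closed-form solution, the maximum of the log-likelihood is found numerically. Below is a minimal sketch in Python, assuming a sample from a normal distribution with unknown mean and standard deviation; the sample values, the use of scipy.optimize.minimize, and the starting point are illustrative assumptions rather than part of the method.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Illustrative sample; in practice y holds the observed data.
    y = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4])

    def neg_log_likelihood(params):
        # Steps 1 and 2: ln Ln(theta) for the assumed normal density,
        # negated because minimize() searches for a minimum.
        mu, sigma = params
        if sigma <= 0:  # density is undefined for non-positive sigma
            return np.inf
        return -np.sum(norm.logpdf(y, loc=mu, scale=sigma))

    # Step 3, performed numerically: the optimizer drives the gradient
    # of ln Ln(theta) to zero instead of solving the equation by hand.
    result = minimize(neg_log_likelihood, x0=[y.mean(), y.std()],
                      method="Nelder-Mead")
    mu_hat, sigma_hat = result.x
    print("mu_hat =", mu_hat, "sigma_hat =", sigma_hat)

For the normal model the optimizer merely reproduces the closed-form answers (the sample mean and the square root of the average squared deviation), but the same pattern applies unchanged to models for which step 3 cannot be solved analytically.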

See also:

Library of Methods and Models | IMLESettings