Assume that there are m functions $f_1(\beta), \dots, f_m(\beta)$ of n variables $\beta = (\beta_1, \dots, \beta_n)$. It is required to minimize the sum of squares:

$$S(\beta) = \sum_{i=1}^{m} f_i^2(\beta).$$
If the initial approximation is $\beta^0$, the next approximation can be found by the formula:

$$\beta^{s+1} = \beta^s + \alpha_s p_s,$$

where:
$p_s = -B_s^{-1} \nabla f$ is the search direction.
$\alpha_s$ is the step size. It can be found by line search as an approximate minimizer in $\alpha$ of the one-dimensional function $\varphi(\alpha) = S(\beta^s + \alpha p_s)$. Here $B_s$ is the approximation of the Hessian matrix of the function $S$ at the point $\beta^s$.
$\nabla f$ is the gradient vector of $S$, assembled from $f_1, \dots, f_m$, at the point $\beta^s$; for the sum of squares it equals $2 J^T f(\beta^s)$, where $J$ is the Jacobian of $(f_1, \dots, f_m)$.
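To make one iteration concrete, here is a minimal Python sketch under the definitions above; the names step, f, and jac, as well as the simple step-halving line search, are illustrative assumptions rather than the platform's actual implementation:

```python
import numpy as np

def step(f, jac, beta, H):
    """One iteration beta^s -> beta^{s+1} under the definitions above.

    f    : callable returning the residual vector (f_1, ..., f_m)
    jac  : callable returning the m x n Jacobian of f at a point
    H    : current approximation of the inverse Hessian B_s^{-1}
    """
    S = lambda b: float(f(b) @ f(b))       # criterion function: sum of squares
    grad = 2.0 * jac(beta).T @ f(beta)     # gradient of S at beta^s
    p = -H @ grad                          # search direction p_s = -B_s^{-1} grad

    alpha, S0 = 1.0, S(beta)
    while alpha > 1e-12:                   # halving line search for alpha_s
        if S(beta + alpha * p) < S0:
            return beta + alpha * p
        alpha *= 0.5
    return beta                            # no step reduced S
```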
On the first iteration the Hessian matrix is approximated by the Gauss-Newton matrix $B_0 = 2 J^T J$, where $J$ is the Jacobian of the vector function $f = (f_1, \dots, f_m)$ at the point $\beta^0$; that is, the Gauss-Newton direction is selected. On subsequent iterations the approximation of the inverse Hessian matrix is updated using the BFGS formula (Broyden-Fletcher-Goldfarb-Shanno), provided that the curvature condition $y_s^T \delta_s > 0$ is satisfied, where $\delta_s = \beta^{s+1} - \beta^s$ and $y_s$ is the corresponding change of the gradient; otherwise the approximation remains unchanged.
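The BFGS update of the inverse Hessian approximation is standard; a sketch with the curvature safeguard described above (the function name bfgs_update and the eps tolerance are assumptions):

```python
import numpy as np

def bfgs_update(H, delta, y, eps=1e-10):
    """BFGS update of the inverse Hessian approximation H, where
    delta = beta^{s+1} - beta^s and y is the change of the gradient."""
    curv = y @ delta
    if curv <= eps:                        # curvature condition y^T delta > 0 fails:
        return H                           # leave the approximation unchanged
    rho = 1.0 / curv
    I = np.eye(len(delta))
    V = I - rho * np.outer(delta, y)
    return V @ H @ V.T + rho * np.outer(delta, delta)
```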
If on a subsequent iteration the line search procedure finds that the criterion function is not reduced for any step value $\alpha_s$, the direction $p_s$ is replaced by the Gauss-Newton direction (as on the first iteration) and the step size is calculated again. If this does not help to find an acceptable step value, the algorithm stops.
The iterative method stops if one of the following conditions is met (a combined sketch of the whole procedure follows the list):
The relative change of the solution estimate on the iteration does not exceed the specified precision value.
The maximum number of iterations is reached.
The iteration did not reduce the criterion function being minimized (the sum of squares).
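Putting the pieces together, the following self-contained Python sketch shows the whole procedure, including the Gauss-Newton initialization and fallback and the three stopping conditions; all names, tolerances, and the step-halving line search are assumptions for illustration, not the library's actual code:

```python
import numpy as np

def minimize_sum_of_squares(f, jac, beta0, tol=1e-8, max_iter=100):
    """Hybrid Gauss-Newton/BFGS minimization of S(beta) = sum_i f_i(beta)^2.

    f    : callable returning the residual vector (f_1, ..., f_m)
    jac  : callable returning the m x n Jacobian of f at a point
    """
    S = lambda b: float(f(b) @ f(b))          # criterion function

    def gauss_newton_H(b):                    # inverse of the GN Hessian 2 J^T J
        J = jac(b)
        return np.linalg.inv(2.0 * J.T @ J)

    def line_search(b, p, S0):                # step halving until S decreases
        alpha = 1.0
        while alpha > 1e-12:
            if S(b + alpha * p) < S0:
                return alpha
            alpha *= 0.5
        return 0.0                            # no step reduces S

    beta = np.asarray(beta0, dtype=float)
    H = gauss_newton_H(beta)                  # first iteration: GN direction
    grad = 2.0 * jac(beta).T @ f(beta)

    for _ in range(max_iter):                 # stop 2: max iterations reached
        S0 = S(beta)
        p = -H @ grad                         # p_s = -B_s^{-1} grad
        alpha = line_search(beta, p, S0)
        if alpha == 0.0:                      # fall back to the GN direction
            H = gauss_newton_H(beta)
            p = -H @ grad
            alpha = line_search(beta, p, S0)
            if alpha == 0.0:                  # stop 3: S cannot be reduced
                break
        beta_new = beta + alpha * p
        grad_new = 2.0 * jac(beta_new).T @ f(beta_new)
        delta, y = beta_new - beta, grad_new - grad
        if y @ delta > 1e-10:                 # BFGS update if curvature holds
            rho = 1.0 / (y @ delta)
            V = np.eye(len(beta)) - rho * np.outer(delta, y)
            H = V @ H @ V.T + rho * np.outer(delta, delta)
        small = np.linalg.norm(delta) <= tol * (1.0 + np.linalg.norm(beta))
        beta, grad = beta_new, grad_new
        if small:                             # stop 1: relative change below tol
            break
    return beta
```

For example, fitting a parametric model to observed data reduces to calling minimize_sum_of_squares with residuals f(beta) = model(x, beta) - y.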
See also:
Library of Methods and Models | ARIMA | ARIMA Model Coefficient Estimation