The steepest ascent optimization method is designed to find maximum likelihood estimates (MLEs) when the first derivative of the log-likelihood function is available but no good starting value for the MLE is known. This iterative method takes a ‘step’ in the direction of the derivative evaluated at the current point. After each step, one re-evaluates the derivative of the log-likelihood function and either takes another steepest ascent step or switches to the Newton-Raphson algorithm.
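As a minimal sketch of the idea, the following Python example (not from the source) applies steepest ascent to a Poisson log-likelihood, whose score function is sum(x)/lambda - n; the step size, tolerance, and starting value are illustrative choices:

```python
def steepest_ascent_poisson_mle(data, lam0=1.0, step=0.1, tol=1e-8, max_iter=10_000):
    """Steepest ascent on the Poisson log-likelihood in lambda.

    The score (first derivative of the log-likelihood) is sum(x)/lambda - n,
    so each iteration steps in the direction of that derivative.
    """
    n, s = len(data), sum(data)
    lam = lam0
    for _ in range(max_iter):
        score = s / lam - n        # derivative of the log-likelihood at lam
        if abs(score) < tol:       # derivative near zero: close to the MLE
            break
        lam += step * score        # take a step in the ascent direction
    return lam

data = [2, 4, 3, 5, 1]
lam_hat = steepest_ascent_poisson_mle(data)
# For the Poisson model the closed-form MLE is the sample mean,
# so the iteration should approach sum(data)/len(data).
```

Once the iterate is near the maximum (the derivative is small), switching to Newton-Raphson, which also uses the second derivative, gives much faster convergence.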