The steepest ascent optimization method is designed to find the maximum likelihood estimator (MLE) when the first derivative of the log-likelihood function is available but no good starting value for the MLE is known. The method is iterative: at each stage one takes a 'step' in the direction of the derivative of the log-likelihood evaluated at the current point. After a step has been taken, one re-evaluates the derivative of the log-likelihood and can either take another steepest ascent step or switch to the Newton-Raphson algorithm.
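As an illustration, the following sketch (in Python; the data, the step size, and the function names are chosen purely for illustration and are not taken from the source) takes repeated steepest ascent steps toward the MLE of the mean of a normal sample with known variance 1, for which the MLE is simply the sample mean.

import numpy as np

def steepest_ascent_step(theta, grad_loglik, step_size):
    # One steepest ascent step: move theta in the direction of the
    # derivative of the log-likelihood evaluated at theta.
    return theta + step_size * grad_loglik(theta)

# Illustrative data: for an N(theta, 1) sample the derivative of the
# log-likelihood is sum(x - theta), so the MLE is the sample mean (1.1 here).
x = np.array([1.2, 0.8, 1.5, 0.9])
grad = lambda theta: np.sum(x - theta)

theta = 0.0                              # a poor starting value
for _ in range(25):
    theta = steepest_ascent_step(theta, grad, step_size=0.1)
print(theta)                             # close to 1.1, the MLE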
Because the Newton-Raphson method relies on second derivatives, its steps are not always ascending, particularly when the starting estimate of the parameter θ is far from the MLE. It is therefore often useful to begin with a steepest ascent step, and to return to steepest ascent whenever the Newton-Raphson method stops producing ascending steps.
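For a scalar parameter θ the two updates can be written, in standard notation (a sketch; this notation is not taken from the source), as
\[
\theta_{k+1} = \theta_k + \alpha\,\ell'(\theta_k) \quad (\text{steepest ascent}, \ \alpha > 0),
\qquad
\theta_{k+1} = \theta_k - \frac{\ell'(\theta_k)}{\ell''(\theta_k)} \quad (\text{Newton-Raphson}),
\]
where ℓ is the log-likelihood. When ℓ''(θ_k) > 0, which can happen far from the maximum, the Newton-Raphson step points opposite to the ascent direction, whereas the steepest ascent step increases ℓ whenever α is small enough and ℓ'(θ_k) ≠ 0.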
The principle of the steepest ascent step is simple: the step is taken in the direction of the first derivative of the log-likelihood function. By definition this is the best local direction, i.e., the direction that produces the steepest ascent from the current point. However, the steepest ascent direction often changes dramatically after a short step, and the method is useful only if that direction remains an ascent direction over a sufficiently large step before another iteration is required.
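Because the direction may cease to be an ascent direction after only a short distance, a common practical safeguard (sketched below; the names loglik and grad_loglik are placeholders, and step halving is only one of several possible rules) is to halve a trial step length until the log-likelihood actually increases.

def steepest_ascent_step_halving(theta, loglik, grad_loglik,
                                 step_size=1.0, max_halvings=30):
    # Halve the trial step until the log-likelihood increases; a small
    # enough positive step in the gradient direction must increase it
    # unless the gradient is (numerically) zero.
    direction = grad_loglik(theta)
    current = loglik(theta)
    for _ in range(max_halvings):
        candidate = theta + step_size * direction
        if loglik(candidate) > current:
            return candidate
        step_size /= 2.0
    return theta    # no ascending step found; theta is left unchanged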
Once a steepest ascent step has been taken, one can re-evaluate the derivative of the log-likelihood and take another steepest ascent step, or begin taking Newton-Raphson steps. The choice depends on whether the current estimate is near the MLE and on whether the new steepest ascent direction remains an ascent direction over a sufficient distance.
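One simple way to combine the two ideas, sketched here for a scalar parameter with illustrative function names rather than as the source's prescription, is to try a Newton-Raphson step first and fall back to steepest ascent when that step fails to increase the log-likelihood.

def hybrid_step(theta, loglik, grad_loglik, hess_loglik, step_size=0.1):
    # Try the Newton-Raphson update; keep it only if it ascends.
    g = grad_loglik(theta)
    newton = theta - g / hess_loglik(theta)
    if loglik(newton) > loglik(theta):
        return newton
    # Otherwise take a steepest ascent step instead; in practice this
    # fallback would itself use a safeguard such as the step halving above.
    return theta + step_size * g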