Table Of Contents

- Manual
- Getting Started
- Starting the Program
- Retrieving Data
- Manipulating Data
- The Variable List
- The Variable List Menu
- Filter Observations/Selecting
- Add New Variables
- Delete Variables
- Edit Metadata
- Set Replicate Weights
- New Variable Reserve
- Edit Value Labels
- Dummy Code Categorical Variable
- Collapse Categories of Categorical Variable
- Set Missing Values
- The Expression Evaluator

- Saving and Re-running Actions

- Sampling
- Procedures
- Measurement Models
- MML Models for Test Data
- Other Available Procedures

- Graphics
- Tools
- Estimation Methods
- Optimization Techniques
- Variance Estimation

- Post-hoc Procedures
- More user input instructions
- The User Interface
- Input Instructions
- Options
- Output Precision

- Glossary of Terms and Symbols
Steepest Ascent

The steepest ascent optimization method is designed to find the maximum likelihood estimate (MLE) in situations where the first derivative of the log-likelihood function is available but no good starting value for the MLE is known. This iterative method takes a 'step' in the direction of the function's derivative, evaluated at the current point. After a step has been taken, one re-evaluates the derivative of the log-likelihood function and can either take another steepest ascent step or switch to the Newton-Raphson algorithm.
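The basic step can be sketched as follows. This is a minimal illustration, not the program's own implementation; the normal-mean log-likelihood used here is a hypothetical example chosen because its derivative is simple, and the step-halving rule is one common way to keep each step ascending.

```python
def log_lik(theta, data):
    """Normal log-likelihood in the mean (unit variance), up to a constant.
    A hypothetical example; any l with a computable first derivative works."""
    return -0.5 * sum((x - theta) ** 2 for x in data)

def score(theta, data):
    """First derivative of log_lik with respect to theta."""
    return sum(x - theta for x in data)

def steepest_ascent_step(theta, data, step=1.0):
    """Take one step in the direction of the derivative, halving the
    step length until the log-likelihood actually increases."""
    g = score(theta, data)
    l0 = log_lik(theta, data)
    while step > 1e-12:
        candidate = theta + step * g       # move along the ascent direction
        if log_lik(candidate, data) > l0:  # accept only ascending steps
            return candidate
        step *= 0.5                        # the direction is only good locally
    return theta                           # no ascending step found (at the MLE)
```

Iterating this step from any starting value drives the estimate toward the sample mean, which is the MLE in this example.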

Because the Newton-Raphson method relies on second derivatives, its steps are not always ascending, particularly when the initial estimate of the parameter θ is far from the MLE. It is therefore often useful to begin with a steepest ascent step, and to fall back to a steepest ascent step if the Newton-Raphson method stops producing ascending steps.

The steepest ascent step has a simple principle: it chooses its direction as that of the first derivative of the log likelihood function. This is by definition the best local direction, i.e., the direction that produces the steepest ascent from the given point. However, the steepest ascent direction often changes dramatically after a short step. This method is not useful unless the steepest ascent direction remains an ascent direction for a sufficiently large step before another iteration is required.

When one has taken a steepest ascent step, one can re-evaluate the derivative of *l* and take another steepest ascent step, or start taking Newton-Raphson steps. This choice will depend on whether the current estimate is near the MLE and whether the new steepest ascent direction remains useful for a sufficient distance.
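One way to combine the two methods is to attempt a Newton-Raphson step at each iteration and fall back to steepest ascent whenever the Newton step fails to ascend. The sketch below is an illustrative assumption, not the program's actual rule; it uses a Cauchy log-likelihood for a single observation, a standard example in which the Newton-Raphson step moves away from the MLE when started far from it (the second derivative is positive in the tails).

```python
import math

def log_lik(theta, x=0.0):
    """Cauchy log-likelihood for one observation x (hypothetical example)."""
    return -math.log(1.0 + (theta - x) ** 2)

def score(theta, x=0.0):
    """First derivative of log_lik."""
    d = theta - x
    return -2.0 * d / (1.0 + d * d)

def hessian(theta, x=0.0):
    """Second derivative of log_lik; positive when |theta - x| > 1."""
    d = theta - x
    return -2.0 * (1.0 - d * d) / (1.0 + d * d) ** 2

def hybrid_step(theta):
    """Try a Newton-Raphson step; if it is not an ascending step (as happens
    far from the MLE), fall back to steepest ascent with step halving."""
    l0 = log_lik(theta)
    h = hessian(theta)
    if h < 0.0:                      # Newton step is sensible only where l is concave
        newton = theta - score(theta) / h
        if log_lik(newton) > l0:     # accept only if it ascends
            return newton
    g, step = score(theta), 1.0      # steepest ascent fallback
    while step > 1e-12:
        cand = theta + step * g
        if log_lik(cand) > l0:
            return cand
        step *= 0.5
    return theta
```

Started at theta = 5, the Hessian is positive, so the first several iterations are steepest ascent steps; once the estimate is close enough to the MLE for the log-likelihood to be locally concave, the Newton-Raphson steps take over and converge rapidly.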

Thisted, R. A. (1988). *Elements of Statistical Computing*. New York: Chapman and Hall.

Steepest Ascent can be chosen as the desired optimization technique for MML Regression, MML Ordinal Tables, MML Nominal Tables, NAEP Tables, NALS Tables, and MML Composite Regression. To select Steepest Ascent, first select the desired procedure from those listed above. Once the dialog box opens and you finish entering all other necessary specifications, click on the *Advanced* button. Go to the "Optimization" window and scroll to select Steepest Ascent.