Table Of Contents

- Manual
- Getting Started
- Starting the Program
- Retrieving Data
- Manipulating Data
- The Variable List
- The Variable List Menu
- Filter Observations/Selecting
- Add New Variables
- Delete Variables
- Edit Metadata
- Set Replicate Weights
- New Variable Reserve
- Edit Value Labels
- Dummy Code Categorical Variable
- Collapse Categories of Categorical Variable
- Set Missing Values
- The Expression Evaluator

- Saving and Re-running Actions

- Sampling
- Procedures
- Measurement Models
- MML Models for Test Data
- Other Available Procedures

- Graphics
- Tools
- Estimation Methods
- Optimization Techniques
- Variance Estimation

- Post-hoc Procedures
- More user input instructions
- The User Interface
- Input Instructions
- Options
- Output Precision

- Glossary of Terms and Symbols


Newton-Raphson

In a situation where the first derivative of the log-likelihood function is available, an efficient method of finding the maximum of the pseudo log-likelihood equation is one developed by Berndt, Hall, Hall, and Hausman (BHHH; 1974), based on the Newton-Raphson algorithm. The method starts from an easily computed but imprecise estimate of the maximum likelihood estimators, then takes a succession of 'steps' based on the values of the first derivatives until a solution is found.

The Newton-Raphson algorithm, also known as Newton's method, is an iterative process for finding relative extrema of non-linear functions. Using the Newton-Raphson algorithm requires both the analytic first and second derivatives of the log-likelihood function with respect to the parameter vector **q**. Given a set of start values, **q**_{0}, we move one step closer to the maximum likelihood (ML) solution by adding to the current point a step computed from the first and second derivatives of the log-likelihood function, *l*:

**q**_{j+1} = **q**_{j} - H(**q**_{j})^{-1} S(**q**_{j})

Where S(**q**) is the vector of first derivatives with respect to the parameters, known as the *score vector*, and H(**q**) is the matrix of second derivatives, known as the *Hessian matrix*. While Newton's method has many desirable properties in a range of maximum likelihood problems, it works only when the negative of the Hessian matrix is positive definite. Unfortunately, this is not always the case, especially in early iterations. The BHHH method substitutes the sum, over observations, of each observation's score vector multiplied by its own transpose, i.e.,

-H(**q**) ≈ Σ_{i=1}^{N} S_{i}(**q**) S_{i}(**q**)′ (where S_{i}(**q**) is observation *i*'s contribution to the score vector),

for the negative Hessian matrix, thus eliminating the need for second derivatives and guaranteeing a positive definite matrix everywhere. The above process is repeated in successive steps until one of three quantities is sufficiently small: 1) the difference between the parameter estimates, i.e., |**q**_{j+1} - **q**_{j}|; 2) the difference between the likelihood estimates, i.e., *l*_{j+1} - *l*_{j}; or 3) the score vector, S(**q**_{j+1}).
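The iteration described above can be sketched in code. The following is a minimal illustration, not the program's own implementation: it applies the BHHH variant to a one-parameter Poisson log-likelihood (a hypothetical stand-in model chosen only for simplicity), replacing the negative Hessian with the sum of squared per-observation scores and stopping when any of the three convergence criteria from the text is sufficiently small.

```python
import numpy as np

def bhhh_poisson(y, lam0=1.0, tol=1e-8, max_iter=100):
    """Maximize the Poisson log-likelihood in its rate parameter via BHHH."""
    def loglik(lam):
        # Poisson log-likelihood, dropping the constant -log(y!) term
        return np.sum(y * np.log(lam) - lam)

    lam = lam0
    ll_old = loglik(lam)
    for _ in range(max_iter):
        scores = y / lam - 1.0      # per-observation score contributions S_i
        S = scores.sum()            # score vector (a scalar in this model)
        A = np.sum(scores ** 2)     # BHHH outer-product approximation to -H
        lam_new = lam + S / A       # Newton-type step using A in place of -H
        ll_new = loglik(lam_new)
        # stop when the parameter change, likelihood change, or score is small
        if (abs(lam_new - lam) < tol or abs(ll_new - ll_old) < tol
                or abs(S) < tol):
            return lam_new
        lam, ll_old = lam_new, ll_new
    return lam
```

For this toy model the ML estimate has a closed form (the sample mean), which makes it easy to check that the iteration lands where it should; in the regression models the manual describes, no closed form exists and the stepping procedure does the real work.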

Berndt, E. K., Hall, B. H., Hall, R. E., & Hausman, J. A. (1974). Estimation and inference in nonlinear structural models. *Annals of Economic and Social Measurement, 3/4,* 653-665.

Newton-Raphson can be chosen as the desired optimization technique for MML Regression, MML Ordinal Tables, MML Nominal Tables, NAEP Tables, NALS Tables, and MML Composite Regression. To select Newton-Raphson, first select the desired procedure from those listed above. Once the dialogue box opens and you finish entering all other necessary specifications, click on the *Advanced* button. Go to the "Optimization" window and scroll to select Newton-Raphson.