Table Of Contents

- Manual
- Getting Started
- Starting the Program
- Retrieving Data
- Manipulating Data
- The Variable List
- The Variable List Menu
- Filter Observations/Selecting
- Add New Variables
- Delete Variables
- Edit Metadata
- Set Replicate Weights
- New Variable Reserve
- Edit Value Labels
- Dummy Code Categorical Variable
- Collapse Categories of Categorical Variable
- Set Missing Values
- The Expression Evaluator

- Saving and Re-running Actions

- Sampling
- Procedures
- Measurement Models
- MML Models for Test Data
- Other Available Procedures

- Graphics
- Tools
- Estimation Methods
- Optimization Techniques
- Variance Estimation

- Post-hoc Procedures
- More user input instructions
- The User Interface
- Input Instructions
- Options
- Output Precision

- Glossary of Terms and Symbols

EM Algorithm

The EM algorithm is one of several optimization techniques that can be used to obtain Marginal Maximum Likelihood (MML) estimates of item parameters under IRT models when the variable of interest (e.g., ability) is only partly observed. The algorithm iterates a two-phase cycle until the maximum likelihood estimates are found. Phase one of each iteration, known as the *E-step* (where *E* stands for expectation), computes the provisional expected frequency and the provisional expected sample size. Phase two, known as the *M-step* (where *M* stands for maximization), computes MML estimates of the parameters by maximizing the expected log-likelihood function. The cycle is repeated until the estimates become stable at a level of precision specified by the analyst.
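
To make the two phases concrete, the sketch below works through one common instance of this cycle: MML estimation of difficulty parameters for a Rasch (one-parameter logistic) model in the style of Bock and Aitkin, with a fixed quadrature grid standing in for the unobserved ability distribution. The simulated data, the quadrature grid, and every name in the code are assumptions made for this illustration; they are not part of, and do not call, the program described in this manual.

```python
# Minimal sketch of an MML/EM cycle for a Rasch model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary item responses for N persons and J items (assumed data).
N, J = 1000, 6
true_b = np.linspace(-1.5, 1.5, J)            # true item difficulties
theta = rng.normal(size=N)                    # latent abilities (unobserved)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b[None, :])))
X = (rng.random((N, J)) < prob).astype(float)

# Fixed quadrature grid approximating a standard normal ability prior.
K = 21
nodes = np.linspace(-4.0, 4.0, K)
weights = np.exp(-0.5 * nodes**2)
weights /= weights.sum()

b = np.zeros(J)                               # provisional item difficulties
for iteration in range(200):
    # E-step: posterior weight of each quadrature node for each person,
    # then the provisional expected sample size (n_k) and the provisional
    # expected frequency of correct responses (r_jk) at each node.
    P = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))     # K x J
    loglik = X @ np.log(P).T + (1.0 - X) @ np.log(1.0 - P).T     # N x K
    post = np.exp(loglik) * weights[None, :]
    post /= post.sum(axis=1, keepdims=True)
    n_k = post.sum(axis=0)        # expected sample size per node
    r_jk = post.T @ X             # expected correct counts per node and item

    # M-step: maximize each item's expected log-likelihood with respect to
    # its difficulty via one Newton step.
    grad = (n_k[:, None] * P - r_jk).sum(axis=0)        # d l_j / d b_j
    hess = -(n_k[:, None] * P * (1.0 - P)).sum(axis=0)  # d^2 l_j / d b_j^2
    new_b = b - grad / hess

    # Stop once the estimates are stable at the requested precision.
    if np.max(np.abs(new_b - b)) < 1e-6:
        b = new_b
        break
    b = new_b

print("estimated difficulties:", np.round(b, 2))
print("true difficulties:     ", np.round(true_b, 2))
```

With simulated data of this size, the estimated difficulties typically land close to the true values, which is simply a check that the E-step counts and the M-step updates are consistent with each other.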

The EM algorithm was developed as a generalization of several earlier methods for maximum likelihood estimation when the data are only partly observed (Dempster, Laird, & Rubin, 1977). It has become widely used because of its conceptual simplicity and ease of implementation. Its main disadvantage is that it converges slowly, and generally only at a linear rate.

The EM algorithm is an iterative one, with each iteration consisting of two steps. The first step (known as the E-step) computes the expected log-likelihood function of the complete data, *l_j*, based on the observed data and the provisional parameter estimates from the previous iteration. The second step (known as the M-step) maximizes this expected log-likelihood with respect to the item parameters to produce updated estimates.

This two-step process is repeated until convergence, defined by a precision criterion specified by the analyst.
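
In standard EM notation (which this manual does not otherwise use), with the model parameters, the observed data, and the unobserved data written as θ, X, and Z respectively, iteration t + 1 can be summarized as follows; the symbols are generic and are not quantities defined by this program.

$$
\text{E-step:}\qquad Q\left(\theta \mid \theta^{(t)}\right) = \mathrm{E}_{Z \mid X,\, \theta^{(t)}}\left[\log L(\theta;\, X, Z)\right]
$$

$$
\text{M-step:}\qquad \theta^{(t+1)} = \arg\max_{\theta}\; Q\left(\theta \mid \theta^{(t)}\right)
$$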

Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. *Psychometrika, 46,* 443-459.

Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). *Journal of the Royal Statistical Society, Series B, 39,* 1-38.

Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. *Applied Psychological Measurement, 16,* 159-176.

The EM algorithm can be chosen as the optimization technique for MML Regression, MML Ordinal Tables, MML Nominal Tables, NAEP Tables, NALS Tables, and MML Composite Regression. To select it, first open the desired procedure from those listed above. Once the dialogue box opens and you have entered all other necessary specifications, click the *Advanced* button, go to the "Optimization" window, and scroll to select *EM Algorithm*.