14 - EM Algorithms
Published online by Cambridge University Press: 05 June 2012
Summary
Introduction
In Chapter 8, we discussed methods for maximizing the log-likelihood (LL) function. As models become more complex, maximization by these methods becomes more difficult. Several issues contribute to the difficulty. First, greater flexibility and realism in a model are usually attained by increasing the number of parameters. However, the procedures in Chapter 8 require that the gradient be calculated with respect to each parameter, which becomes increasingly time-consuming as the number of parameters rises. The Hessian, or approximate Hessian, must be calculated and inverted; with a large number of parameters, the inversion can be numerically difficult. Also, as the number of parameters grows, the search for the maximizing values takes place over a higher-dimensional space, so that locating the maximum requires more iterations. In short, each iteration takes longer and more iterations are required.
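To make the cost concrete, the sketch below shows one Newton-Raphson iteration of the kind discussed in Chapter 8, with the gradient and Hessian taken by finite differences. This is an illustrative example, not code from the text: the quadratic `ll` and step size `h` are hypothetical. With K parameters, the gradient needs O(K) likelihood evaluations, the Hessian O(K²), and the Newton system an O(K³) solve, which is the scaling behind the "each iteration takes longer" point above.

```python
import numpy as np

def newton_step(ll, beta, h=1e-5):
    """One Newton-Raphson step toward the maximum of ll at beta.
    Derivatives are approximated by central finite differences, so the
    cost grows rapidly with the number of parameters K: O(K) likelihood
    evaluations for the gradient, O(K^2) for the Hessian, plus an
    O(K^3) solve of the Newton system (which can also be
    ill-conditioned when K is large)."""
    K = len(beta)
    e = np.eye(K) * h
    # Gradient: 2K likelihood evaluations
    grad = np.array([(ll(beta + e[k]) - ll(beta - e[k])) / (2 * h)
                     for k in range(K)])
    # Hessian: O(K^2) likelihood evaluations
    hess = np.empty((K, K))
    for i in range(K):
        for j in range(K):
            hess[i, j] = (ll(beta + e[i] + e[j]) - ll(beta + e[i] - e[j])
                          - ll(beta - e[i] + e[j]) + ll(beta - e[i] - e[j])
                          ) / (4 * h * h)
    # Newton step: solve H * step = -grad
    step = np.linalg.solve(hess, -grad)
    return beta + step

# Hypothetical example: a concave quadratic LL with maximum at (1, 2)
ll = lambda b: -((b[0] - 1.0) ** 2 + 2.0 * (b[1] - 2.0) ** 2)
beta = newton_step(ll, np.array([0.0, 0.0]))
```

Because this example LL is exactly quadratic, a single step lands (up to finite-difference error) on the maximum at (1, 2); the point is the evaluation count, which grows quadratically in K before the cubic-cost solve.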
Second, the LL function for simple models is often approximately quadratic, such that the procedures in Chapter 8 operate effectively. As the model becomes more complex, however, the LL function usually becomes less like a quadratic, at least in some regions of the parameter space. This issue can manifest itself in two ways. The iterative procedure can get “stuck” in the nonquadratic areas of the LL function, taking tiny steps without much improvement in the LL. Or the procedure can repeatedly “bounce over” the maximum, taking large steps in each iteration but without being able to locate the maximum.
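Both failure modes can be reproduced with hypothetical one-parameter examples (these LL functions are illustrative, not from the text). For ll(b) = -b⁴, which is much flatter than a quadratic near its maximum, each Newton step shrinks b only by a factor of 2/3: tiny steps, little improvement. For ll(b) = -√(1 + b²), which flattens in the tails, a start with |b| > 1 makes each step overshoot the maximum by more than the last: the iterates bounce over the maximum with growing magnitude.

```python
import numpy as np

def newton_iterates(grad, hess, b0, n=6):
    """Run n Newton-Raphson steps b <- b - grad(b)/hess(b) on a
    one-dimensional parameter, returning the whole path."""
    b, path = b0, [b0]
    for _ in range(n):
        b = b - grad(b) / hess(b)
        path.append(b)
    return path

# "Stuck": ll(b) = -b**4, maximum at 0.  Each step is b <- (2/3) * b,
# so progress is geometric but slow -- tiny steps in a flat region.
stuck = newton_iterates(lambda b: -4 * b**3,
                        lambda b: -12 * b**2, b0=3.0)

# "Bouncing": ll(b) = -sqrt(1 + b**2), maximum at 0.  Each step is
# b <- -b**3, so from |b| > 1 the iterates flip sign and grow,
# repeatedly leaping over the maximum without locating it.
bounce = newton_iterates(lambda b: -b / np.sqrt(1 + b**2),
                         lambda b: -(1 + b**2) ** -1.5, b0=1.2)
```

The first path creeps toward 0 without reaching it in any small number of iterations; the second alternates in sign with exploding magnitude, exactly the two symptoms described above.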
Discrete Choice Methods with Simulation, pp. 347-370. Publisher: Cambridge University Press. Print publication year: 2009.