Handbook of Labor Economics, Vol. 4, No. Suppl PA, 2011

ISSN: 1573-4463

doi: 10.1016/S0169-7218(11)00410-2

Chapter 4. The Structural Estimation of Behavioral Models: Discrete Choice Dynamic Programming Methods and Applications

Michael P. Keane*, Petra E. Todd**, Kenneth I. Wolpin**


* University of Technology, Sydney and Arizona State University

** University of Pennsylvania

Abstract

The purpose of this chapter is twofold: (1) to provide an accessible introduction to the methods of structural estimation of discrete choice dynamic programming (DCDP) models and (2) to survey the contributions of applications of these methods to substantive and policy issues in labor economics. The first part of the chapter describes solution and estimation methods for DCDP models using, for expository purposes, a prototypical female labor force participation model. The next part reviews the contribution of the DCDP approach to three leading areas in labor economics: labor supply, job search and human capital. The final section discusses approaches to validating DCDP models.

JEL classification

• J • C51 • C52 • C54

Keywords

• Structural estimation • Discrete choice • Dynamic programming • Labor supply • Job search • Human capital

1 Introduction

The purpose of this chapter is twofold: (1) to provide an accessible introduction to the methods of structural estimation of discrete choice dynamic programming (DCDP) models and (2) to survey the contributions of applications of these methods to substantive and policy issues in labor economics.1 The development of estimation methods for DCDP models over the last 25 years has opened up new frontiers for empirical research in labor economics as well as other areas such as industrial organization, economic demography, health economics, development economics and political economy.2 Reflecting the generality of the methodology, the first DCDP papers, associated with independent contributions by Gotz and McCall (1984), Miller (1984), Pakes (1986), Rust (1987) and Wolpin (1984), addressed a variety of topics, foreshadowing the diverse applications to come in labor economics and other fields. Gotz and McCall considered the sequential decision to re-enlist in the military, Miller the decision to change occupations, Pakes the decision to renew a patent, Rust the decision to replace a bus engine and Wolpin the decision to have a child.

The first part of this chapter provides an introduction to the solution and estimation methods for DCDP models. We begin by placing the method within the general latent variable framework of discrete choice analysis. This general framework nests static and dynamic models and nonstructural and structural estimation approaches. Our discussion of DCDP models starts by considering an agent making a binary choice. For concreteness, and for simplicity, we take as a working example the unitary model of a married couple’s decision about the woman’s labor force participation. To fix ideas, we use the static model with partial wage observability, that is, when wage offers are observed only for women who are employed, to draw the connection between theory, data and estimation approaches. In that context, we delineate several goals of estimation, for example, testing theory or evaluating counterfactuals, and discuss the ability of alternative estimation approaches, encompassing those that are parametric or nonparametric and structural or nonstructural, to achieve those goals. We show how identification issues relate to what one can learn from estimation.

The discussion of the static model sets the stage for dynamics, which we introduce again, for expository purposes, within the labor force participation example by incorporating a wage return to work experience (learning by doing).3 A comparison of the empirical structure of the static and dynamic models reveals that the dynamic model is, in an important sense, a static model in disguise. In particular, the essential element in the estimation of both the static and dynamic model is the calculation of a latent variable representing the difference in payoffs associated with the two alternatives (in the binary case) that may be chosen. In the static model, the latent variable is the difference in alternative-specific utilities. In the case of the dynamic model, the latent variable is the difference in alternative-specific value functions (expected discounted values of payoffs). The only essential difference between the static and dynamic cases is that alternative-specific utilities are more easily calculated than alternative-specific value functions, which require solving a dynamic programming problem. In both cases, computational considerations play a role in the choice of functional forms and distributional assumptions.

There are a number of modeling choices in all discrete choice analyses, although some are more important in the dynamic context because of computational issues. Modeling choices include the number of alternatives, the size of the state space, the error structure and distributional assumptions and the functional forms for the structural relationships. In addition, in the dynamic case, one must make an assumption about how expectations are formed.4 To illustrate the DCDP methodology, the labor force participation model assumes additive, normally distributed, iid over time errors for preferences and wage offers. We first discuss the role of exclusion restrictions in identification, and work through the solution and estimation procedure. We then show how a computational simplification can be achieved by assuming errors to be independent type 1 extreme value (Rust, 1987) and describe the model assumptions that are consistent with adopting that simplification. Although temporal independence of the unobservables is often assumed, the DCDP methodology does not require it. We show how the solution and estimation of DCDP models is modified to allow for permanent unobserved heterogeneity and for serially correlated errors. In the illustrative model, the state space was chosen to be of a small finite dimension. We then describe the practical problem that arises in implementing the DCDP methodology as the state space expands, the well-known curse of dimensionality (Bellman, 1957), and describe suggested practical solutions found in the literature including discretization, approximation and randomization.
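The computational simplification from assuming independent type 1 extreme value errors (Rust, 1987) rests on a closed form for the expected maximum over alternatives: Euler's constant plus the log-sum-exp of the alternative-specific values, with choice probabilities taking the multinomial logit form. A minimal sketch (variable names are ours, not the chapter's), with a Monte Carlo check of the closed form:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def logsum(v):
    """Closed-form E[max_j (v_j + eps_j)] when the eps_j are iid type 1
    extreme value (location 0, scale 1): Euler's constant plus log-sum-exp."""
    v = np.asarray(v, dtype=float)
    m = v.max()
    return EULER_GAMMA + m + np.log(np.exp(v - m).sum())

def choice_probs(v):
    """Multinomial logit choice probabilities implied by the same shocks."""
    v = np.asarray(v, dtype=float)
    e = np.exp(v - v.max())
    return e / e.sum()

# Monte Carlo check of the closed form
rng = np.random.default_rng(0)
v = np.array([1.0, 0.5, -0.2])
eps = rng.gumbel(size=(200_000, 3))  # standard type 1 extreme value draws
mc = (v + eps).max(axis=1).mean()
print(logsum(v), mc)  # the two should agree closely
```

This is why the extreme value assumption is attractive in the dynamic setting: the expectation of the maximum of the alternative-specific value functions, which otherwise requires numerical integration, becomes an analytic expression.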

To illustrate the DCDP framework in a multinomial choice setting, we extend the labor force participation model to allow for a fertility decision at each period and for several levels of work intensity. In that context, we also consider the implications of introducing nonadditive errors (that arise naturally within the structure of models that fully specify payoffs and constraints) and general functional forms. It is a truism that any dynamic optimization model that can be (numerically) solved can be estimated.

Throughout the presentation, the estimation approach is assumed to be maximum likelihood or, as is often the case when there are many alternatives, simulated maximum likelihood. However, with simulated data from the solution to the dynamic programming problem, other methods, such as minimum distance estimation, are also available. We do not discuss those methods because, except for solving the dynamic programming model, their application is standard. Among the more recent developments in the DCDP literature is a Bayesian approach to the solution and estimation of DCDP models. Although the method has the potential to reduce the computational burden associated with DCDP models, it has not yet found wide application. We briefly outline the approach. All of these estimation methods require that the dynamic programming problem be fully solved (numerically). We complete the methodology section with a brief discussion of a method that does not require solving the full dynamic programming problem (Hotz and Miller, 1993).
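The Hotz and Miller (1993) approach exploits the fact that, under extreme value shocks, differences in alternative-specific values can be recovered directly from conditional choice probabilities, which is what permits estimation without fully solving the dynamic program. A minimal sketch of the binary-choice inversion (illustrative only):

```python
import numpy as np

def value_diff_from_ccp(p1):
    """Hotz-Miller inversion for a binary choice under type 1 extreme value
    shocks: the difference in alternative-specific values equals the
    log-odds of the conditional choice probability."""
    p1 = np.asarray(p1, dtype=float)
    return np.log(p1) - np.log(1.0 - p1)

# round trip: value differences -> choice probabilities -> value differences
v_diff = np.array([-1.0, 0.0, 2.0])
p1 = 1.0 / (1.0 + np.exp(-v_diff))
print(value_diff_from_ccp(p1))  # recovers v_diff
```

In practice the choice probabilities on the right-hand side are estimated nonparametrically from the data, so the inversion substitutes estimated probabilities for a backward-recursion solution.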

Applications of the DCDP approach within labor economics have spanned most major areas of research. We discuss the contributions of DCDP applications in three main areas: (i) labor supply, (ii) job search and (iii) schooling and career choices. Although the boundaries among these areas are not always clear and these areas do not exhaust all of the applications of the method in labor economics, they form a reasonably coherent taxonomy within which to demonstrate key empirical contributions of the approach.5 In each area, we show how the DCDP applications build on the theoretical insights and empirical findings in the prior literature. We highlight the findings of the DCDP literature, particularly those that involve counterfactual scenarios or policy experiments.

The ambitiousness of the research agenda that the DCDP approach can accommodate is a major strength. This strength is purchased at a cost. To be able to perform counterfactual analyses, DCDP models must rely on extra-theoretic modeling choices, including functional form and distributional assumptions. Although the DCDP approach falls short of an assumption-free ideal, as do all other empirical approaches, it is useful to ask whether there exists convincing evidence about the credibility of these exercises. In reviewing the DCDP applications, we pay careful attention to the model validation exercises that were performed. The final section of the chapter addresses the overall issue of model credibility.

2 The Latent Variable Framework for Discrete Choice Problems

The development of the DCDP empirical framework was a straightforward and natural extension of the static discrete choice framework. The common structure they share is based on the latent variable specification, the building block for all economic models of discrete choice. To illustrate the general features of the latent variable specification, consider a binary choice model in which an economic agent with imperfect foresight, denoted by image, makes a choice at each discrete period image, from image, between two alternatives image. In the labor economics context, examples might be the choice of whether to accept a job offer or remain unemployed or whether to attend college or enter the labor force. The outcome is determined by whether a latent variable, image, reflecting the difference in the (expected) payoffs of the image and image alternatives, crosses a scalar threshold value, which, without loss of generality, is taken to be zero. The preferred alternative is the one with the largest payoff, i.e., where image if image and image otherwise.

In its most general form, the latent variable may be a function of three types of variables: image, a vector of the history of past choices image, image, a vector of contemporaneous and lagged values of image additional variables (image; image) that enter the decision problem, and image, a vector of contemporaneous and lagged unobservables that also enter the decision problem.6 The agent’s decision rule at each age is given by whether the latent variable crosses the threshold, that is,


image     (1)


All empirical binary choice models, dynamic or static, are special cases of this formulation. The underlying behavioral model that generated the latent variable is dynamic if agents are forward looking and either image contains past choices, image, or unobservables, image, that are serially correlated.7 The underlying model is static (i) if agents are myopic or (ii) if agents are forward looking and there is no link among the past, current and future periods through image or serially correlated unobservables.
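The threshold rule in (1) can be sketched in a few lines (the names are hypothetical, and ties are broken toward alternative 0, which is innocuous when the unobservables are continuously distributed):

```python
import numpy as np

def choose(v_star):
    """Decision rule (1): pick alternative 1 iff the latent payoff
    difference crosses the zero threshold."""
    return (np.asarray(v_star) > 0).astype(int)

# hypothetical latent payoff differences for three agents
v_star = np.array([-0.3, 0.0, 0.7])
print(choose(v_star))  # -> [0 0 1]
```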

Researchers may have a number of different, though not necessarily mutually exclusive, goals. They include:

1. Test a prediction of the theory, that is, how an observable variable in image affects image.
2. Determine the effect of a change in image or image on choices (either within or outside of the sample variation).
3. Determine the effect of a change in something not in image or image on choices, that is, in something that does not vary in the sample.

It is assumed that these statements are ceteris paribus, not only in the sense of conditioning on the other observables, but also in conditioning on the unobservables and their joint conditional (on observables) distribution.8 Different empirical strategies, for example, structural or nonstructural, may be better suited for some of these goals than for others.

3 The Common Empirical Structure of Static and Dynamic Discrete Choice Models

In drawing out the connection between the structure of static and dynamic discrete choice models, it is instructive to consider an explicit example. We take as the prime motivating example one of the oldest and most studied topics in labor economics, the labor force participation of married women.9 We first illustrate the connection between research goals and empirical strategies in a static framework and then modify the model to allow for dynamics.

3.1 Married woman’s labor force participation

3.1.1 Static model

Consider the following static model of the labor force participation decision of a married woman. Assume a unitary model in which the couple’s utility is given by


image     (2)


where image is household image’s consumption at period image if the wife works and is equal to zero otherwise, image is the number of young children in the household, and image are other observable factors and image unobservable factors that affect the couple’s valuation of the wife’s leisure (or home production). In this context, image corresponds to the couple’s duration of marriage. The utility function has the usual properties: image.

The wife receives a wage offer of image in each period image and the husband, who is assumed to work each period, generates income image. If the wife works, the household incurs a per-child child-care cost, image, which is assumed to be time-invariant and the same for all households.10 The household budget constraint is thus


image     (3)


Wage offers are not generally observed for nonworkers. It is, thus, necessary to specify a wage offer function to carry out estimation. Let wage offers be generated by


image     (4)


where image are observable and image unobservable factors. image would conventionally contain educational attainment and “potential” work experience (age − education − 6). Unobservable factors that enter the couple’s utility function image and unobservable factors that influence the woman’s wage offer image are assumed to be mutually serially uncorrelated and to have joint distribution image.

Substituting (3) into (2) using (4) yields


image     (5)


from which we get alternative-specific utilities, image if the wife works and image if she does not, namely


image     (6)


The latent variable function, the difference in utilities, image, is thus given by


image     (7)


The participation decision is determined by the sign of the latent variable: image if image otherwise.

It is useful to distinguish the household’s state space, image, consisting of all of the determinants of the household’s decision, that is, image, from the part of the state space observable to the researcher, image, which consists only of image. Now, define image to be the set of values of the unobservables that enter the utility and wage functions that induce a couple with a given observable state space (image) to choose image. Then, the probability of choosing image, conditional on image, is given by


image     (8)


where image.

As is clear from (8), image is a composite of three elements of the model: image. These elements comprise the structure of the participation model. Structural estimation (S) is concerned with recovering some or all of the structural elements of the model. Nonstructural (NS) estimation is concerned with recovering image. In principle, each of these estimation approaches can adopt auxiliary assumptions in terms of parametric (P) forms for some or all of the structural elements or for image or be nonparametric (NP). Thus, there are four possible approaches to estimation: NP-NS, P-NS, NP-S and P-S.11

We now turn to a discussion about the usefulness of each of these approaches for achieving the three research goals mentioned above. The first research goal, testing the theory, requires that there be at least one testable implication of the model. From (6) and the properties of the utility function, it is clear that an increase in the wage offer increases the utility of working, but has no effect on the utility of not working. Thus, the probability of working for any given agent must be increasing in the wage offer. The second goal, to determine the impact of changing any of the state variables in the model on an individual’s participation probability, requires taking the derivative of the participation probability with respect to the state variable of interest. The third goal requires taking the derivative of the participation probability with respect to something that does not vary in the data. That role is played by the unknown child care cost parameter, image. Determining its impact would provide a quantitative assessment of the effect of a child care subsidy on a married woman’s labor force participation.12

Given the structure of the model, to achieve any of these goals, regardless of the estimation approach, it is necessary to adopt an assumption of independence between the unobservable factors affecting preferences and wage offers and the observable factors. Absent such an assumption, variation in the observables, image, either among individuals or over time for a given individual, would cause participation to differ both because of their effect on preferences and/or wage offers and because of their relationship to the unobserved determinants of preferences and/or wage offers through image. In what follows, we adopt the assumption of full independence, that is, image, so as not to unduly complicate the discussion.

Nonparametric, nonstructural

If we make no further assumptions, we can estimate image nonparametrically.

Goal 1: To accomplish the first goal, we need to be able to vary the wage offer independently of other variables that affect participation. To do that, there must be an exclusion restriction, in particular, a variable in image that is not in image. Moreover, determining the sign of the effect of a wage increase on the participation probability requires knowing the sign of the effect of the variable in image (not in image) on the wage. Of course, if we observed all wage offers, the wage would enter into the latent variable rather than the wage determinants (image and image) and the prediction of the theory could be tested directly without an exclusion restriction.

What is the value of such an exercise? Assume that the observation set is large enough that sampling error can be safely ignored and consider the case where all wage offers are observed. Suppose one finds, after nonparametric estimation of the participation probability function, that there is some “small” range of wages over which the probability of participation is declining as the wage increases. Thus, the theory is rejected by the data. Now, suppose we wanted to use the estimated participation probability function to assess the impact of a proportional wage tax on participation. This is easily accomplished by comparing the sample participation probability in the data with the participation probability that comes about by reducing each individual’s wage by the tax. Given that the theory is rejected, should we use the participation probability function for this purpose? Should our answer depend on the size of the range of wages over which the violation occurs? Should we add more image variables and retest the model? And, if the model is not rejected after adding those variables, should we then feel comfortable in using it for the tax experiment? If there are no ready answers to these questions in so simple a model, as we believe is the case, then how should we approach them in contexts where the model’s predictions are not so transparent and therefore for practical purposes untestable, as is normally the case in DCDP models? Are there other ways to validate models? We leave these as open questions for now, but return to them in the concluding section of the chapter.

Goal 2: Clearly, it is possible, given an estimate of image, to determine the effect on participation of a change in any of the variables within the range of the data. However, one cannot predict the effect of a change in a variable that falls outside of the range of the data.

Goal 3: It is not possible to separately identify image and image. To see that, note that because it is image that enters image, knowledge of image does not allow one to separately identify image and image. We thus cannot perform the child care subsidy policy experiment.

Parametric, Nonstructural

In this approach, one chooses a functional form for image. For example, one might choose a cumulative standard normal function in which the variables in image enter as a single index.
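As an illustration of the P-NS approach, a single-index probit fit by maximum likelihood to simulated participation data; the data generating process, regressors and parameter values are invented for the sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_nll(beta, X, d):
    """Negative log-likelihood of a single-index probit:
    Pr(d=1 | x) = Phi(x'beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)  # guard the logs
    return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()

# invented data generating process: participation driven by a wage-offer
# determinant z and the number of young children n_kids
rng = np.random.default_rng(1)
n_obs = 5_000
z, n_kids = rng.normal(size=n_obs), rng.poisson(1.0, size=n_obs)
X = np.column_stack([np.ones(n_obs), z, n_kids])
true_beta = np.array([0.2, 1.0, -0.5])
d = (X @ true_beta + rng.normal(size=n_obs) > 0).astype(int)

res = minimize(probit_nll, np.zeros(3), args=(X, d), method="BFGS")
print(res.x)  # close to true_beta in a sample this size
```

The fitted index permits extrapolation outside the sample range of the regressors, which is precisely what distinguishes this approach from the NP-NS one under Goal 2.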

Goal 1: As in the NP-NS approach, because of the partial observability of wage offers, testing the model’s prediction still requires an exclusion restriction, that is, a variable in image that is not in image.

Goal 2: It is possible, given an estimate of image, to determine the effect on participation of a change in any of the variables not only within, but also outside, the range of the data.

Goal 3: As in the NP-NS approach, it is not possible to separately identify image from variation in image because image enters image.

Nonparametric, Structural

In this approach, one would attempt to separately identify image from (8) without imposing auxiliary assumptions about those functions. This is clearly infeasible when wages are only observed for those who work. 13

Parametric, Structural

Given our taxonomy, there are many possible variations on which of the functions to give parametric forms, but working through those possibilities is too far removed from the aims of this chapter.14 We consider only the case in which all of the structural elements are parametric. Specifically, the structural elements are specified as follows:

image     (9)

image     (10)

image     (11)

image     (12)

where image.15 This specification of the model leads to a latent variable function, the difference in utilities, image, given by


image     (13)


where image and image now consists of image, image and image.16

The likelihood function, incorporating the wage information for those women who work, is


image     (14)


The parameters to be estimated include image, image, image, image, image, and image.17 First, it is not possible to separately identify the child care cost, image, from the effect of children on the utility of not working, image; only image is potentially identified. Joint normality is sufficient to identify the wage parameters, image and image, as well as image (Heckman, 1979). The data on work choices identify image and image. To identify image, note that there are three possible types of variables that appear in the likelihood function: variables that appear only in image, that is, only in the wage function; variables that appear only in image, that is, only in the value of leisure function; and variables that appear in both image and image. Having identified the parameters of the wage function (the image’s), the identification of image (and thus also image) requires the existence of at least one variable of the first type, that is, a variable that appears only in the wage equation.18
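A likelihood with this structure can be sketched with a linear-index selection model: work iff x'beta + eps > 0 (sd of eps normalized to one), a wage z'gamma + eta observed only for workers, and (eps, eta) bivariate normal. This is a simplified stand-in for the analytic likelihood described here, with all names and values hypothetical; the simulated design includes the exclusion restriction discussed above, since z1 enters the wage equation only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def selection_nll(theta, X, Z, d, w):
    """Negative log-likelihood: nonworkers contribute Pr(eps <= -x'beta);
    workers contribute the wage density times Pr(work | eta), using the
    conditional normality of eps given eta."""
    kx, kz = X.shape[1], Z.shape[1]
    beta, gamma = theta[:kx], theta[kx:kx + kz]
    sigma, rho = np.exp(theta[-2]), np.tanh(theta[-1])  # enforce bounds
    xb = X @ beta
    resid = (w - Z @ gamma) / sigma
    ll0 = norm.logcdf(-xb[d == 0]).sum()
    cond = (xb[d == 1] + rho * resid[d == 1]) / np.sqrt(1.0 - rho ** 2)
    ll1 = (norm.logpdf(resid[d == 1]) - np.log(sigma) + norm.logcdf(cond)).sum()
    return -(ll0 + ll1)

# simulate data with an exclusion restriction: z1 shifts the wage offer only
rng = np.random.default_rng(1)
n = 10_000
x1, z1 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
Z = np.column_stack([np.ones(n), z1])
rho_true, sigma_true = 0.5, 1.0
eps = rng.normal(size=n)
eta = sigma_true * (rho_true * eps + np.sqrt(1 - rho_true ** 2) * rng.normal(size=n))
d = (X @ [0.2, 1.0] + eps > 0).astype(int)
w = (Z @ [1.0, 0.5] + eta) * d          # wages observed only for workers

start = np.zeros(6)
start[2] = w[d == 1].mean()             # start the wage intercept sensibly
res = minimize(selection_nll, start, args=(X, Z, d, w), method="BFGS")
beta_hat, gamma_hat = res.x[:2], res.x[2:4]
sigma_hat, rho_hat = np.exp(res.x[4]), np.tanh(res.x[5])
print(beta_hat, gamma_hat, sigma_hat, rho_hat)
```

Note that the correlation between the preference and wage shocks is recovered from the same joint-normality assumption that identifies the wage parameters from the selected sample.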

Goal 1: As in the NS approaches, there must be an exclusion restriction, in particular, a variable in image that is not in image.

Goal 2: It is possible to determine the effect on participation of a change in any of the variables within and outside of the range of the data.

Goal 3: As noted, it is possible to identify image. Suppose then that a policy maker is considering implementing a child care subsidy program, where none had previously existed, in which the couple is provided a subsidy of image dollars if the wife works when there is a young child in the household. The policy maker would want to know the impact of the program on the labor supply of women and the program’s budgetary implications. With such a program, the couple’s budget constraint under the child care subsidy program is


image     (15)


where image is the net (of subsidy) cost of child care. With the subsidy, the probability that the woman works is


image     (16)


where image is the standard normal cumulative. Given identification of image from maximizing the likelihood (14), to predict the effect of the policy on participation, that is, the difference in the participation probability when image is positive and when image is zero, it is necessary, as seen in (16), to have identified image. Government outlays on the program would be equal to the subsidy amount times the number of women with young children who work under the subsidy.
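A stylized sketch of the subsidy counterfactual in (16): with hypothetical parameter values, the subsidy lowers the effective per-child cost of working, raising the participation probability, and the expected per-woman outlay is the subsidy times the post-subsidy participation probability. The linear value-of-leisure term below is an invented stand-in for the model's specification:

```python
import numpy as np
from scipy.stats import norm

def participation_prob(w, n, pi, subsidy, alpha0, alpha1, sigma):
    """Pr(work) = Phi(v*/sigma) in a stylized version of (16): the subsidy
    lowers the per-child cost of working from pi to pi - subsidy.
    All parameter names and values here are hypothetical."""
    v_star = w - (pi - subsidy) * n - (alpha0 + alpha1 * n)
    return norm.cdf(v_star / sigma)

# hypothetical estimates and a household with one young child
w, n = 10.0, 1
pi, alpha0, alpha1, sigma = 3.0, 5.0, 1.0, 2.0
p0 = participation_prob(w, n, pi, 0.0, alpha0, alpha1, sigma)
p1 = participation_prob(w, n, pi, 1.5, alpha0, alpha1, sigma)
outlay = 1.5 * n * p1  # expected outlay: subsidy paid only if she works
print(p0, p1, outlay)
```

The comparison of p0 and p1 is exactly the policy effect estimated without direct policy variation: wage variation pins down the response to the child care cost, and the subsidy simply shifts that cost.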

It is important to note that the policy effect is estimated without direct policy variation, i.e., we did not need to observe households in both states of the world, with and without the subsidy program. What was critical for identification was (exogenous) variation in the wage (independent of preferences). Wage variation is important in estimating the policy effect because, in the model, the child care cost is a tax on working that is isomorphic to a tax on the wage. Wage variation, independent of preferences, provides policy-relevant variation.

To summarize, testing the prediction that participation rises with the wage offer requires an exclusion restriction regardless of the approach. This requirement arises because of the non-observability of wage offers for those who choose not to work.19 With regard to the second goal, the parametric approach allows extrapolation outside of the sample range of the variables whereas nonparametric approaches do not. Finally, subject to identification, the P-S approach enables the researcher to perform counterfactual exercises, subsidizing the price of child care in the example, even in the absence of variation in the child care price.20

3.1.2 Dynamic model

In the previously specified static model, there was no connection between the current participation decision and future utility. One way, among many, to introduce dynamic considerations is through human capital accumulation on the job. In particular, suppose that the woman’s wage increases with actual work experience, image, as skills are acquired through learning by doing. To capture that, rewrite (11) as


image     (17)


where image is work experience at the start of period image. Given this specification, working in any period increases all future wage offers. Work experience, image, evolves according to


image     (18)


where image.21 Thus, at any period image, the woman may have accumulated up to image periods of work experience. We will be more specific about the evolution of the other state space elements when we work through the solution method below. For now, we assume only that their evolution is non-stochastic.

Normally distributed additive shocks

As in the static model, and again for presentation purposes, we assume that the preference shock image and the wife’s wage shock (image) are distributed joint normal. In addition, we assume that they are mutually serially independent and independent of observables, that is, image.

Assume, in this dynamic context, that the couple maximizes the expected present discounted value of remaining lifetime utility at each period starting from an initial period, image, and ending at period image, the assumed terminal decision period.22,23 Letting image be the maximum expected present discounted value of remaining lifetime utility at image given the state space and discount factor image,


image     (19)


The state space at image consists of the same elements as in the static model augmented to include the amount of accumulated work experience, image.

The value function image can be written as the maximum over the two alternative-specific value functions, image, image


image     (20)


each of which obeys the Bellman equation


image     (21)


The expectation in (21) is taken over the distribution of the random components of the state space at image and image, conditional on the state space elements at image.

The latent variable in the dynamic case is the difference in alternative-specific value functions, image, namely24

image     (22)

image     (23)25

Comparing the latent variable functions in the dynamic (22) and static (13) cases, the only difference is the appearance in the dynamic model of the difference in the future component of the expected value functions under the two alternatives. This observation was a key insight in the development of estimation approaches for DCDP models.

To calculate these alternative-specific value functions, note first that image, the observable part of the state space at image, is fully determined by image and the choice at image. Thus, one needs to be able to calculate image at all values of image that may be reached from the state space elements at image and a choice at image. A full solution of the dynamic programming problem consists, then, of finding image for all values of image at all image. We denote this function by image or image for short.

In the finite horizon model we are considering, the solution method is by backwards recursion. However, there are a number of additional details about the model that must first be addressed. Specifically, it is necessary to assume something about how the exogenous observable state variables evolve, that is, image.26 For ease of presentation, to avoid having to specify the transition processes of the exogenous state variables, we assume that image and image.

The number of young children, however, is obviously not constant over the life cycle. But, after the woman reaches the end of her fecund period, the evolution of image is non-stochastic.27 To continue the example, we restrict attention to the woman’s post-fecund period. Thus, during that period image is perfectly foreseen, although the future path of image at any image depends on the exact ages of the young children in the household at image.28 Thus, the ages of existing young children at image are elements of the state space at image.

As seen in (21), to calculate the alternative-specific value functions at period image for each element of image, we need to calculate what we have referred to above as image. Using the fact that, under normality, image and image, we get


image     (24)29


Note that evaluating this expression requires an integration (the normal cdf) which has no closed form; it thus must be computed numerically. The right hand side of (24) is a function of image and image.30 Given a set of model parameters, the image function takes on a scalar value for each element of its arguments. Noting that image, and being explicit about the elements of image, the alternative-specific value functions at image are (dropping the image subscript for convenience):

image     (25)

image     (26)

Thus,

image     (27)

image     (28)

As before, because image enters both image and image additively, it drops out of image and thus out of image.31
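Under bivariate normal additive shocks, the expected maximum of the two alternative-specific values can be written in terms of normal cdf and pdf evaluations (Clark's formula), which is the kind of expression computed at each state point in the recursion. A sketch with a Monte Carlo check, using illustrative values:

```python
import numpy as np
from scipy.stats import norm

def emax_normal(a, b, s1, s0, rho):
    """E[max(a + e1, b + e0)] for (e1, e0) bivariate normal with sds s1, s0
    and correlation rho: Clark's exact formula in terms of the normal
    cdf and pdf."""
    s = np.sqrt(s1 ** 2 + s0 ** 2 - 2 * rho * s1 * s0)  # sd of e1 - e0
    z = (a - b) / s
    return a * norm.cdf(z) + b * norm.cdf(-z) + s * norm.pdf(z)

# Monte Carlo check at illustrative values
rng = np.random.default_rng(2)
a, b, s1, s0, rho = 1.0, 0.4, 1.0, 0.8, 0.3
cov = [[s1 ** 2, rho * s1 * s0], [rho * s1 * s0, s0 ** 2]]
e = rng.multivariate_normal([0.0, 0.0], cov, size=300_000)
analytic = emax_normal(a, b, s1, s0, rho)
mc = np.maximum(a + e[:, 0], b + e[:, 1]).mean()
print(analytic, mc)
```

The "integration with no closed form" referred to in the text is the normal cdf itself; given standard routines for it, the expression above is evaluated directly, once per point in the state space.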

To calculate the image alternative-specific value functions, we will need to calculate image. Following the development for period image,


image     (29)


The right hand side of (29) is a function of image and image. As with image, given a set of model parameters, the image function takes on a scalar value for each element of its arguments. Noting that image, the alternative-specific value functions at image and the latent variable function are given by

image     (30)

image     (31)

image     (32)

image     (33)

As at image, image drops out of image and thus of image.

We can continue to solve backwards in this fashion. The full solution of the dynamic programming problem is the set of image functions for all image from image. These image functions provide all of the information necessary to calculate the cut-off values, the image’s that are the inputs into the likelihood function.
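To make the backward recursion concrete, the following sketch solves a stripped-down version of the binary participation model with an additive normal shock. All parameter values and the linear wage function are purely illustrative assumptions, not specifications from the chapter; the recursion computes the Emax functions and the cut-off values at every experience level and period, exactly as described above.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameter values (assumptions, not estimates)
T, delta, sigma = 10, 0.95, 1.0
y, b = 2.0, 1.5            # husband's income, value of leisure
g0, g1 = 1.0, 0.05         # wage intercept and return to experience (in levels)

# Emax[t, h]: expected maximum of the two alternative-specific value functions
Emax = np.zeros((T + 2, T + 1))
cutoff = np.zeros((T + 1, T + 1))  # eps* such that the wife works iff eps > eps*

for t in range(T, 0, -1):          # backward recursion from the terminal period
    for h in range(t):             # experience cannot exceed t - 1
        wbar = g0 + g1 * h                                 # mean wage offer
        v1 = y + wbar + delta * Emax[t + 1, h + 1]         # work (mean part)
        v0 = y + b + delta * Emax[t + 1, h]                # not work
        cutoff[t, h] = v0 - v1
        z = cutoff[t, h] / sigma
        # E[max(v1 + eps, v0)] under eps ~ N(0, sigma^2), as in (24)
        Emax[t, h] = v0 * norm.cdf(z) + v1 * (1 - norm.cdf(z)) + sigma * norm.pdf(z)

# Participation probability at (t, h) is 1 - Phi(cutoff[t, h] / sigma)
```

Note that the cut-off at any period before the last lies below the static cut-off: working today raises future wages through experience, which lowers the shock needed to induce participation.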

Estimation of the dynamic model requires that the researcher have data on work experience, image. More generally, assume that the researcher has longitudinal data for image married couples and denote by image and image the first and last periods of data observed for married couple image. Note that image need not be the first period of marriage (although it may be, subject to the marriage occurring after the woman’s fecund period) and image need not be the last (although it may be). Denoting image as the vector of model parameters, the likelihood function is given by


image     (34)


where image and image.32

Given joint normality of image and image, the likelihood function is analytic, namely


image     (35)


where image and where image is the correlation coefficient between image and image.33 Estimation proceeds by iterating between the solution of the dynamic programming problem and the likelihood function for alternative sets of parameters. Maximum likelihood estimates are consistent, asymptotically normal and efficient.

Given the solution of the dynamic programming problem for the cut-off values, the image’s, the estimation of the dynamic model is in principle no different than the estimation of the static model. However, the dynamic problem introduces an additional parameter, the discount factor, image, and additional assumptions about how households forecast future unobservables.34 The practical difference in terms of implementation is the computational effort of having to solve the dynamic programming problem in each iteration on the model parameters in maximizing the likelihood function.
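The iteration between the dynamic programming solution and the likelihood can be sketched end-to-end in a simplified setting with a single normal shock, so that the likelihood is a product of participation probabilities. All parameter values are illustrative assumptions; the point is that the inner function re-solves the DP problem at every trial value of the parameter (here, only the value of leisure) before the likelihood is evaluated.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Illustrative parameters (a short horizon keeps the DP cheap)
T, delta, sigma = 5, 0.95, 1.0
y, g0, g1 = 2.0, 1.0, 0.05
b_true = 1.5                       # value of leisure, to be recovered by ML

def solve_cutoffs(b):
    """Backward recursion: eps*(t, h) such that the wife works iff eps > eps*."""
    Emax = np.zeros((T + 2, T + 1))
    cut = np.zeros((T + 1, T + 1))
    for t in range(T, 0, -1):
        for h in range(t):
            v1 = y + g0 + g1 * h + delta * Emax[t + 1, h + 1]   # work
            v0 = y + b + delta * Emax[t + 1, h]                 # not work
            cut[t, h] = v0 - v1
            z = cut[t, h] / sigma
            Emax[t, h] = v0 * norm.cdf(z) + v1 * (1 - norm.cdf(z)) \
                + sigma * norm.pdf(z)
    return cut

def simulate(b, n=2000):
    cut = solve_cutoffs(b)
    rows = []
    for _ in range(n):
        h = 0
        for t in range(1, T + 1):
            d = int(rng.normal(0.0, sigma) > cut[t, h])
            rows.append((t, h, d))
            h += d                  # experience accumulates with work
    return np.array(rows).T

t_arr, h_arr, d_arr = simulate(b_true)

def negloglik(b):
    cut = solve_cutoffs(b)          # the DP is re-solved at every trial value
    p_work = 1 - norm.cdf(cut[t_arr, h_arr] / sigma)
    return -np.sum(np.log(np.where(d_arr == 1, p_work, 1 - p_work)))

res = minimize_scalar(negloglik, bounds=(0.5, 3.0), method="bounded")
b_hat = res.x
```

With correctly specified simulated data the maximum likelihood estimate recovers the true value of leisure closely, illustrating the consistency claim in the text.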

Identification of the model parameters requires the same exclusion restriction as in the static case, that is, the appearance of at least one variable in the wage equation that does not affect the value of leisure. Work experience, image, would serve that role if it does not also enter into the value of leisure image. A heuristic argument for the identification of the discount factor can be made by noting that the difference in the future component of the expected value functions under the two alternatives in (22) is in general a nonlinear function of the state variables and depends on the same set of parameters as in the static case. Rewriting (22) as


image     (36)


where image is the difference in the future component of the expected value functions, the nonlinearities in image that arise from the distributional and functional form assumptions may be sufficient to identify the discount factor.35

As in the static model, identification of the model parameters implies that all three research goals previously laid out can be met. In particular, predictions of the theory are testable, the effects on participation of changes in observables that vary in the sample are estimable and a quantitative assessment of the counterfactual child care subsidy is feasible. The effect of such a subsidy will differ from that in a static model as any effect of the subsidy on the current participation decision will be transmitted to future participation decisions through the change in work experience and thus future wages. If a surprise (permanent) subsidy were introduced at some time image, the effect of the subsidy on participation at image would require that the couple’s dynamic programming problem be re-solved with the subsidy in place from image to image and the solution compared to that without the subsidy. A pre-announced subsidy to take effect at image would require that the solution be obtained back to the period of the announcement because, given the dynamics, such a program would have effects on participation starting from the date of the announcement.36

Independent additive type-1 extreme value errors

When shocks are additive and come from independent type-1 extreme value distributions, as first noted by Rust (1987), the solution to the dynamic programming problem and the choice probability both have closed forms, that is, they do not require a numerical integration as in the additive normal error case. The cdf of an extreme value random variable image is image with mean equal to image, where image is Euler’s constant, and variance image.

Under the extreme value assumption, it can be shown that for period image (dropping the image subscript for convenience),

image     (37)

image     (38)

and for image,

image     (39)

image     (40)

where image denotes the vector of image values. The solution, as in the case of normal errors, consists of calculating the image functions by backwards recursion. As seen, unlike the case of normal errors, the image functions and the choice probabilities have closed form solutions; their calculation does not require a numerical integration.
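The closed forms under the extreme value assumption take the familiar log-sum-exp and logistic shapes. The sketch below (with illustrative values) implements the closed-form Emax for two alternatives and verifies it against a Monte Carlo average of the realized maximum of Gumbel-shocked values.

```python
import numpy as np

euler_gamma = 0.5772156649015329   # Euler's constant

# Closed-form Emax under iid type-1 extreme value shocks (scale normalized to 1):
# E[max_j (v_j + eps_j)] = euler_gamma + log(sum_j exp(v_j))
def emax_ev(v):
    m = np.max(v)
    return euler_gamma + m + np.log(np.sum(np.exp(v - m)))

# The choice probabilities are logistic in the deterministic values
def choice_probs(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

# Monte Carlo check of the closed form (values v are purely illustrative)
rng = np.random.default_rng(1)
v = np.array([1.0, 0.3])
eps = rng.gumbel(loc=0.0, scale=1.0, size=(200_000, 2))
mc = np.mean(np.max(v + eps, axis=1))
```

In the dynamic model these two functions replace the numerical integrations of the normal case inside the same backward recursion, which is the computational gain the text refers to.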

The extreme value assumption is, however, somewhat problematic in the labor force participation model as structured. For there to be a closed form solution to the DCDP problem, the scale parameter (image), and thus the error variance, must be the same for both the preference shock and the wage shock, a rather strong restriction that is unlikely to hold. The root of the problem is that the participation decision rule depends on the wage shock. Suppose, however, that the participation model was modified so that the decision rule no longer included a wage shock. Such a modification could be accomplished in two ways: either by assuming that the wife’s wage offer is not observed at the time the participation decision is made, or by assuming that the wage is deterministic (but varies over time and across women due to measurement error). In the former case, the wage shock is integrated out in calculating the expected utility of working, while in the latter there is no wage shock entering the decision problem. Then, by adding an independent type-1 extreme value error to the utility when the wife works, the participation decision rule will depend on the difference in two extreme value taste errors, which leads to the closed form expressions given above.

In either case, there is no longer a selection issue with respect to observed wages. Because the observed wage shock is independent of the participation decision, the wage parameters can be estimated by adding the wage density to the likelihood function for participation, and any distributional assumption, such as log normality, can be adopted. In addition, as in the case of normal errors, identification of the wage parameters, together with the exclusion restriction already discussed, implies identification of the rest of the model parameters (including the scale parameter). Thus, the three research goals are achievable. Whether the model assumptions necessary to take advantage of the computational gains from adopting the extreme value distribution are warranted raises the issue of how models should be judged and which model is “best,” a subject we take up later in the chapter.

Unobserved state variables

We have already encountered unobserved state variables in the labor force participation model, namely the stochastic elements image in image that affect current choices. However, there may be unobserved state variables that have persistent effects through other mechanisms. Such a situation arises, for example, when the distribution of image is not independent of past shocks, that is, when image.

A specific example, commonly adopted in the literature, is when shocks have a permanent-transitory structure. For reasons of tractability, it is often assumed that the permanent component takes on a discrete number of values and follows a joint multinomial distribution. Specifically,

image     (41)

image     (42)

where there are image types each of husbands image and wives image, and thus image couple types and where image and image are joint normal and iid over time.37 Each wife’s type is assumed to occur with probability image and each husband’s type with probability image, with image for image. A couple’s type is defined by their value of image, where the probability of a couple being of type image is given by image, with image.38 A couple is assumed to know their own and their spouse’s type, so the state space is augmented by the husband’s and wife’s type. Even though types are not known to the researcher, it is convenient to add them to the state variables in what we previously defined as the observable elements of the state space, image. The reason is that, unlike the iid shocks image and image, which do not enter the image functions (they are integrated out), the types do enter the image functions. The dynamic programming problem must be solved for each couple’s type.

The likelihood function must also be modified to account for the fact that the types are unobserved. In particular, letting image be the likelihood function for a type image couple, the sample likelihood is the product over individuals of the type probability weighted sum of the type-specific likelihoods, namely


image     (43)
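The sample likelihood in (43), a product over couples of type-probability-weighted type-specific likelihoods, is best computed on the log scale to avoid underflow. A minimal sketch, with randomly generated stand-ins for the type-specific log-likelihood contributions (in a real application each would come from solving the DP problem for that type):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 5, 3                        # couples and unobserved types (illustrative)
pi = np.array([0.5, 0.3, 0.2])     # type probabilities, summing to one

# loglik_ik: log of the type-k likelihood contribution for couple i
loglik_ik = rng.normal(-10.0, 1.0, size=(n, K))

# log L = sum_i log( sum_k pi_k * exp(loglik_ik) ), computed stably by
# factoring out each couple's largest term before exponentiating
m = loglik_ik.max(axis=1, keepdims=True)
log_L_i = m.squeeze(1) + np.log(np.exp(loglik_ik - m) @ pi)
log_L = log_L_i.sum()
```

The max-shift is innocuous here but essential in practice, where per-couple log-likelihoods over many periods are far too negative to exponentiate directly.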


A second example is where the joint errors follow an ARIMA process. To illustrate, suppose that the errors follow a first-order autoregressive process, namely that image and image, where image and image are joint normal and iid over time. Consider again the alternative-specific value functions at image, explicitly accounting for the evolution of the shocks, namely


image     (44)


where the integration is now taken over the joint distribution of image and image. To calculate the alternative-specific value function at image, it is necessary that the image function include not only image, as previously specified, but also the shocks at image and image. Thus, serial correlation augments the state space that enters the image functions. The added complication is that these state space elements, unlike those we have so far considered, are continuous variables, an issue we discuss later. The likelihood function is also more complicated to calculate as it requires an integration for each couple of dimension equal to the number of observation periods (and there are two additional parameters, image and image).39

The existence of unobserved state variables also creates a potentially difficult estimation issue with respect to the treatment of initial conditions (Heckman, 1981). Because we have restricted the model to the period starting at the time the wife is no longer fecund, most women will by that time have accumulated some work experience, i.e., image will not be zero and will vary in the estimation sample. Our estimation discussion implicitly assumed that the woman’s “initial” work experience, that is, work experience at image, could be treated as exogenous, that is, as uncorrelated with the stochastic elements of future participation decisions. When there are unobserved initial state variables, permanent types or serially correlated shocks, this assumption is unlikely to hold.

Although we have not specified the labor force participation model governing decisions prior to this period (to avoid accounting for fertility decisions), it is reasonable to suppose that women who worked more while of childbearing age come from a different type distribution than women who worked less. Similarly, when shocks are serially correlated, women with greater work experience during the childbearing period may have experienced shocks (to wages or preferences) that are correlated with those that arise afterwards. Put differently, it seems much more reasonable to assume that the same model governs the participation decision during pre- and post-childbearing ages than to assume that there are two different models in which decisions across those periods are stochastically independent (conditional on observables).

There are several possible solutions to the initial conditions problem. Suppose for the sake of exposition, though unrealistically, that all women begin marriage with zero work experience.40 At the time of marriage, in the case of permanent unobserved heterogeneity, the couple is assumed to be “endowed” with a given set of preferences. A couple who intrinsically places a low value on the wife’s leisure will be more likely to choose to have the wife work and thus accumulate work experience. Such women will have accumulated more work experience upon reaching the end of their childbearing years than women in marriages where the wife’s value of leisure is intrinsically greater. Thus, when the end of the childbearing years are reached, there will be a correlation between the accumulated work experience of wives and the preference endowment, or type, of couples.

Suppose that participation decisions during the childbearing years were governed by the same behavioral model (modified to account for fertility) as those during the infecund observation period. In particular, suppose that given a couple’s type, all shocks (the image’s in (41) and (42)) are iid. In that case, work experience can be taken as exogenous conditional on a couple’s type. To condition the likelihood (43) on initial experience, we specify a type probability function conditional on work experience at the beginning of the infecund period. Specifically, we would replace image, taken to be scalar parameters in the likelihood function (43), with the type probability function image, where, as previously defined, image is the first (post-childbearing) period observed for couple image.41

The type probability function can itself be derived using Bayes’ rule starting from the true initial decision period (taken to be the start of marriage in this example). Specifically, denoting the couple’s endowment pair image as “type” and dropping the image subscript, because

image     (45)

image     (46)

the type probability function is


image     (47)


Estimating the type probability function image as a nonparametric function of image provides an “exact” solution (subject to sampling error) to the initial conditions problem, yielding type probabilities for each level of experience that would be the same as those obtained if we had solved and estimated the model back to the true initial period and explicitly used (47). Alternatively, because the type probabilities must also be conditioned on all other exogenous state variables (the image and image variables), perhaps making nonparametric estimation infeasible, estimating a flexible functional form would provide an “approximate” solution.
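One flexible functional form for the type-probability function is a multinomial logit in initial experience. The parameterization below is entirely hypothetical (the coefficients are not estimates and the single conditioning variable stands in for the full set of exogenous state variables), but it shows the mechanics: probabilities sum to one at every experience level, and types that value leisure less can be made more likely at high initial experience.

```python
import numpy as np

# Hypothetical multinomial-logit type-probability function pi_k(h0),
# with type 1 as the base category (a[0] = b[0] = 0)
def type_probs(h0, a, b):
    u = a + b * h0
    e = np.exp(u - u.max())        # shift for numerical stability
    return e / e.sum()

a = np.array([0.0, -0.5, -1.0])    # illustrative intercepts
b = np.array([0.0, 0.10, 0.25])    # types 2 and 3 more likely at high experience

p_low = type_probs(0.0, a, b)      # type distribution at zero initial experience
p_high = type_probs(10.0, a, b)    # type distribution at high initial experience
```

In estimation these logit parameters would simply replace the unconditional type probabilities in the mixture likelihood (43).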

If the shocks are serially correlated, work experience at the start of the infecund period is correlated with future choices not only because it affects future wages, but also because of the correlation of stochastic shocks across fecund and infecund periods. In that case, as suggested by Heckman (1981) in a nonstructural setting, we would need to have data on exogenous initial conditions at the time of the true initial period (taken here to be the start of marriage), when the labor supply decision process is assumed to begin. Given that, we can specify a density for work experience as a function of those exogenous initial conditions at the start of marriage and incorporate it in the likelihood function.42

The curse of dimensionality

As we have seen, the solution of the dynamic programming problem required that the image functions be calculated for each point in the state space. If image and image take on only a finite number of discrete values (e.g., years of schooling, number of children), as does image, the solution method simply involves solving for the image functions at each point in the state space. However, if either image or image contains a continuous variable (or if the shocks follow an ARIMA process, as already discussed), the dimensionality of the problem is infinite and one obviously cannot solve the dynamic programming problem at every state point. Furthermore, one could imagine making the model more complex in ways that would increase the number of state variables and hence the size of the state space, for example, by letting the vector of taste shifters image include not just number of children but the number of children in different age ranges. In general, in a finite state space problem, the size of the state space grows exponentially with the number of state variables. This is the so-called curse of dimensionality, first associated with Bellman (1957).

Estimation requires that the dynamic programming problem be solved many times—once for each trial parameter vector that is considered in the search for the maximum of the likelihood function (and perhaps at many nearby parameter vectors, to obtain gradients used in a search algorithm). This means that an actual estimation problem will typically involve solving the DP problem thousands of times. Thus, from a practical perspective, it is necessary that one be able to obtain a solution rather quickly for estimation to be feasible. In practice, there are two main ways to do this. One is just to keep the model simple so that the state space is small. But, this precludes studying many interesting problems in which there are a large set of choices that are likely to be interrelated (for example, choices of fertility, labor supply, schooling, marriage and welfare participation).

A second approach, which a number of researchers have pursued in recent years, is to abandon “exact” solutions to DP problems in favor of approximate solutions that can be obtained with greatly reduced computational time. There are three main approximate solution methods that have been discussed in the literature:43

1. Discretization: This approach is applicable when the state space is large due to the presence of continuous state variables. The idea is straightforward: simply discretize the continuous variables and solve for the image functions only on the grid of discretized values. To implement this method one must either (i) modify the law of motion for the state variables so they stay on the discrete grid (e.g., one might work with a discrete AR(1) process) or (ii) employ a method to interpolate between grid points. Clearly, the finer the discretization, the closer the approximation will be to the exact solution. Discretization does not formally break the curse of dimensionality because the time required to compute an approximate solution still increases exponentially as the number of state variables increases. But it can be an effective way to reduce computation time in a model with a given number of state variables.
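One widely used way to construct a discrete grid and law of motion for a continuous AR(1) shock is Tauchen's method (not discussed in the chapter; the implementation below is a standard textbook version). Each continuous transition density is replaced by bin probabilities on an evenly spaced grid, with the end bins absorbing the tails.

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, rho, sigma, m=3.0):
    """Discretize eps' = rho*eps + u, u ~ N(0, sigma^2), onto an n-point grid
    spanning m unconditional standard deviations (Tauchen's method)."""
    std = sigma / np.sqrt(1.0 - rho**2)          # unconditional std. deviation
    grid = np.linspace(-m * std, m * std, n)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        mu = rho * grid[i]                       # conditional mean from state i
        upper = norm.cdf((grid + step / 2 - mu) / sigma)
        lower = norm.cdf((grid - step / 2 - mu) / sigma)
        upper[-1], lower[0] = 1.0, 0.0           # end bins absorb the tails
        P[i] = upper - lower
    return grid, P

grid, P = tauchen(n=7, rho=0.9, sigma=0.1)       # illustrative values
```

A finer grid (larger n) brings the discrete chain closer to the continuous process, at the cost of a larger state space, which is exactly the trade-off described in the text.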

2. Approximation and interpolation of the image functions: This approach was originally proposed by Bellman et al. (1963) and extended to the type of models generally of interest to labor economists by Keane and Wolpin (1994). It is applicable when the state space is large either due to the presence of continuous state variables or because there are a large number of discrete state variables (or both). In this approach the image functions are evaluated at a subset of the state points and some method of interpolation is used to evaluate image at other values of the state space. This approach requires that the image interpolating functions be specified parametrically. For example, they might be specified as some regression function in the state space elements or as some other approximating function such as a spline. Using the estimated values of the image rather than the true values is akin to having a nonlinear model with specification error. The degree of approximation error is, however, subject to control. In a Monte Carlo study, Keane and Wolpin (1994) provide evidence on the effect of this approximation error on the bias of the estimated model parameters under alternative interpolating functions and numbers of state points. Intuitively, as the subset of the state points that are chosen is enlarged and the dimension of the approximating function is increased, the approximation will converge to the true solution.44

As with discretization, the approximation/interpolation method does not formally break the curse of dimensionality, except in special cases. This is because the curse of dimensionality applies to polynomial approximation (see Rust (1997)). As the number of state variables grows larger, the computation time needed to attain a given accuracy in a polynomial approximation to the Emax function grows exponentially.45 Despite this, the Keane and Wolpin (1994) approach (as well as some closely related variants) has proven to be a useful way to reduce computation time in models with large state spaces, and it has been widely applied in recent years. Rather than describe the method in detail here, we will illustrate the method later in a specific application.
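The mechanics of the approximation/interpolation step can be shown in miniature. Below, a closed-form static Emax over a two-dimensional state stands in for the true dynamic Emax (all functional forms and values are illustrative assumptions): Emax is evaluated exactly at a random subset of state points, a quadratic regression in the state variables is fit to those values, and the fitted function interpolates Emax over the full state space.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# "True" Emax over a two-dimensional state (h, k): static max of an alternative
# with a normal shock against a certain alternative (illustration only)
def emax_true(h, k, sigma=1.0):
    v1 = 1.0 + 0.05 * h - 0.1 * k          # e.g., mean value of working
    v0 = 1.5                               # value of not working
    z = (v0 - v1) / sigma
    return v0 * norm.cdf(z) + v1 * (1 - norm.cdf(z)) + sigma * norm.pdf(z)

# Full state space: 31 x 11 = 341 points; Emax is computed at only 60 of them
H, K = np.meshgrid(np.arange(31), np.arange(11), indexing="ij")
states = np.column_stack([H.ravel(), K.ravel()]).astype(float)
subset = rng.choice(len(states), size=60, replace=False)
y = emax_true(states[subset, 0], states[subset, 1])

# Interpolating function: quadratic regression in the state variables
def design(s):
    h, k = s[:, 0], s[:, 1]
    return np.column_stack([np.ones_like(h), h, k, h * k, h**2, k**2])

coef, *_ = np.linalg.lstsq(design(states[subset]), y, rcond=None)
emax_hat = design(states) @ coef
err = np.abs(emax_hat - emax_true(states[:, 0], states[:, 1]))
```

In an actual DCDP solution this fit would be redone at every period of the backward recursion, with the interpolated Emax feeding the continuation values one period earlier.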

3. Randomization: This approach was developed by Rust (1997). It is applicable when the state space is large due to the presence of continuous state variables, but it requires that choice variables be discrete and that state variables be continuous. It also imposes important constraints on how the state variables may evolve over time. Specifically, Rust (1997) shows that solving a random Bellman equation can break the curse of dimensionality in the case of DCDP models in which the state space is continuous and evolves stochastically, conditional on the alternative chosen. Note that because work experience is discrete and evolves deterministically in the labor force participation model presented above, this method does not strictly apply. But, suppose instead that we modeled work experience as a continuous random variable with density function image where image is a random variable indicating the extent to which working probabilistically augments work experience or not working depletes effective work experience (due to depreciation of skills). The random Bellman equation (ignoring image and image), the analog of (20), is in that case given by


image     (48)


where image are image randomly drawn state space elements. The approximate value function image converges to image as image at a image rate. Notice that this is still true if image is a vector of state variables, regardless of the dimension of the vector. Thus, the curse of dimensionality is broken here, exactly analogously to the way that simulation breaks the curse of dimensionality in approximation of multivariate integrals (while discretization methods and quadrature do not). 46

The above approach only delivers a solution for the value functions on the grid image. But forming a likelihood will typically require calculating value functions at other points. A key point is that image is, in Rust’s terminology, self-approximating. Suppose we wish to construct the alternative specific value function image at a point image that is not part of the grid image. Then we simply form:


image     (49)


Notice that, because any state space element at image can be reached from any element at image with some probability given by image, the value function at image can be calculated from (49) at any element of the state space at image. In contrast to the methods of approximation described above, the value function does not need to be interpolated using an auxiliary interpolating function.47 This “self-interpolating” feature of the random Bellman equation is also crucial for breaking the curse of dimensionality (which, as noted above, plagues interpolation methods).

Of course, the fact that the randomization method breaks the curse of dimensionality does not mean it will outperform other methods in specific problems. That the method breaks the curse of dimensionality is a statement about its behavior under the hypothetical scenario of expanding the number of state variables. For any given application with a given number of state variables, it is an empirical question whether a method based on discretization, approximation/interpolation or randomization will produce a more accurate approximation in given computation time.48 Obviously more work is needed on comparing alternative approaches.49
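The randomization approach can be sketched for an infinite-horizon version of the modified participation model, in which effective experience is a continuous state on [0,1]. All functional forms and values below are assumptions for illustration: the law of motion is a truncated normal that drifts up with work and down without it, the wage equals effective experience, and the transition weights are self-normalized over the random grid (a standard practical device that makes the random operator a contraction). The last lines show the self-approximating evaluation at an off-grid point, as in (49).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

delta, sigma_x, b, N = 0.9, 0.1, 0.4, 200
grid = rng.uniform(0.0, 1.0, size=N)      # random grid on the unit state space

def trans_density(s_next, s, d):
    # Truncated-normal law of motion on [0,1]: working (d=1) tends to raise
    # effective experience, not working to deplete it (all values assumed)
    mu = np.clip(s + (0.10 if d else -0.05), 0.0, 1.0)
    dens = norm.pdf((s_next - mu) / sigma_x) / sigma_x
    return dens / (norm.cdf((1.0 - mu) / sigma_x) - norm.cdf(-mu / sigma_x))

def flow(s, d):
    return s if d else b                  # wage offer set equal to experience

# Self-normalized Monte Carlo weights over the random grid, one row per state
W = {d: np.array([trans_density(grid, s, d) for s in grid]) for d in (0, 1)}
W = {d: w / w.sum(axis=1, keepdims=True) for d, w in W.items()}

def bellman(V):                           # random Bellman operator (a contraction)
    return np.maximum(b + delta * (W[0] @ V), grid + delta * (W[1] @ V))

V = np.zeros(N)
for _ in range(300):                      # successive approximation to the fixed point
    Vnew = bellman(V)
    done = np.max(np.abs(Vnew - V)) < 1e-6
    V = Vnew
    if done:
        break

def value_at(V, s):
    """Self-approximating evaluation at an arbitrary off-grid point s."""
    vals = []
    for d in (0, 1):
        w = trans_density(grid, s, d)
        vals.append(flow(s, d) + delta * (V @ (w / w.sum())))
    return max(vals)

v_off = value_at(V, 0.5)
```

Nothing in the code changes if the state is a vector rather than a scalar, other than the transition density; that invariance is what breaking the curse of dimensionality means here.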

3.2 The multinomial dynamic discrete choice problem

The structure of the labor force decision problem described above was kept simple to provide an accessible introduction to the DCDP methodology. In this section, we extend that model to allow for:

(i) additional choices;
(ii) nonadditive errors;
(iii) general functional forms and distributional assumptions.

The binary choice problem considers two mutually exclusive alternatives; the multinomial problem considers more than two. The treatment of static multinomial choice problems is standard. The dynamic analog to the static multinomial choice problem is conceptually no different than in the binary case. In terms of its representation, it does no injustice to simply allow the number of mutually exclusive alternatives, and thus the number of alternative-specific value functions in (21), to be greater than two. Analogously, if there are image mutually exclusive alternatives, there will be image latent variable functions (relative to one of the alternatives, arbitrarily chosen). The static multinomial choice problem raises computational issues with respect to the calculation of the likelihood function. Having to solve the dynamic multinomial choice problem, that is, for the image function that enters the multinomial version of (21) at all values of image and at all image, adds significant computational burden.

For concreteness, we consider the extension of DCDP models to the case with multiple discrete alternatives by augmenting the dynamic labor force participation model to include a fertility decision in each period so that the model can be extended to childbearing ages. In addition, to capture the intensive work margin, we allow the couple to choose among four labor force alternatives for the wife. We also drop the assumption that errors are additive and normal. In particular, in the binary model we assumed, rather unconventionally, that the wage has an additive error in levels. The usual specification (based on both human capital theory and on empirical fit) is that the log wage has an additive error.50 Although it is necessary to impose functional form and distributional assumptions to solve and estimate DCDP models, it is not necessary to do so to describe solution and estimation procedures. We therefore do not impose such assumptions, reflecting the fact that the researcher is essentially unconstrained in the choice of parametric and distributional assumptions (subject to identification considerations).

The following example also illustrates the interplay between model development and data. The development of a model requires that the researcher decide on the choice set, on the structural elements of the model and on the arguments of those structural elements. In an ideal world, a researcher, based on prior knowledge, would choose a model, estimate it and provide a means to validate it. However, in part because there are only a few data sets on which to do independent validations and in part because it is not possible to foresee where models will fail to fit important features of data, the process by which DCDP models are developed and empirically implemented involves iterating among model specification, estimation and model validation (for example, checking model fit). Any empirical researcher will recognize this procedure regardless of whether the estimation approach is structural or nonstructural.

A researcher who wished to study the relationship between fertility and labor supply of married women would likely have in mind some notion of a model, and, in that context, begin by exploring the data. A reasonable first step would be to estimate regressions of participation and fertility as functions of “trial” state variables, interpreted as approximations to the decision rules in a DCDP model.51 As an example, consider a sample of white married women (in their first marriage) taken from the 1979-2004 rounds of the NLSY79. Ages at marriage range from 18 to 43, with three-fourths of these first marriages occurring before age 27. As is common in labor supply models, we take the discrete decision period to be a year.52 The participation measure consists of four mutually exclusive and exhaustive alternatives: working less than 500 hours during a calendar year image, working between 500 and 1499 hours image, working between 1500 and 2499 hours image and working 2500 hours or more image.53 The fertility measure is a dichotomous variable indicating whether or not the woman had a birth during the calendar year. The approximate decision rule for participation is estimated by an ordered probit and the fertility decision rule by a binary probit. The variables included in these approximate decision rules, corresponding to the original taxonomy in section II, are image = {total hours worked up to image, hours worked in image, whether a child was born in image, number of children born between image and image, number of children ever born, image (years of marriage up to image)} and image = {age of wife, age of spouse, schooling of wife, schooling of spouse}. Consistent with any DCDP model, the same state variables enter the approximate decision rules for participation and for fertility. As seen in Table 1, the state variables appear to be related to both decision variables and in reasonable ways.54

Table 1 Employment and fertility of married (white) women: NLSY79

                               Employment hours       Fertility
                               (ordered probit)a      (probit)b
Work experience (hours)        4.09E−05               8.32E−06
                               (3.22E−06)c            (4.33E−06)
Hours image                    1.04                   −0.047
                               (0.042)                (0.051)
Hours image = 2                1.90                   −0.126
                               (0.049)                (0.051)
Hours image                    3.16                   −0.222
                               (0.110)                (0.089)
Age                            −0.075                 0.211
                               (0.008)                (0.035)
Age squared                                           −0.004
                                                      (0.0005)
Birth image                    −0.497                 −0.320
                               (0.047)                (0.778)
Births (image to image)        −0.349                 0.448
                               (0.031)                (0.054)
Total births                   0.099                  −0.337
                               (0.028)                (0.061)
Schooling                      0.077                  0.004
                               (0.009)                (0.011)
Age of spouse                  0.007                  −0.016
                               (0.004)                (0.004)
Schooling of spouse            −0.036                 0.021
                               (0.007)                (0.010)
Marital duration               −0.025                 −0.015
                               (0.006)                (0.008)
Constant                                              −3.41
                                                      (0.497)
Cut point 1                    −0.888
                               (0.171)
Cut point 2                    0.076
                               (0.172)
Cut point 3                    2.48
                               (0.175)
Pseudo R2                      0.295                  0.094

a 8183 person-period observations.

b 8786 person-period observations.

c Robust standard errors in parenthesis.
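Approximate decision rules like the ordered probit in Table 1 can be estimated with standard routines. The following self-contained sketch fits an ordered probit by maximum likelihood on simulated data; the data-generating values are hypothetical and a single regressor stands in for the full list of state variables, with the cut-point increments parameterized in logs so the thresholds stay ordered during optimization.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated stand-in for the data: a latent index and three cut points
# generating four ordered "hours" categories (values are illustrative)
n = 3000
x = rng.normal(size=n)
beta_true, cuts_true = 1.0, np.array([-1.0, 0.0, 1.5])
ycat = np.searchsorted(cuts_true, beta_true * x + rng.normal(size=n))

def negloglik(theta):
    beta, c1, d2, d3 = theta
    # log-increment parameterization keeps the cut points strictly increasing
    cuts = np.array([c1, c1 + np.exp(d2), c1 + np.exp(d2) + np.exp(d3)])
    edges = np.concatenate([[-np.inf], cuts, [np.inf]])
    p = norm.cdf(edges[ycat + 1] - beta * x) - norm.cdf(edges[ycat] - beta * x)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(negloglik, x0=np.array([0.0, -0.5, 0.0, 0.0]), method="BFGS")
beta_hat, c1, d2, d3 = res.x
cuts_hat = np.array([c1, c1 + np.exp(d2), c1 + np.exp(d2) + np.exp(d3)])
```

The recovered slope and cut points are close to the generating values, mirroring the role the cut points play in the first column of Table 1.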

Suppose the researcher is satisfied that the state variables included in the approximate decision rules should be included in the DCDP model. The researcher must still decide where in the set of structural relationships the specific state variables should appear: the utility function, the market wage function, the husband’s earnings function and/or the budget constraint. The researcher must also decide whether and where to include unobserved heterogeneity and/or serially correlated errors. Some of these decisions will be governed by computational considerations. Partly because of that and partly to avoid overfitting, researchers tend to begin with parsimonious specifications in terms of the size of the state space. The “final” specification evolves through the iterative process described above.

As an example, let the married couple’s per-period utility flow include consumption image, a per-period disutility from each working alternative and a per-period utility flow from the stock of children image. The stock of children includes a newborn, that is, a child born at the beginning of period image. Thus,


image     (50)


where image and image are time-varying preference shocks associated with each of the four choices, assumed to be mutually serially uncorrelated. Allowing for unobserved heterogeneity, the type specification is (following (41))


image     (51)


where the image’s are mutually serially independent shocks.

The household budget constraint incorporates a cost of avoiding a birth (contraceptive costs, image), which, for biological reasons, will be a function of the wife’s age (her age at marriage, image, plus the duration of marriage, image) and (child) age-specific monetary costs of supplying children with consumption goods image and with child care if the woman works (image per work hour). Household income is the sum of the husband’s earnings image and the wife’s earnings, the product of an hourly wage image and hours worked (1000 hours if image, 2000 hours if image, 3000 hours if image). Specifically, the budget constraint is


image     (52)


where image are the number of children in image different age classes, e.g., 0-1, 2-5, etc.55 To simplify, we do not allow for uncertainty about births. A couple can choose to have a birth (with probability one) and thus not pay the contraceptive cost or choose not to have a birth (with probability one) and pay the avoidance cost.56

The wife’s Ben Porath-Griliches wage offer function depends on her level of human capital, image, which is assumed to be a function of the wife’s completed schooling image, assumed fixed after marriage, the wife’s work experience, that is, the number of hours worked up to image, and on the number of hours worked in the previous period:

image     (53)

image     (54)

where the image are (assumed to be time-invariant) competitively determined skill rental prices that may differ by hours worked and image is a time-varying shock to the wife’s human capital following a permanent (discrete type)-transitory scheme.57 The husband’s earnings depend on his human capital according to:

image     (55)

image     (56)

where image is the husband’s schooling and image is his age at image (his age at marriage plus image).58

The time-varying state variables, the stock of children (older than one) of different ages, the total stock of children and work experience, evolve according to:

image     (57)

image     (58)

The state variables in image, augmented to include type, consist of the stock of children (older than one) of different ages, the wife’s work experience and previous period work status, the husband’s and wife’s ages at marriage, the husband’s and wife’s schooling levels and the couple’s type. The choice set during periods when the wife is fecund, assumed to have a known terminal period image, consists of the four work alternatives plus the decision of whether or not to have a child. There are thus eight mutually exclusive choices, given by image, where the first superscript refers to the work choice image and the second to the fertility choice image.59 When the wife is no longer fecund, image and the choice set consists only of the four mutually exclusive alternatives, image.

The objective function of the couple is, as in the binary case, to choose the mutually exclusive alternative at each image that maximizes the remaining expected discounted value of the couple’s lifetime utility. Defining image to be the contemporaneous utility flow for the work and fertility choices, the alternative-specific value functions for the multinomial choice problem are


image     (59)


where, letting image be the vector of alternative specific value functions relevant at period image,


image     (60)


and where the expectation in (59) is taken over the joint distribution of the preference and income shocks, image.60 The image’s may have a general contemporaneous correlation structure, but, as noted, are mutually serially independent.

The model is solved by backwards recursion. The solution requires, as in the binary case, that the image function be calculated at each state point and for all image. In the model as it is now specified, the image function is a six-variate integral (over the preference shocks, the wife’s wage shock and the husband’s earnings shock). The state space at image consists of all feasible values of image. Notice that all of the state variables are discrete and the dimension of the state space is therefore finite. However, the state space, though finite, is huge. The reason is that to keep track of the number of children in each of the three age groups, it is necessary to keep track of the complete sequence of births. If a woman has, say, 30 fecund periods, the number of possible birth sequences is 2^30 = 1,073,741,824. Even without multiplying by the dimension of the other state variables, full solution of the dynamic programming problem is infeasible, leaving aside the iterative process necessary for estimation.
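The combinatorial claim is easy to verify directly; a minimal Python check, using the 30-period horizon from the text:

```python
# With 30 fecund periods and a binary birth choice in each, an exact
# solution would need to distinguish every possible birth sequence.
fecund_periods = 30
n_birth_sequences = 2 ** fecund_periods
print(n_birth_sequences)  # 1073741824 -- over one billion
```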

It is thus necessary to use an approximation method, among those previously discussed, for solving the dynamic programming problem, that is, for solving for the image functions. As an illustration, we present an interpolation method based on regression. To see how it works, consider first the calculation of the image for any given state space element. At image the woman is no longer fecund, so we need to calculate


image     (61)


where image is the six-tuple vector of shocks. Although this expression is a six-variate integration, at most four of the shocks actually affect image for any given image choice. Given the lack of a closed form expression, image must be calculated numerically. A straightforward method is Monte Carlo integration. Letting image be the image random draw, image, from the joint distribution image, an estimate of image at, say, the imageth value of the state space in image, is


image     (62)


Given the infeasibility of calculating image at all points in the state space, suppose one randomly draws image state points (without replacement) and calculates the image function for those image state space elements according to (62). We can treat these image values of image as a vector of dependent variables in an interpolating regression


image     (63)


where image is a time image vector of regression coefficients and image is a flexible function of state variables.61 With this interpolating function in hand, estimates of the image function can be obtained at any state point in the set image.
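The draw-and-interpolate cycle just described can be sketched in a few lines of Python. Everything concrete below is an illustrative assumption, not the model in the text: a made-up terminal-period payoff over four work alternatives, a single scalar experience state, an arbitrary polynomial-plus-log basis for the interpolating regression, and small draw counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (NOT the model in the text): the state is scalar
# work experience h; four work alternatives j = 0..3 carry additive normal
# preference shocks and a made-up deterministic payoff.
def terminal_payoff(j, h, eps):
    return 0.5 * j * np.log(1.0 + h) + eps[:, j]

D = 200        # Monte Carlo draws per state point
N_ALTS = 4

def emax_T(h):
    """Monte Carlo estimate of E[max_j v(j, h, eps)] at one state point."""
    eps = rng.normal(size=(D, N_ALTS))
    vals = np.stack([terminal_payoff(j, h, eps) for j in range(N_ALTS)], axis=1)
    return vals.max(axis=1).mean()

# Step 1: evaluate the Emax at a random subset of state points ...
states = rng.choice(np.arange(40), size=15, replace=False)
y = np.array([emax_T(h) for h in states])

# Step 2: ... and fit an interpolating regression in a flexible basis.
X = np.column_stack([np.ones(len(states)), states, states**2, np.log1p(states)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def emax_T_hat(h):
    """Interpolated Emax at any state point, sampled or not."""
    return np.array([1.0, h, h**2, np.log1p(h)]) @ beta
```

In the backward recursion, `emax_T_hat` would replace the exact Emax on the right-hand side of the previous period’s alternative-specific value functions, and the same draw-fit cycle is repeated at each earlier period.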

Given image, we can similarly calculate image at a subset of the state points in image. Using the image draws from image, the estimate of image at the imageth state space element is


image     (64)


where image is given by (59). Using the image calculated for image randomly drawn state points from image as the dependent variables in the interpolating function,


image     (65)


provides estimated values for the image function at any state point in the set image.62 Continuing this procedure, we can obtain the interpolating functions for all of the image functions for all image from image (the age at which the woman becomes infertile) through image, that is, image.

At image, the choice set now includes the birth of a child. All of the image functions from image to image require numerical integrations over the eight mutually exclusive choices based on the joint error distribution image. At any image within the fecund period, at the imageth state point,


image     (66)


Again taking image random draws from the state space at image, we can generate interpolating functions:63


image     (67)


In the binary case with additive normal errors, the cut-off values for the participation decision, which were the ingredients for the likelihood function calculation, were analytical. Moreover, although the likelihood function (35) did not have a closed form representation, it required the calculation only of a univariate cumulative normal distribution. In the multinomial choice setting we have described, the set of values of the image vector determining optimal choices and serving as limits of integration in the probabilities associated with the work alternatives that comprise the likelihood function have no analytical form and the likelihood function requires a multivariate integration.

To accommodate these complications, maximum likelihood estimation of the model uses simulation methods. To describe the procedure, let the set of values of image for which the imageth choice is optimal at image be denoted by image. Consider the probability that a couple chooses neither to work nor have a child, image, in a fecund period image:


image     (68)


This integral can be simulated by randomly taking image draws from the joint distribution of image, with draws denoted by image, and determining the fraction of times that the value function for that alternative is the largest among all eight feasible alternatives, that is,


image     (69)


One can similarly form an estimate of the probability for other nonwork alternatives, namely for image for any image and for image for any image. Recall that for infecund periods, there are only four alternatives because image is constrained to be zero.
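The logic of the frequency simulator — draw the shocks, find which alternative attains the maximum value, average the indicators — can be sketched as follows; the three-alternative value function here is a made-up stand-in, not the model’s alternative-specific value functions:

```python
import numpy as np

rng = np.random.default_rng(1)

def frequency_simulator(value_fn, n_alts, n_draws, shock_dim):
    """Crude frequency simulator: the fraction of shock draws for which
    each alternative attains the maximum alternative-specific value."""
    eps = rng.normal(size=(n_draws, shock_dim))
    vals = np.stack([value_fn(j, eps) for j in range(n_alts)], axis=1)
    best = vals.argmax(axis=1)
    return np.bincount(best, minlength=n_alts) / n_draws

# Made-up example: three alternatives whose values are alternative-specific
# intercepts plus the corresponding shock component.
probs = frequency_simulator(lambda j, e: [0.0, 0.5, 1.0][j] + e[:, j],
                            n_alts=3, n_draws=50_000, shock_dim=3)
# probs sums to one; the alternative with the largest intercept is chosen
# most often.
```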

When the wife works, the relevant probability contains the chosen joint alternative image and the observed wage. For concreteness, consider the case where image, image. Then the likelihood contribution for an individual who works 2000 hours in period image at a wage of image is

image     (70)

image     (71)

For illustrative purposes, suppose that the (log) wage equation is additive in image,


image     (72)


and further that image is joint normal.64 With these assumptions, and denoting the deterministic part of the right hand side of (72) by image, we can write


image     (73)


where image is the Jacobian of the transformation from the distribution of image to the distribution of image. Under these assumptions image is normal and the frequency simulator for the conditional probability takes the same form as (69) except that image is set equal to image and the other five image’s are drawn from image. Thus, denoting the fixed value of image as image,


image     (74)


Although these frequency simulators converge to the true probabilities as image, there is a practical problem in implementing this approach. Even for large image, the likelihood is not smooth in the parameters, which precludes the use of derivative methods (e.g., BHHH). This lack of smoothness forces the use of non-derivative methods, which converge more slowly. However, frequency simulators can be smoothed, which makes the likelihood function differentiable and improves the performance of optimization routines. One example is the smoothed logit simulator (McFadden, 1989), namely (in the case we just considered),


image     (75)


where image is shorthand for the value functions in (74) and image is a smoothing parameter. As image, the RHS converges to the frequency simulator. The other choice probabilities associated with work alternatives are similarly calculated.
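A sketch of the logit smoothing, assuming the alternative-specific values have already been computed for each shock draw (the values below are simulated stand-ins with arbitrary intercepts):

```python
import numpy as np

def smoothed_logit_simulator(values, tau):
    """McFadden's logit-smoothed frequency simulator: each draw's argmax
    indicator is replaced by logit weights with smoothing parameter tau,
    then averaged over draws.  As tau -> 0 this converges to the crude
    frequency simulator, but for tau > 0 it is smooth in the parameters
    entering `values`."""
    scaled = values / tau
    scaled -= scaled.max(axis=1, keepdims=True)   # guard against overflow
    w = np.exp(scaled)
    w /= w.sum(axis=1, keepdims=True)
    return w.mean(axis=0)

rng = np.random.default_rng(2)
# Simulated alternative-specific values: one row per shock draw, one column
# per alternative (made-up intercepts, standard normal shocks).
vals = rng.normal(size=(10_000, 4)) + np.array([0.0, 0.2, 0.4, 0.6])
smooth = smoothed_logit_simulator(vals, tau=0.05)
crude = np.bincount(vals.argmax(axis=1), minlength=4) / len(vals)
# For small tau, smooth is close to crude but differentiable.
```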

3.2.1 Alternative estimation approaches

Conceptually, any dynamic programming problem that admits a numerical solution can be estimated. In addition to simulated maximum likelihood, researchers have used various alternative simulation estimation methods, including minimum distance estimation, simulated method of moments and indirect inference. There is nothing special about the application of these estimation methods to DCDP models, other than having to iterate between solving the dynamic programming problem and minimizing a statistical objective function.

The main limiting factor in estimating DCDP models is the computational burden associated with the iterative process. It is therefore not surprising that there have been continuing efforts to reduce the computational burden of estimating DCDP models. We briefly review two such methods.

A Bayesian approach

As has been discussed elsewhere (see Geweke and Keane (2000)), it is difficult to apply the Bayesian approach to inference in DCDP models because the posterior distribution of the model parameters given the data is typically intractably complex. Recently, however, computationally practical Bayesian approaches that rely on Markov Chain Monte Carlo (MCMC) methods have been developed by Imai et al. (2009) and Norets (2009). We will discuss the Imai et al. (2009) approach in the stationary case, where it is most effective. Thus, we remove time superscripts from the value functions and denote image as the next period state. We also make the parameter vector image explicit. Thus, corresponding to Eqs. (20) and (21), we have


image     (76)


where


image     (77)


The basic idea is to treat not only the parameters but also the value functions and expected value functions as objects to be updated on each iteration of the MCMC algorithm. Hence, we add the superscript image to the value functions, the expected value functions and the parameters to denote the values of these objects on iteration image. We use image to denote the approximation to the expected value and image to denote the likelihood.

The Imai et al. (2009) algorithm consists of three steps: the parameter update step (using the Metropolis-Hastings algorithm), the Dynamic Programming step, and the expected value approximation step:

(1) The Parameter Updating Step (Metropolis-Hastings algorithm)

First, draw a candidate parameter vector from the proposal density image. Then, evaluate the likelihood conditional on image and conditional on image. Now, form the acceptance probability


image     (78)


We then accept image with probability image, that is,


image     (79)
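The accept/reject step can be sketched as follows. The sketch assumes a symmetric random-walk proposal, so the proposal density cancels from the acceptance probability, and a toy standard-normal target stands in for the DCDP likelihood (which, in the actual algorithm, is evaluated with the approximate value functions of the Dynamic Programming step):

```python
import numpy as np

rng = np.random.default_rng(3)

def mh_step(theta, log_post, proposal_scale=0.5):
    """One random-walk Metropolis-Hastings update.  With a symmetric
    proposal, the acceptance probability reduces to
    min(1, post(candidate) / post(current))."""
    theta_star = theta + proposal_scale * rng.normal()
    log_alpha = min(0.0, log_post(theta_star) - log_post(theta))
    if np.log(rng.uniform()) < log_alpha:
        return theta_star     # accept the candidate
    return theta              # reject: keep the current draw

# Toy target: a standard normal posterior.
log_post = lambda th: -0.5 * th**2
draws = np.empty(20_000)
theta = 3.0                   # deliberately poor starting value
for s in range(draws.size):
    theta = mh_step(theta, log_post)
    draws[s] = theta
# After burn-in, the draws approximate the N(0, 1) target.
```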


(2) The Dynamic Programming (or Bellman equation iteration) Step

The following Bellman equation step is nested within the parameter updating step:

image     (80)

image     (81)

The difficulty here is in obtaining the expected value function approximation that appears on the right hand side of (81). We describe this next.

(3) Expected value approximation step.

The expected value function approximation is computed using information from earlier iterations of the MCMC algorithm. The problem is that, on iteration (s), the value functions have not, in general, yet been calculated at the specific parameter value image drawn on that iteration. Intuitively, the idea is to approximate the expected value functions at image using value functions that were already calculated on earlier iterations of the MCMC algorithm, emphasizing parameter values that are in some sense “close” to image.

Specifically, the expected value function is approximated as


image     (82)


where image denotes a parameter value from an earlier iteration image of the MCMC algorithm and image is the value function at state point image that was calculated on iteration image.65 Finally, image is a weighting function that formalizes the notion of closeness between image and image. Imai et al. (2009) use a weighting function given by


image     (83)


where image is a kernel with bandwidth image.
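A sketch of the weighting step, with a Gaussian kernel standing in for the generic kernel and a smooth one-dimensional toy value function in place of the stored DP solutions:

```python
import numpy as np

def approx_emax(theta_s, past_thetas, past_values, bandwidth):
    """Kernel-weighted average of value functions computed at parameter
    draws from earlier iterations; a Gaussian kernel stands in for the
    generic kernel in the text."""
    d = (np.asarray(past_thetas) - theta_s) / bandwidth
    w = np.exp(-0.5 * d**2)
    w /= w.sum()
    return w @ np.asarray(past_values)

# Toy check: with a value function that is smooth in theta, the weighted
# average at a new theta is close to the truth for a narrow bandwidth.
rng = np.random.default_rng(4)
past_thetas = rng.uniform(-1.0, 1.0, size=500)   # "earlier MCMC draws"
past_values = np.sin(past_thetas) + 0.01 * rng.normal(size=500)
est = approx_emax(0.3, past_thetas, past_values, bandwidth=0.05)
# est is close to sin(0.3)
```

In the actual algorithm the average runs over a moving window of recent iterations (the “forgetting” condition below) and the bandwidth narrows as iterations accumulate.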

Under certain conditions, as the number of iterations grows large, the output of this algorithm generates convergence to the posterior distribution of the parameter vector, as well as convergence to the correct (state and parameter contingent) value functions. One condition is “forgetting.” That is, the algorithm will typically be initialized using rather arbitrary initial value functions. Hence, the sum in (82) should be taken using a moving window of more recent iterations so early iterations are dropped. Another key point is that, as one iterates, more lagged values of image become available, so more values that are “close” to the current image will become available. Hence, the bandwidth in the kernel smoother in (83) should become narrower as one iterates. Note that satisfying both the “forgetting” and “narrowing” conditions simultaneously requires that the “moving window” mentioned earlier must expand as one iterates, but not too quickly. Norets (2009) and Imai et al. (2009) derive precise rates.

The Bayesian methods described here are in principle applicable to non-stationary models as well. This should be obvious given that a non-stationary model can always be represented as a stationary model with (enough) age specific variables included in the state space. However, this creates the usual curse of dimensionality, as the state space may expand substantially as a result. Unlike, say, the approximate solution algorithm proposed by Keane and Wolpin (1994), these Bayesian algorithms are not designed (or intended) to be methods for handling extremely large state space problems. Combining the two ideas is a useful avenue for future research.

It is worth noting that no DCDP work that we are aware of has reported a distribution of policy simulations that accounts for parameter uncertainty; this is also rarely done in nonstructural work.66 The Bayesian approach provides a natural way to do so, and Imai et al. (2009) have produced code that generates such a distribution.

A non-full solution method

Hotz and Miller (1993, hereafter HM) developed a method for implementing DCDP models that does not involve solving the DP problem, that is, calculating the image functions. HM prove that, for additive errors, the image functions can be written solely as functions of conditional choice probabilities and state variables for any joint distribution of the additive shocks. Although the method does not require that the errors be distributed extreme value, its computational advantage is best exploited under that assumption.

Consider again the binary choice model.67 From (38), one can see that if we have an estimate of the conditional choice probabilities at all state points, image can also be calculated at all state points. Denoting the (estimate of the) conditional choice probability by image,


image     (84)


Consider now period image and suppose we have an estimate of the conditional choice probabilities, image. Then,


image     (85)


where, for convenience, we have included only work experience in the image function. We can continue substituting the estimated conditional choice probabilities in this recursive manner, yielding at any image


image     (86)


These image functions can be used in determining the image cut-off values that enter the likelihood function.
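Although the HM result holds for any joint distribution of the additive shocks, the inversion is simplest under i.i.d. type-1 extreme value errors: the expected maximum equals Euler’s constant plus the log-sum of exponentiated values, and the log-sum is recoverable from any single alternative’s deterministic value and its conditional choice probability. A minimal numerical check of that identity (the two-alternative values are arbitrary):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def emax_from_ccp(v_j, p_j):
    """Under i.i.d. type-1 extreme value shocks,
    E[max_k (v_k + eps_k)] = gamma + log(sum_k exp(v_k)), and
    log(sum_k exp(v_k)) = v_j - log(p_j) for ANY alternative j, so the
    expected maximum is recoverable from one alternative's deterministic
    value and its conditional choice probability."""
    return EULER_GAMMA + v_j - np.log(p_j)

# Two-alternative check with arbitrary deterministic values.
v = np.array([0.2, 1.0])
p = np.exp(v) / np.exp(v).sum()                 # implied logit CCPs
via_ccp = emax_from_ccp(v[0], p[0])             # CCP-based inversion
direct = EULER_GAMMA + np.log(np.exp(v).sum())  # closed-form Emax
# via_ccp equals direct up to floating point
```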

As with other approaches, there are limitations. First, the empirical strategy involves estimating the conditional choice probabilities from the data (nonparametrically if the data permit). In the case at hand, the conditional choice probabilities correspond to the proportion of women who work for given values of the state variables (for example, for all levels of work experience). To implement this procedure, one needs estimates of the conditional choice probabilities through the final decision period and for each possible value of the state space. Thus, we need longitudinal data that either extend to the end of the decision period or we must assume that the conditional choice probabilities can be obtained from synthetic cohorts. This latter method requires an assumption of stationarity, that is, in forecasting the conditional choice probabilities of a 30 year old observed in year image upon reaching age 60 in year image, it is assumed that the 30 year old would face the same decision-making environment (for example, the same wage offer function) as the 60 year old observed in year image. Most DCDP models in the literature that solve the full dynamic programming problem implicitly make such an assumption as well, though it is not dictated by the method.68 Moreover, it must also be assumed that there are no state variables observed by the agent but unobserved by us; otherwise, we would not be matching the 30 year olds to 60 year olds with the same unobserved state values.69 Second, the convenience of using additive extreme value errors brings with it the previously discussed limitations of that assumption. Third, the estimates are not efficient, because the fact that the image functions themselves contain the parameters of the model structure is not taken into account.
