Similar Articles
20 similar articles found.
1.
We propose and examine a panel data model for isolating the effect of a treatment, taken once at baseline, from outcomes observed over subsequent time periods. In the model, the treatment intake and outcomes are assumed to be correlated, due to unobserved or unmeasured confounders. Intake is partly determined by a set of instrumental variables, and the confounding on unobservables is modeled in a flexible way, varying both by time and by treatment state. Covariate effects are assumed to be subject-specific and potentially correlated with other covariates. Estimation and inference are by Bayesian methods, implemented with tuned Markov chain Monte Carlo algorithms. Because our analysis is based on the framework developed by Chib [2004. Analysis of treatment response data without the joint distribution of counterfactuals. Journal of Econometrics, in press], the modeling and estimation involve neither the unknowable joint distribution of the potential outcomes nor the missing counterfactuals. The problem of model choice through marginal likelihoods and Bayes factors is also considered. The methods are illustrated in simulation experiments and in an application dealing with the effect of participation in high school athletics on future labor market earnings.
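A schematic of the kind of two-equation structure the abstract describes; the notation below is illustrative, not taken from the paper:

```latex
% Illustrative notation only. Binary treatment intake at baseline,
% partly driven by instruments z_i:
x_i \;=\; \mathbf{1}\{\, z_i'\gamma + u_i > 0 \,\}
% Outcomes in periods t = 1, \dots, T, with subject-specific covariate
% effects b_i:
y_{it} \;=\; w_{it}'(\beta + b_i) \;+\; \alpha_t\, x_i \;+\; \varepsilon_{it}
% Confounding on unobservables: u_i and \varepsilon_{it} are dependent,
% with a dependence that may vary over t and across treatment states.
```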

2.
Does the use of information on the past history of nominal interest rates and inflation improve forecasts of the ex ante real interest rate over forecasts obtained from the past history of realized real interest rates alone? To answer this question, we set up a univariate unobserved components model for the realized real interest rates and a bivariate model for the nominal rate and inflation that imposes cointegration restrictions between them. The two models are estimated under normality with the Kalman filter. It is found that the error-correction model provides more accurate one-period-ahead forecasts of the real rate within the estimation sample, whereas the unobserved components model yields forecasts with smaller forecast variances. In the post-sample period, the forecasts from the bivariate model are not only more accurate but also have tighter confidence bounds than the forecasts from the unobserved components model.
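A minimal sketch of the univariate leg of this comparison, a local-level unobserved components model estimated by maximum likelihood via the Kalman filter, using statsmodels; the series below is a synthetic placeholder, not the paper's data:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic placeholder series standing in for the realized real rate.
rng = np.random.default_rng(0)
real_rate = np.cumsum(rng.normal(0.0, 0.1, 200)) + rng.normal(0.0, 0.5, 200)

# Local-level unobserved components model, estimated via the Kalman filter.
mod = sm.tsa.UnobservedComponents(real_rate, level="local level")
res = mod.fit(disp=False)

# One-period-ahead forecast of the underlying rate, with its standard error.
fc = res.get_forecast(steps=1)
print(fc.predicted_mean, fc.se_mean)
```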

3.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.
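A hedged sketch of the fixed-effects two-stage least squares step that the tests build on: within-demean each variable by panel unit, then instrument the endogenous regressor. The data, variable names, and the linearmodels dependency are illustrative assumptions, not the paper's setup:

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS  # assumed available

# Synthetic panel: n units over t periods, endogenous x instrumented by z.
rng = np.random.default_rng(1)
n, t = 200, 5
ids = np.repeat(np.arange(n), t)
alpha = rng.normal(size=n)[ids]            # unit-specific heterogeneity
z = rng.normal(size=n * t)                 # instrument
u = rng.normal(size=n * t)                 # endogeneity channel
x = 0.8 * z + alpha + u
y = 1.5 * x + alpha + u + rng.normal(size=n * t)
df = pd.DataFrame({"id": ids, "y": y, "x": x, "z": z})

# Within transformation wipes out the fixed effects ...
dm = df[["y", "x", "z"]] - df.groupby("id")[["y", "x", "z"]].transform("mean")

# ... then 2SLS on the demeaned data: fixed-effects 2SLS.
res = IV2SLS(dm["y"], None, dm[["x"]], dm[["z"]]).fit()
print(res.params)  # should be near the true coefficient 1.5
```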

4.
Causality: a Statistical View
Statistical aspects of causality are reviewed in simple form and the impact of recent work is discussed. Three distinct notions of causality are set out, and their implications for densities and for linear dependencies are explained. The importance of appreciating the possibility of effect modifiers is stressed, be they intermediate variables, background variables or unobserved confounders. In many contexts the issue of unobserved confounders is salient. The difficulties of interpretation when there are joint effects are discussed and possible modifications of the analysis explained. The dangers of uncritical conditioning and of marginalization over intermediate response variables are set out, and some of the problems of generalizing conclusions to populations and individuals are explained. In general terms, the importance of searching for possibly causal variables is stressed, but the need for caution is emphasized.

5.
This paper discusses identification based on difference-in-differences (DiD) approaches with multiple treatments. It shows that an appropriate adaptation of the common trend assumption underlying the DiD strategy for the comparison of two treatments restricts the possibility of effect heterogeneity for at least one of the treatments. The required assumption of effect homogeneity is likely to be violated because of non-random assignment to treatment based on both observables and unobservables. However, this paper shows that, under certain conditions, the DiD estimate comparing two treatments identifies a lower bound, in absolute value, on the average treatment effect on the treated relative to the unobserved non-treatment state, even if effect homogeneity is violated. This is possible if the treatments have ordered treatment effects: in expectation, the effects of both treatments compared to no treatment have the same sign, and one treatment has a stronger effect than the other on its respective recipients. Such assumptions are plausible if treatments are ordered or vary in intensity.
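In schematic form (notation illustrative, not the paper's): with two treatments A and B, the two-treatment DiD contrast and the bounding result read

```latex
\widehat{\Delta}_{\mathrm{DiD}}
  \;=\; \big( \bar{Y}^{\,\mathrm{post}}_{A} - \bar{Y}^{\,\mathrm{pre}}_{A} \big)
  \;-\; \big( \bar{Y}^{\,\mathrm{post}}_{B} - \bar{Y}^{\,\mathrm{pre}}_{B} \big)
% Under a common trend across the two treated groups and ordered effects
% (same sign, A at least as strong as B on its recipients):
|\widehat{\Delta}_{\mathrm{DiD}}| \;\le\; |\mathrm{ATT}_{A}|
% i.e. the two-treatment DiD is a lower bound in absolute value on the
% effect of A versus the unobserved non-treatment state.
```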

6.
This paper investigates to what extent the persistence of Microsoft Windows in the market for server operating systems is due to lock-in or unobserved preferences. While the hypothesis of lock-in plays an important role in the antitrust policy debate for the operating systems market, it has not been extensively documented empirically. To account for unobserved preferences, we use a panel data identification approach based on time-variant group fixed effects, and estimate the dynamic discrete choice panel data model developed by Arellano and Carrasco (2003). Using detailed establishment-level data, we find that once we account for unobserved preferences, the estimated magnitudes of lock-in are considerably smaller than those from the conventional approaches, suggesting that unobserved preferences play a major role in the persistence of Windows. Further robustness checks are consistent with our findings.

7.
This paper proposes a general framework for the analysis of survey data with missing observations. The approach presented here treats missing data as an unavoidable feature of any survey of the human population and aims at incorporating the unobserved part of the data into the analysis rather than trying to avoid it or make up for it. To handle coverage error and unit non-response, the true distribution is modeled as a mixture of an observable and an unobservable component. Generally, for the unobserved component, neither its relative size (the no-observation rate) nor its distribution is known. It is assumed that the goal of the analysis is to assess the fit of a statistical model, and for this purpose the mixture index of fit is used. The mixture index of fit does not postulate that the statistical model of interest accounts for the entire population; rather, it allows that the model may describe only a fraction of it. This leads to another mixture representation of the true distribution, with one component from the statistical model of interest and another, unrestricted one. Inference with respect to the fit of the model, with missing data taken into account, is obtained by equating these two mixtures and asking, for different no-observation rates, what is the largest fraction of the population for which the statistical model may hold. A statistical model is deemed relevant for the population if it can account for a large enough fraction of it, assuming the true (if known), a sufficiently small, or a realistic no-observation rate.
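The two mixture representations, written out in illustrative notation (not the paper's):

```latex
% Coverage error / unit non-response: observable and unobservable parts,
% with no-observation rate \lambda:
P \;=\; (1-\lambda)\, P_{\mathrm{obs}} \;+\; \lambda\, P_{\mathrm{unobs}}
% Mixture index of fit: model M accounts for a fraction 1-\pi, with an
% unrestricted residual component R:
P \;=\; (1-\pi)\, M \;+\; \pi\, R
% Equating the two and varying \lambda gives the largest fraction 1-\pi
% of the population for which M may hold.
```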

8.
The paper estimates a large-scale mixed-frequency dynamic factor model for the euro area, using monthly series along with gross domestic product (GDP) and its main components, obtained from the quarterly national accounts (NA). The latter define broad measures of real economic activity (such as GDP and its decomposition by expenditure type and by branch of activity) that we wish to include in the factor model in order to improve its coverage of the economy and thus the representativeness of the factors. The main problem with their inclusion is not one of model consistency but of data availability and timeliness, as the NA series are quarterly and are published with a long lag. Our model is a traditional dynamic factor model formulated at the monthly frequency in terms of the stationary representation of the variables, which, however, becomes nonlinear when the observational constraints are taken into account. These are of two kinds: nonlinear temporal aggregation constraints, because the model is formulated in terms of the unobserved monthly logarithmic changes while we observe only the sum of the monthly levels within a quarter, and nonlinear cross-sectional constraints, since GDP and its main components are linked by the NA identities but the series are expressed in chained volumes. The paper provides an exact treatment of the observational constraints and proposes iterative algorithms for estimating the parameters of the factor model and for signal extraction, thereby producing nowcasts of monthly GDP and its main components, as well as measures of their reliability.
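The temporal side of those constraints, in illustrative notation: the model works with monthly log-changes, but only quarterly sums of levels are observed, so each observation is a nonlinear function of the states:

```latex
% y_t = \Delta\log Y_t is the unobserved monthly log-change; the data are
% quarterly sums of monthly levels:
X_q \;=\; \sum_{j=0}^{2} Y_{3q-2+j},
\qquad \log Y_t \;=\; \log Y_{t-1} + y_t
% so X_q depends on the y_t through exponentials — the nonlinear temporal
% aggregation constraint described in the abstract.
```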

9.
The aim of this paper is to convey to a wider audience of applied statisticians that nonparametric (matching) estimation methods can be a very convenient tool for overcoming problems with endogenous control variables. In empirical research one is often interested in the causal effect of a variable X on some outcome variable Y. With observational data, i.e. in the absence of random assignment, the correlation between X and Y generally does not reflect the treatment effect but is confounded by differences in observed and unobserved characteristics. Econometricians often use two different approaches to overcome this problem of confounding by other characteristics: first, controlling for observed characteristics, often referred to as selection on observables; second, instrumental variables regression, usually with additional control variables. Instrumental variables estimation is probably the most important estimator in applied work. In many applications, these control variables are themselves correlated with the error term, making ordinary least squares and two-stage least squares inconsistent. The usual solution is to search for additional instrumental variables for these endogenous control variables, which is often difficult. We argue that nonparametric methods help to reduce the number of instruments needed: in fact, we need only one instrument, whereas with conventional approaches one may need two, three or even more instruments for consistency. Nonparametric matching estimators permit consistent estimation without the need for (additional) instrumental variables and permit arbitrary functional forms and treatment effect heterogeneity.
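A minimal illustration of the matching idea under selection on observables (not the paper's estimator; the data and the effect size are synthetic assumptions): match each treated unit to its nearest control on covariates and average the outcome differences:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic data: treatment d depends on X, true treatment effect is 2.0.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
d = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * d + rng.normal(size=500)

# One-to-one nearest-neighbour matching of treated units to controls on X.
nn = NearestNeighbors(n_neighbors=1).fit(X[d == 0])
_, idx = nn.kneighbors(X[d == 1])

# Average treated-minus-matched-control outcome difference: the ATT.
att = (y[d == 1] - y[d == 0][idx.ravel()]).mean()
print(att)  # close to 2.0 in this design
```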

10.
We examine provider and patient behavior in a dynamic model where effort is noncontractible, competition between providers is modeled in an explicit way and where patients' outside options are solved for in equilibrium. Physicians are characterized by an individual-specific ethical constraint which allows for unobserved heterogeneity. This introduces uncertainty in the patient's expected treatment if he were to leave his current physician to seek care elsewhere. We also introduce switching costs and uncertainty in the treatment–outcome relationship. Our model generates equilibria with treatment heterogeneity, unstable physician–patient relationships, and overtreatment (a form of defensive medicine).

11.
This article suggests that the macro and micro approaches to organizational analysis represent different perspectives on the management of organizations. Differences and crossover points in the two approaches are discussed in terms of research focus, unit of analysis, and application. An integrative approach is presented in a framework that brings together key aspects of the macro and micro schools of thought. Data from a preliminary test are presented, and it is suggested that our understanding of organizations and their management may be enhanced when a holistic perspective is utilized.

12.
Growth, cycles and convergence in US regional time series
This article reports the results of fitting unobserved components (structural) time series models to data on real income per capita in eight regions of the United States. The aim is to establish stylised facts about cycles and convergence. It appears that while the cycles are highly correlated, the two richest regions have been diverging from the others in recent years. A new model is developed in order to characterise the converging behaviour of the six poorest regions. The model combines convergence components with a common trend and cycles. These convergence components are formulated as a second-order error correction mechanism which allows temporary divergence while imposing eventual convergence. After fitting the model, the implications for forecasting are examined. Finally, the use of unit root tests for testing convergence is critically assessed in the light of the stylised facts obtained from the fitted models.
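A sketch of the structural time-series starting point, before the paper's convergence components are added: a stochastic trend plus cycle fitted to a single regional income series with statsmodels; the series below is synthetic:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for log real income per capita in one region.
rng = np.random.default_rng(3)
t = np.arange(160)
y = 0.02 * t + np.cumsum(rng.normal(0, 0.02, 160)) + 0.1 * np.sin(t / 8)

# Local linear trend plus stochastic cycle.
mod = sm.tsa.UnobservedComponents(
    y, level="local linear trend", cycle=True, stochastic_cycle=True
)
res = mod.fit(disp=False)
print(res.summary())  # variances of the level, slope and cycle components
```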

13.
Over time, economic statistics are refined. This implies that data measuring recent economic events are typically less reliable than older data. Such time variation in measurement error affects optimal forecasts. Measurement error, and its time variation, are of course unobserved. Our contribution is to show how estimates of these can be recovered from the variance of revisions to data, using a behavioural model of the statistics agency. We illustrate the gains in forecasting performance from exploiting these estimates using a real-time dataset on UK aggregate expenditure data.
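The paper's starting point lends itself to a small numerical sketch: treat the latest vintage as final and estimate a measurement-error variance per data maturity from observed revisions. The `vintages` array and its layout are hypothetical:

```python
import numpy as np

def revision_variances(vintages: np.ndarray) -> np.ndarray:
    """vintages: hypothetical (T, K) array; row t holds the K successive
    published estimates of period-t expenditure growth, oldest first."""
    final = vintages[:, -1]                        # proxy for the true value
    revisions = vintages[:, :-1] - final[:, None]  # remaining revisions
    return revisions.var(axis=0)                   # error variance by maturity

# Early vintages (low maturity) should show the largest variances; a
# forecaster can downweight the newest data points accordingly.
```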

14.
We test a job ladders theory of career progression within internal labor markets, as developed by Lazear and Rosen (1990). The theory argues that gender promotion gaps are due to sorting of men and women into career tracks with different promotion opportunities, based on ex ante quit probabilities. Analyzing US federal government employees with a dynamic unobserved panel data model, we find that job assignment is one of the strongest predictors of gender differences in promotion. We also find that women have to clear higher performance hurdles to be promoted across grades, but, within grades, their promotion probabilities are comparable to those of men. In this organization, women can be found in both fast- and slow-track jobs, based on their promotion history, suggesting that unobserved heterogeneity is revealed to the firm over the worker's career.

15.
Models for the 12-month-ahead US rate of inflation, measured by the chain-weighted consumer expenditure deflator, are estimated for 1974–98, and subsequent pseudo out-of-sample forecasting performance is examined. Alternative forecasting approaches for different information sets are compared with benchmark univariate autoregressive models, and substantial out-performance is demonstrated, including against Stock and Watson's unobserved components-stochastic volatility model. Three key ingredients of the out-performance are: including equilibrium correction terms in relative prices; introducing nonlinearities to proxy state-dependence in the inflation process; and replacing the information criterion commonly used in VARs to select lag length with a 'parsimonious longer lags' parameterization. Forecast pooling or averaging also improves forecast performance.

16.
We present new Monte Carlo evidence regarding the feasibility of separating causality from selection within non-experimental duration data, by means of the non-parametric maximum likelihood estimator (NPMLE). Key findings are: (i) the NPMLE is extremely reliable, and it accurately separates the causal effects of treatment and duration dependence from sorting effects, almost regardless of the true unobserved heterogeneity distribution; (ii) the NPMLE is normally distributed, and standard errors can be computed directly from the optimally selected model; and (iii) unjustified restrictions on the heterogeneity distribution, e.g., in terms of a pre-specified number of support points, may cause substantial bias.
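The likelihood the NPMLE maximizes, in illustrative notation: unobserved heterogeneity v enters as a discrete mixture whose support points and masses are estimated rather than pre-specified:

```latex
\mathcal{L}\big(\theta, \{v_k, p_k\}_{k=1}^{K}\big)
  \;=\; \prod_{i=1}^{N} \; \sum_{k=1}^{K} p_k \, L_i(\theta \mid v_k),
\qquad \sum_{k} p_k = 1, \;\; p_k \ge 0
% with K grown until the likelihood stops improving — fixing K in advance
% is exactly the kind of restriction the abstract warns may bias results.
```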

17.

This study estimates technical efficiency measures for maize-producing farm households in Ethiopia using stochastic frontier (SF) panel models that take different approaches to modelling firm heterogeneity. The efficiency measures are found to vary depending on how the estimation model treats both unobserved and observed firm heterogeneity. Estimates from the 'true' random effects (TRE) models, which treat firm effects as heterogeneity, are found to be identical to those from pooled SF models. Those results differ from the ones generated by the basic random effects (RE) models, which treat firm effects as part of overall technical inefficiency. The more flexible generalised 'true' random effects (GTRE) model, which splits the error term into firm effects, persistent inefficiency, transient inefficiency, and a random noise component, indicates the presence of higher levels of persistent than transient inefficiency. The basic truncated-normal RE model and the heteroscedastic RE model yield similar efficiency estimates. The GTRE model predicts persistent efficiency measures similar to those from the basic RE and the flexible RE model with environmental variables incorporated in the variance function as well as in the deterministic production frontier. These results imply that the RE and GTRE panel models provide reliable efficiency estimates for our data compared to the TRE models. All the estimated SF models generate comparable production function parameters in terms of magnitude and sign. Overall, the results underscore the importance of scrutinising stochastic frontier models for the reliability of their analytical results before drawing policy inferences.
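The GTRE error decomposition referred to in the abstract, in standard (illustrative) notation:

```latex
y_{it} \;=\; \alpha \;+\; x_{it}'\beta \;+\; \mu_i \;-\; \eta_i \;+\; v_{it} \;-\; u_{it}
% \mu_i: firm effects (heterogeneity, two-sided)
% \eta_i \ge 0: persistent inefficiency
% v_{it}: random noise (two-sided)
% u_{it} \ge 0: transient inefficiency
```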


18.
A bivariate Markov-switching model identifies two regimes in the futures-price and risk-premium models. The persistent underlying states have very different implications for spot and risk-premium forecasts. In the “low” state, a positive bias predicts spot price appreciation. The “high” state is associated with lower spot appreciation and higher risk premiums. The regime-switching framework provides a new perspective on the intertemporal role of gold as a hedge or safe-haven asset. The gold spot-price appreciation regime is shown to be correlated with higher inflation rates, while the complement regime is associated with high market returns and stock market risk premia. Since the state-space procedure can be employed using only past data, forecasts of the persistent unobserved underlying state of the gold price appreciation regime can be updated as more data become available.
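A minimal two-regime Markov-switching sketch with switching mean and variance, using statsmodels; the series is a synthetic stand-in for returns data, not the paper's:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic series: a calm regime followed by a volatile, high-mean regime.
rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.1, 0.5, 150), rng.normal(1.0, 1.5, 150)])

# Two regimes, switching intercept and switching variance.
mod = sm.tsa.MarkovRegression(y, k_regimes=2, switching_variance=True)
res = mod.fit()

print(res.params)
print(res.smoothed_marginal_probabilities[:5])  # regime probabilities by date
```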

19.
The interplay between the Bayesian and Frequentist approaches: a general nesting spatial panel-data model
An econometric framework mixing the Frequentist and Bayesian approaches is proposed in order to estimate a general nesting spatial panel-data model. First, it avoids specific dependency structures between unobserved heterogeneity and regressors, which improves the mixing properties of Markov chain Monte Carlo (MCMC) procedures in the presence of unobserved heterogeneity. Second, it allows model selection based on a strong statistical framework, a characteristic not easily obtained with a Frequentist approach. We perform simulation exercises, finding that our approach has good properties, and apply the methodology to analyse the relation between productivity and public investment in the United States.

20.
Using subjective well-being estimations, this study analyzes whether compensating variations vary across space, using a cross-sectional data set from Chile. To achieve this goal, it describes and compares two econometric ways of modelling unobserved spatial heterogeneity. Both approaches allow compensating variations to vary across spatial units by assuming some distribution a priori. One method assumes that the spatial heterogeneity can be represented by a discrete distribution (a group of regions that share the same coefficient), and the other assumes that the preferences can be represented by a continuous distribution (each region has a different coefficient). The results show that focusing just on the average estimates of compensating variations, as applied studies have done so far, masks useful local variation. More empirical studies are needed to assess the advantages and disadvantages of both econometric approaches and how their results compare across a wide range of conditions and samples.
