Similar Literature
20 similar records found.
1.
In this article, we analyze the omitted variable bias problem in the multinomial logistic probability model. Sufficient, as well as necessary, conditions under which the omitted variable will not create asymptotically biased coefficient estimates for the included variables are derived. If the omitted and the included explanatory variables are independent conditional on the response variable, the bias does not occur; bias does occur, however, when the omitted relevant variable is merely unconditionally independent of the included explanatory variable. The coefficient of the included variable plays an important role in the direction of the bias.
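A minimal simulation sketch (not from the article) of the binary-logit analogue of this result, using statsmodels: even when the omitted regressor x2 is drawn independently of the included x1, dropping it attenuates the estimated coefficient on x1. All variable names and coefficient values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                 # relevant, but independent of x1
eta = 1.0 * x1 + 1.0 * x2               # true linear index
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

X_full = sm.add_constant(np.column_stack([x1, x2]))
X_omit = sm.add_constant(x1)

b_full = sm.Logit(y, X_full).fit(disp=0).params
b_omit = sm.Logit(y, X_omit).fit(disp=0).params

print("coef on x1, full model :", round(b_full[1], 3))   # close to 1.0
print("coef on x1, x2 omitted :", round(b_omit[1], 3))   # attenuated toward zero
```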

2.
The purpose of this paper is twofold. First, we provide a discussion of the problems associated with endogeneity in empirical accounting research. We emphasize problems arising when endogeneity is caused by (1) unobservable firm-specific factors and (2) omitted variables, and discuss the merits and drawbacks of using panel data techniques to address these causes. Second, we investigate the magnitude of endogeneity bias in Ordinary Least Squares (OLS) regressions of cost-of-debt capital on firm disclosure policy. We document how including a set of variables that theory suggests are related to both cost-of-debt capital and disclosure, and using fixed effects estimation in a panel data set, reduces the endogeneity bias and produces consistent results. This analysis reveals that the effect of disclosure policy on cost-of-debt capital is 200% higher than what is found in OLS estimation. Finally, we provide direct evidence that disclosure is affected by unobservable firm-specific factors that are also correlated with cost of capital.
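A numpy-only sketch, not the authors' data or specification, of why the within (fixed effects) transformation helps here: an unobserved firm effect that drives both the regressor and the outcome biases pooled OLS, while demeaning by firm removes it. The names "disclosure" and "cost_of_debt" are just stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 500, 6
firm = np.repeat(np.arange(n_firms), n_years)
alpha = rng.normal(size=n_firms)                       # unobserved firm-specific factor

# disclosure is correlated with the firm effect, so pooled OLS is biased
disclosure = 0.8 * alpha[firm] + rng.normal(size=firm.size)
cost_of_debt = 1.0 - 0.5 * disclosure + alpha[firm] + rng.normal(size=firm.size)

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def within(v):
    """Demean a variable by firm (the fixed effects transformation)."""
    means = np.bincount(firm, weights=v) / np.bincount(firm)
    return v - means[firm]

b_pooled = slope(disclosure, cost_of_debt)              # badly biased toward zero
b_fe = slope(within(disclosure), within(cost_of_debt))  # close to the true -0.5
print(f"pooled OLS: {b_pooled:+.3f}   fixed effects: {b_fe:+.3f}   (true -0.5)")
```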

3.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it yields the estimates that would come from solving a bias-corrected estimation equation, making it a consistent estimator if regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations, but for estimates from the bias-corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The new approach yields two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.

4.
Most rational expectations models involve equations in which the dependent variable is a function of its lags and its expected future value. We investigate the asymptotic bias of generalized method of moments (GMM) and maximum likelihood (ML) estimators in such models under misspecification. We consider several misspecifications, focusing on the case of omitted dynamics in the dependent variable. In a stylized DGP, we derive analytically the asymptotic biases of these estimators. We establish that in many cases of interest the two estimators of the degree of forward-lookingness are asymptotically biased in opposite directions with respect to the true value of the parameter. We also propose a quasi-Hausman test of misspecification based on the difference between the GMM and ML estimators. Using Monte Carlo simulations, we show that the ordering and direction of the estimators still hold in a more realistic New Keynesian macroeconomic model. In this set-up, misspecification is in general found to be more harmful to GMM than to ML estimators.

5.
This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is not identified. Inferential problems caused by data errors have been conceptualized through convolution and mixture models. This paper introduces the direct misclassification approach. The approach is based on the observation that, in the presence of classification errors, the relation between the distribution of the 'true' but unobservable variable and its misclassified representation is given by a linear system of simultaneous equations, in which the coefficient matrix is the matrix of misclassification probabilities. Formalizing the problem in these terms allows one to incorporate any prior information into the analysis through sets of restrictions on the matrix of misclassification probabilities. Such information can have strong identifying power; the direct misclassification approach fully exploits it to derive identification regions for any real functional of the distribution of interest. A method for estimating the identification regions and constructing their confidence sets is given, and illustrated with an empirical analysis of the distribution of pension plan types using data from the Health and Retirement Study.
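A tiny numpy sketch (binary case only, illustrative numbers) of the linear system described above: the observed distribution equals the misclassification matrix applied to the true distribution, and bounding the misclassification probabilities traces out an identification region for the true prevalence.

```python
import numpy as np

p_obs = 0.30        # observed share classified as, say, plan type 1
lam = 0.10          # assumed upper bound on each misclassification probability

# grid over the misclassification probabilities allowed by the restriction
theta10 = np.linspace(0.0, lam, 101)    # P(observe 0 | true 1)
theta01 = np.linspace(0.0, lam, 101)    # P(observe 1 | true 0)
T10, T01 = np.meshgrid(theta10, theta01)

# invert the linear system  p_obs = (1 - theta10) * p_true + theta01 * (1 - p_true)
p_true = (p_obs - T01) / (1.0 - T10 - T01)
feasible = (p_true >= 0.0) & (p_true <= 1.0)

print("identification region for the true prevalence:",
      round(p_true[feasible].min(), 3), "to", round(p_true[feasible].max(), 3))
```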

6.
This paper introduces large-T bias-corrected estimators for nonlinear panel data models with both time-invariant and time-varying heterogeneity. These models include systems of equations with limited dependent variables and unobserved individual effects, and sample selection models with unobserved individual effects. Our two-step approach first estimates the reduced form by fixed effects procedures to obtain estimates of the time-varying heterogeneity underlying the endogeneity/selection bias. We then estimate the primary equation by fixed effects, including an appropriately constructed control variable from the reduced form estimates as an additional explanatory variable. The fixed effects approach in this second step captures the time-invariant heterogeneity, while the control variable accounts for the time-varying heterogeneity. Since either or both steps might employ nonlinear fixed effects procedures, it is necessary to bias-adjust the estimates because of the incidental parameters problem, a problem exacerbated by the two-step nature of the procedure. As these two-step approaches are not covered in the existing literature, we derive the appropriate correction, thereby extending the use of large-T bias adjustments to an important class of models. Simulation evidence indicates that our approach works well in finite samples, and an empirical example illustrates the applicability of our estimator.
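A hedged numpy sketch of the control-variable idea in its simplest linear, cross-sectional form (not the authors' nonlinear panel setting, and without the incidental-parameters correction): first-stage reduced-form residuals, added as a regressor in the primary equation, absorb the unobservable that makes x endogenous.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.normal(size=n)                      # exogenous driver of the endogenous regressor
u = rng.normal(size=n)                      # unobservable causing the endogeneity
x = 1.0 * z + 1.0 * u + rng.normal(size=n)
y = 2.0 * x + 1.5 * u + rng.normal(size=n)  # true effect of x is 2.0

ones = np.ones(n)
def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_naive = ols(np.column_stack([ones, x]), y)[1]     # biased: x is correlated with u

# step 1: reduced form for x; keep the residual as the control variable
v_hat = x - np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
# step 2: primary equation augmented with the control variable
b_cf = ols(np.column_stack([ones, x, v_hat]), y)[1]

print(f"naive OLS: {b_naive:.3f}   control function: {b_cf:.3f}   (true 2.0)")
```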

7.
We consider the problem of estimating and decomposing wage differentials in the presence of unobserved worker, firm, and match heterogeneity. Controlling for these unobservables corrects omitted variable bias in previous studies. It also allows us to measure the contribution of unmeasured characteristics of workers, firms, and worker-firm matches to observed wage differentials. An application to linked employer-employee data shows that decompositions of inter-industry earnings differentials and the male-female differential are misleading when unobserved heterogeneity is ignored.
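A compact numpy sketch (simulated data, not the linked employer-employee application) of the basic idea: estimating the wage equation with worker and firm fixed effects, here via explicit dummy variables, removes the omitted variable bias that plain OLS suffers when unobserved worker and firm quality are correlated with the observed covariate. Real applications need sparse solvers and a connected worker-firm mobility network.

```python
import numpy as np

rng = np.random.default_rng(4)
n_workers, n_firms, n_obs = 300, 30, 3000
worker = rng.integers(0, n_workers, size=n_obs)
firm = rng.integers(0, n_firms, size=n_obs)
theta = rng.normal(size=n_workers)              # unobserved worker effects
psi = rng.normal(size=n_firms)                  # unobserved firm effects

x = 0.5 * (theta[worker] + psi[firm]) + rng.normal(size=n_obs)   # observed covariate
wage = 1.0 * x + theta[worker] + psi[firm] + rng.normal(size=n_obs)

# OLS without the fixed effects: biased upward
b_ols = np.linalg.lstsq(np.column_stack([np.ones(n_obs), x]), wage, rcond=None)[0][1]

# two-way fixed effects via dummies (one firm dummy dropped to avoid collinearity)
D = np.column_stack([x, np.eye(n_workers)[worker], np.eye(n_firms)[firm][:, 1:]])
b_fe = np.linalg.lstsq(D, wage, rcond=None)[0][0]

print(f"OLS ignoring heterogeneity: {b_ols:.2f}   two-way FE: {b_fe:.2f}   (true 1.0)")
```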

8.
Social networks are increasingly being recognized as having an important influence on labour market outcomes, since they facilitate the exchange of job-related information. Access to information about job opportunities, as well as perceptions about the buoyancy of the labour market, depends critically on the social structures and the social networks to which labour market participants belong. In this paper, we examine the impact of information externalities generated through network membership on labour market status. Using Census data from South Africa, a country characterized by high levels of unemployment and worker discouragement, we adopt an econometric approach that aims to minimise the problems of omitted variable bias that have plagued many previous studies of the impact of social networks. Our results suggest that social networks may enhance employment probabilities by an additional 3–12%, and that failure to adequately control for omitted variables would lead to substantial over-estimates of the network coefficient. In contrast, the impact of social networks on reducing worker discouragement is much smaller, at between 1% and 2%.

9.
Many studies have argued against the strict form of the efficient markets hypothesis (EMH) by concluding that a lagged relationship exists between volume and the absolute value of a price change. These studies have denied a priori the possibility of a contemporaneous relationship. If a simultaneous relationship exists, then least squares with only lagged variables suffers from omitted variable bias, and least squares with contemporaneous variables may suffer from simultaneous equations bias. Investigating these possibilities, this study demonstrates that simultaneity exists and that previous findings of a lagged relationship between the variables are therefore due to specification error. System estimation techniques suggest that the price-volume relationship is recursive, with the absolute value of a price change causing volume contemporaneously, but not conversely.

10.
Conclusions: In this paper we have proposed new techniques for simplifying the estimation of disequilibrium models by avoiding constrained maximum likelihood methods (which cannot avoid the numerous theoretical and practical difficulties mentioned above), including an unrealistic assumption of independence between the errors of the demand and supply equations. In the proposed first stage, one estimates the relative magnitude of the residuals from the demand and supply equations nonparametrically, even though they suffer from omitted variables bias, because the coefficient of the omitted variable is known to be the same in both equations. The reason for using nonparametric methods is that they do not depend on parametric functional forms of the biased (bent inward) demand and supply equations. The first stage compares the absolute values of residuals from conditional expectations in order to classify the data points as belonging to the demand or the supply curve. We estimate the economically meaningful scale elasticity and distribution parameters at the second stage from the classified (separated) data. We extend nonparametric kernel estimation to the r = 4 case to improve the speed of convergence, as predicted by Singh's [1981] theory. In the first stage, the r = 4 results give generally improved R² and |t| values in our study of the Dutch data used by many authors concerned with the estimation of floorspace productivity. We find that one can obtain reasonable results with our approximate but simpler two-stage methods. Detailed results are reported for four types of Dutch retail establishments. More research is needed to gain further experience and to extend the methodology to other disequilibrium models and other productivity estimation problems.

11.
This paper presents a meta-analysis, accounting for publication bias, of prospective cohort (longitudinal) studies of alcohol marketing and adolescent drinking. The paper provides a summary of 12 primary studies of the marketing-drinking relationship. Each primary study surveyed a sample of youth to determine baseline drinking status and marketing exposure, and re-surveyed the youth to determine subsequent drinking outcomes. Logistic analyses provide estimates of the odds ratio for effects of baseline marketing variables on adolescent drinking at follow-up. Using meta-regression analysis, two samples are examined in this paper: 23 effect-size estimates for drinking onset (initiation) and 40 estimates for other drinking behaviours (frequency, amount, bingeing). Marketing variables include ads in mass media, promotion portrayals, brand recognition and subjective evaluations by survey respondents. Publication bias is assessed using funnel plots that account for 'missing' studies, bivariate regressions, and multivariate meta-regressions that account for primary study heterogeneity, heteroskedasticity, data dependencies, publication bias and truncated samples. The empirical results are consistent with publication bias, omitted variable bias in some studies, and the lack of a genuine effect, especially for mass media. The paper also discusses 'dissemination bias' in the use of research results by primary investigators and health policy interest groups.
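A hedged numpy sketch of one common bivariate funnel-asymmetry regression (a FAT-PET style test, which may differ in detail from the specifications used in the paper): each study's effect estimate is regressed on its standard error with inverse-variance weights, so a large slope signals funnel asymmetry from selective publication, while the intercept estimates the effect purged of that selection. The data below are simulated with no genuine effect and one-sided publication selection.

```python
import numpy as np

rng = np.random.default_rng(5)
k = 40
se = rng.uniform(0.05, 0.40, size=k)            # standard errors of the primary estimates
effect = rng.normal(loc=0.0, scale=se)          # true effect is zero

# crude publication selection: keep redrawing until the estimate is "significantly positive"
for i in range(k):
    while effect[i] / se[i] < 1.96:
        effect[i] = rng.normal(loc=0.0, scale=se[i])

# FAT-PET: weighted least squares of the effect on its standard error
w = np.sqrt(1.0 / se**2)
X = np.column_stack([np.ones(k), se])
beta = np.linalg.lstsq(X * w[:, None], effect * w, rcond=None)[0]

print(f"slope on SE (funnel asymmetry)  : {beta[1]:+.3f}")   # far from zero
print(f"intercept (selection-corrected) : {beta[0]:+.3f}")   # near the true zero
```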

12.
We discuss empirical challenges in multicountry studies of the effects of firm-level corporate governance on firm value, focusing on emerging markets. We assess the severe data, “construct validity”, and endogeneity issues in these studies, propose methods to respond to those issues, and apply those methods to a study of five major emerging markets—Brazil, India, Korea, Russia, and Turkey. We develop unique time-series datasets on governance in each country. We address construct validity by building country-specific indices which reflect local norms and institutions. These similar-but-not-identical indices predict firm market value in each country, and when pooled across countries, in firm fixed-effects (FE) and random-effects (RE) regressions. In contrast, a “common index”, which uses the same elements in each country, has no predictive power in FE regressions. For the country-specific and pooled indices, FE and RE coefficients on governance are generally lower than in pooled OLS regressions, and coefficients with extensive covariates are generally lower than with limited covariates. These results confirm the value of using FE or RE with extensive covariates to reduce omitted variable bias. We develop lower bounds on our estimates which reflect potential remaining omitted variable bias.

13.
A common strategy within the framework of regression models is the selection of variables with possible predictive value, which are incorporated in the regression model. Two recently proposed methods, Breiman's Garotte (Breiman, 1995) and Tibshirani's Lasso (Tibshirani, 1996), try to combine variable selection and shrinkage. We compare these with pure variable selection and shrinkage procedures. We consider the backward elimination procedure as a typical variable selection procedure and, as an example of a shrinkage procedure, an approach of Van Houwelingen and Le Cessie (1990). Additionally, an extension of van Houwelingen and le Cessie's approach proposed by Sauerbrei (1999) is considered. The ordinary least squares method is used as a reference.
With the help of a simulation study, we compare these approaches with respect to the distribution of the complexity of the selected model, the distribution of the shrinkage factors, selection bias, the bias and variance of the effect estimates, and the average prediction error.
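A short illustrative sketch (numpy plus scikit-learn; not the simulation design of the paper) contrasting the three flavours mentioned above: plain OLS, the Lasso, which selects and shrinks simultaneously, and a simple backward elimination rule. The penalty alpha=0.1 and the |t| > 2 retention rule are arbitrary choices for the demonstration; in practice they would be tuned.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(6)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 1.0, 0.5] + [0.0] * (p - 3))   # only three relevant predictors
y = X @ beta_true + rng.normal(size=n)

b_ols = LinearRegression().fit(X, y).coef_      # reference: no selection, no shrinkage
b_lasso = Lasso(alpha=0.1).fit(X, y).coef_      # selection and shrinkage in one step

def backward_elimination(X, y, t_crit=2.0):
    """Drop the predictor with the smallest |t| until every retained |t| exceeds t_crit."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        b = np.linalg.lstsq(Xk, y, rcond=None)[0]
        resid = y - Xk @ b
        sigma2 = resid @ resid / (len(y) - Xk.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xk.T @ Xk)))[1:]
        t = np.abs(b[1:] / se)
        if t.min() >= t_crit:
            break
        keep.pop(int(t.argmin()))
    coef = np.zeros(X.shape[1])
    if keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])
        coef[keep] = np.linalg.lstsq(Xk, y, rcond=None)[0][1:]
    return coef

print("OLS     :", np.round(b_ols, 2))
print("Lasso   :", np.round(b_lasso, 2))
print("backward:", np.round(backward_elimination(X, y), 2))
```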

14.
This paper investigates the determinants of credit spread changes in euro-denominated bonds. We adopt a factor model framework, inspired by the credit risk structural approach, as credit spread changes can be easily viewed as an excess return on corporate bonds over Treasury bonds. We try to assess the relative importance of market and idiosyncratic factors as an explanation of movements in credit spreads. We adopt a heterogeneous panel with a multifactor error model and propose a two-step estimation procedure, which yields consistent estimates of unobserved factors. The analysis is carried out with a panel of monthly redemption yields on a set of corporate bonds for a time span of 3 years. Our results suggest that the euro corporate market is driven by observable and unobservable factors. The unobservable factors are identified through a consistent estimation of individual and common observable effects. The empirical results suggest that an unobserved common factor has a significant role in explaining the systematic changes in credit spreads. However, in contrast to evidence regarding US credit spread changes, it cannot be identified as a market factor.

15.
A recent article by Krause (2012; Qual Quant, doi:10.1007/s11135-012-9712-5) maintains that: (1) it is untenable to characterize the error term in multiple regression as simply an extraneous random influence on the outcome variable, because any amount of error implies the possibility of one or more omitted, relevant explanatory variables; and (2) the only way to guarantee the prevention of omitted variable bias, and thereby justify causal interpretations of estimated coefficients, is to construct fully specified models that completely eliminate the error term. The present commentary argues that such an extreme position is impractical and unnecessary, given the availability of specialized techniques for dealing with the primary statistical consequence of omitted variables, namely endogeneity, or the existence of correlations between included explanatory variables and the error term. In particular, the current article discusses the method of instrumental variable estimation, which can resolve the endogeneity problem in causal models where one or more relevant explanatory variables are excluded, thus allowing for accurate estimation of effects. An overview of recent methodological resources and software for conducting instrumental variables estimation is provided, with the aim of helping to place this crucial technique squarely in the statistical toolkit of applied researchers.
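A minimal numpy sketch of two-stage least squares with a single instrument, the technique the commentary advocates. Simulated data with an omitted confounder; variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
omitted = rng.normal(size=n)                        # relevant but unobserved confounder
z = rng.normal(size=n)                              # instrument: shifts x, excluded from y
x = 1.0 * z + 1.0 * omitted + rng.normal(size=n)
y = 1.0 * x + 1.0 * omitted + rng.normal(size=n)    # true causal effect of x is 1.0

ones = np.ones(n)
def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(np.column_stack([ones, x]), y)[1]       # biased by the omitted confounder

# 2SLS: project x on the instrument, then regress y on the fitted values
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
b_2sls = ols(np.column_stack([ones, x_hat]), y)[1]

print(f"OLS: {b_ols:.3f}   2SLS: {b_2sls:.3f}   (true 1.0)")
```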

16.
Recent literature on panel data emphasizes the importance of accounting for time-varying unobservable individual effects, which may stem from either omitted individual characteristics or macro-level shocks that affect each individual unit differently. In this paper, we propose a simple specification test of the null hypothesis that the individual effects are time-invariant against the alternative that they are time-varying. Our test is an application of the Hausman (1978) testing procedure and can be used for any generalized linear model for panel data that admits a sufficient statistic for the individual effect. This is a wide class of models which includes the Gaussian linear model and a variety of nonlinear models typically employed for discrete or categorical outcomes. The basic idea of the test is to compare two alternative estimators of the model parameters based on two different formulations of the conditional maximum likelihood method. Our approach does not require assumptions on the distribution of unobserved heterogeneity, nor does it require the latter to be independent of the regressors in the model. We investigate the finite sample properties of the test through a set of Monte Carlo experiments. Our results show that the test performs well, with small size distortions and good power properties. We use a health economics example based on data from the Health and Retirement Study to illustrate the proposed test.
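The statistic has the usual Hausman form. A hedged helper (made-up inputs, not the conditional maximum likelihood estimators derived in the paper) showing the generic computation: the contrast between an estimator that is efficient under the null and one that stays consistent under the alternative, weighted by the difference of their covariance matrices, is referred to a chi-squared distribution.

```python
import numpy as np
from scipy.stats import chi2

def hausman(b_eff, V_eff, b_rob, V_rob):
    """Hausman statistic: b_eff efficient under H0, b_rob consistent under H0 and H1."""
    d = np.asarray(b_rob) - np.asarray(b_eff)
    Vd = np.asarray(V_rob) - np.asarray(V_eff)      # covariance of the contrast under H0
    stat = float(d @ np.linalg.solve(Vd, d))
    return stat, d.size, 1.0 - chi2.cdf(stat, d.size)

# purely illustrative numbers
b_null = np.array([0.52, -0.31])                                 # imposes time-invariant effects
V_null = np.array([[0.0010, 0.0002], [0.0002, 0.0015]])
b_alt = np.array([0.61, -0.35])                                  # robust to time-varying effects
V_alt = np.array([[0.0030, 0.0004], [0.0004, 0.0042]])

stat, df, pval = hausman(b_null, V_null, b_alt, V_alt)
print(f"Hausman statistic = {stat:.2f}, df = {df}, p-value = {pval:.3f}")
```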

17.
Market liquidity as dynamic factors
We use recent results on the Generalized Dynamic Factor Model (GDFM) with block structure to provide a data-driven definition of unobservable market liquidity and to assess the complementarity of two observed liquidity measures: daily close relative spreads and daily traded volumes for a sample of 426 S&P500 constituents recorded over the years 2004-2006. The advantage of defining market liquidity as a dynamic factor is that, contrary to other definitions, it tackles time dependence and commonness at the same time, without making any restrictive assumptions. Both relative spread and volume in the dataset under study appear to be driven by the same one-dimensional common shocks, which therefore naturally qualify as the unobservable market liquidity shocks.

18.
A major challenge in the analysis of micro-level spatial interaction is to distinguish actual interactions from the effects of spatially correlated omitted variables. We propose extending the simple spatially lagged explanatory (SLX) model to include two spatial weighting matrices at different spatial scales to reduce omitted-variable bias. The approach is suitable when actual interaction takes place on a smaller local level, while the omitted variables are spatially correlated at a larger regional level and correlated with the included characteristics. We provide an empirical motivation and use Monte Carlo simulation to illustrate the bias-reduction effects in certain settings.
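A hedged numpy sketch of the two-scale SLX idea described above: the outcome is regressed on x, on a small-scale spatial lag of x capturing the actual local interaction, and on a larger-scale spatial lag that proxies for regionally correlated omitted factors. Weights, scales, and the data-generating process are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
coords = rng.uniform(0, 10, size=(n, 2))         # locations of the micro units
x = rng.normal(size=n)

def knn_weights(coords, k):
    """Row-normalized k-nearest-neighbour spatial weight matrix."""
    m = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    W = np.zeros((m, m))
    rows = np.repeat(np.arange(m), k)
    W[rows, np.argsort(d, axis=1)[:, :k].ravel()] = 1.0 / k
    return W

W_local = knn_weights(coords, k=5)     # small-scale (local) interaction
W_region = knn_weights(coords, k=25)   # larger-scale regional correlation

# DGP: true local interaction 0.5 plus a regionally correlated component
y = 1.0 * x + 0.5 * (W_local @ x) + 2.0 * (W_region @ x) + rng.normal(size=n)

X_short = np.column_stack([np.ones(n), x, W_local @ x])               # regional term omitted
X_two = np.column_stack([np.ones(n), x, W_local @ x, W_region @ x])   # two-scale SLX

b_short = np.linalg.lstsq(X_short, y, rcond=None)[0]
b_two = np.linalg.lstsq(X_two, y, rcond=None)[0]
print("local coefficient, regional scale omitted:", round(b_short[2], 2))
print("local coefficient, two-scale SLX         :", round(b_two[2], 2), "(true 0.5)")
```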

19.
The implicit market approach of Rosen is combined with a dynamic model of producer behavior to determine the conditions under which reducing the structure tax rate leads housing quality to rise. Specifically, it is found that, in the stationary state, quality will rise if capital is a noninferior input. Since it is argued that quality cannot be accurately measured, this conclusion is then tested with an unobservable variables model where quality is the unobservable. A comparison between the responses of quality and housing quantity to reductions in the tax rate is made.

20.
This article measures economic returns to research investment in Chinese agriculture using the production function approach. A stock-of-knowledge variable constructed from past research investment is included directly in the production function as an explanatory variable. Improved rural infrastructure, irrigation, and education are also included as explanatory variables to avoid upward bias in the estimates of returns to agricultural research. A two-way variable coefficients technique is used in the estimation to reduce estimation biases due to the remaining measurement and omitted variables problems. Sensitivity analyses are conducted to test the effects of various lag structures on the return estimates. The results show that rates of return to research investment in Chinese agriculture are high, ranging from 36% to 90% in 1997, and the rates are increasing over time.
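A small numpy sketch of how a stock-of-knowledge regressor can be built from past research spending with a distributed lag. The inverted-V lag profile, the lag length, and the spending series are illustrative placeholders, not the structures estimated in the article; the sensitivity analysis would simply repeat the construction for alternative profiles.

```python
import numpy as np

research = 100.0 * 1.05 ** np.arange(28)      # illustrative annual research spending series

def knowledge_stock(research, lag_weights):
    """Distributed-lag stock K_t = sum_k w_k * R_{t-k}; pre-sample years padded with R_0."""
    L = len(lag_weights)
    padded = np.concatenate([np.full(L - 1, research[0]), research])
    rev = np.asarray(lag_weights)[::-1]        # so the most recent year gets weight w_0
    return np.array([padded[t:t + L] @ rev for t in range(research.size)])

# illustrative 7-year inverted-V lag profile, weights summing to one
w = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05])
K = knowledge_stock(research, w)
print(np.round(K[:5], 1))                      # the stock series then enters the production function
```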
