Similar Documents
20 similar documents found (search time: 31 ms)
1.
Although each statistical unit on which measurements are taken is unique, typically there is not enough information available to account fully for its uniqueness. Therefore, heterogeneity among units has to be limited by structural assumptions. One classical approach is to use random effects models, which assume that heterogeneity can be described by distributional assumptions. However, inference may depend on the assumed mixing distribution, and it is assumed that the random effects and the observed covariates are independent. An alternative considered here is fixed effects models, which let each unit have its own parameter. They are quite flexible but suffer from a large number of parameters. The structural assumption made here is that there are clusters of units that share the same effects. It is shown how clusters can be identified by tailored regularised estimators. Moreover, it is shown that the regularised estimates compete well with estimates for the random effects model, even if the latter is the data-generating model. They dominate if clusters are present.
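The cluster-of-units idea can be made concrete with a toy sketch. All names and numbers below are illustrative, and a plain one-dimensional k-means stands in for the paper's tailored regularised estimators: estimate each unit's fixed effect by its time average, then fuse units whose estimates are close.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: N units in 2 latent clusters with effects -1 and +1,
# T noisy observations per unit.
N, T = 60, 20
true_cluster = rng.integers(0, 2, size=N)
effects = np.where(true_cluster == 0, -1.0, 1.0)
y = effects[:, None] + rng.normal(0.0, 0.5, size=(N, T))

# Step 1: per-unit fixed-effect estimates (unit means).
unit_means = y.mean(axis=1)

# Step 2: fuse units into shared-effect clusters with 1-D k-means.
centers = np.array([unit_means.min(), unit_means.max()])
for _ in range(25):
    labels = np.argmin(np.abs(unit_means[:, None] - centers[None, :]), axis=1)
    centers = np.array([unit_means[labels == k].mean() for k in (0, 1)])

# The fused centers recover effects close to the true values -1 and +1.
print(np.sort(centers).round(2))
```

With well-separated clusters and a moderate time dimension, the per-unit noise averages out and the fused centers land near the true shared effects.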

2.
This paper introduces large-T bias-corrected estimators for nonlinear panel data models with both time invariant and time varying heterogeneity. These models include systems of equations with limited dependent variables and unobserved individual effects, and sample selection models with unobserved individual effects. Our two-step approach first estimates the reduced form by fixed effects procedures to obtain estimates of the time varying heterogeneity underlying the endogeneity/selection bias. We then estimate the primary equation by fixed effects, including an appropriately constructed control variable from the reduced form estimates as an additional explanatory variable. The fixed effects approach in this second step captures the time invariant heterogeneity, while the control variable accounts for the time varying heterogeneity. Since either or both steps might employ nonlinear fixed effects procedures, it is necessary to bias-adjust the estimates due to the incidental parameters problem. This problem is exacerbated by the two-step nature of the procedure. As these two-step approaches are not covered in the existing literature, we derive the appropriate correction, thereby extending the use of large-T bias adjustments to an important class of models. Simulation evidence indicates that our approach works well in finite samples, and an empirical example illustrates the applicability of our estimator.

3.
This survey reviews the existing literature on the most relevant Bayesian inference methods for univariate and multivariate GARCH models. The advantages and drawbacks of each procedure are outlined, as well as the advantages of the Bayesian approach over classical procedures. The paper places particular emphasis on recent Bayesian non-parametric approaches for GARCH models that avoid imposing arbitrary parametric distributional assumptions. These novel approaches implicitly assume an infinite mixture of Gaussian distributions on the standardized returns, which has been shown to be more flexible and to better describe the uncertainty about future volatilities. Finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non-parametric approach.
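As background for the survey's setting, here is a minimal sketch of the Gaussian GARCH(1,1) recursion and its log-likelihood, the building block that both classical and Bayesian procedures start from (the non-parametric approaches discussed above relax the Gaussian assumption). The simulation and parameter values are illustrative only.

```python
import numpy as np

def garch11_loglik(params, r):
    """Gaussian log-likelihood of a GARCH(1,1): h_t = w + a*r_{t-1}^2 + b*h_{t-1}."""
    w, a, b = params
    h = np.empty_like(r)
    h[0] = r.var()                      # common initialisation choice
    for t in range(1, len(r)):
        h[t] = w + a * r[t - 1] ** 2 + b * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + r ** 2 / h)

# Simulate returns from a GARCH(1,1) and check the likelihood prefers
# the true parameters to a constant-variance (homoskedastic) model.
rng = np.random.default_rng(1)
w, a, b = 0.1, 0.1, 0.8
T = 2000
r = np.empty(T)
h = w / (1 - a - b)                     # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(h) * rng.normal()
    h = w + a * r[t] ** 2 + b * h

print(garch11_loglik((w, a, b), r) > garch11_loglik((r.var(), 0.0, 0.0), r))
```

Setting a = b = 0 collapses the recursion to a constant variance, which is why the second likelihood call serves as the homoskedastic benchmark.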

4.
In this paper we consider estimation of nonlinear panel data models that include multiple individual fixed effects. Estimation of these models is complicated both by the difficulty of estimating models with possibly thousands of coefficients and by the incidental parameters problem; that is, noisy estimates of the fixed effects when the time dimension is short contaminate the estimates of the common parameters due to the nonlinearity of the problem. We propose a simple variation of existing bias-corrected estimators, which can exploit the additivity of the effects for numerical optimization. We exhibit the performance of the estimators in simulations.

5.
The mixed logit model is widely used in applied econometrics. Researchers typically rely on the free choice between the classical and Bayesian estimation approach. However, empirical evidence of the similarity of their parameter estimates is sparse. The presumed similarity is mainly based on one empirical study that analyzes a single dataset (Huber J, Train KE. 2001. On the similarity of classical and Bayesian estimates of individual mean partworths. Marketing Letters 12 (3): 259–269). Our replication study offers a generalization of their results by comparing classical and Bayesian parameter estimates from six additional datasets and specifically for panel versus cross-sectional data. In general, our results suggest that the two methods provide similar results, with less similarity for cross-sectional data than for panel data. Copyright © 2016 John Wiley & Sons, Ltd.

6.
This paper uses free-knot and fixed-knot regression splines in a Bayesian context to develop methods for the nonparametric estimation of functions subject to shape constraints in models with log-concave likelihood functions. The shape constraints we consider include monotonicity, convexity and functions with a single minimum. A computationally efficient MCMC sampling algorithm is developed that converges faster than previous methods for non-Gaussian models. Simulation results indicate the monotonically constrained function estimates have good small sample properties relative to (i) unconstrained function estimates, and (ii) function estimates obtained from other constrained estimation methods when such methods exist. Also, asymptotic results show the methodology provides consistent estimates for a large class of smooth functions. Two detailed illustrations exemplify the ideas.
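To make the monotonicity constraint concrete, the classical least-squares monotone fit (pool-adjacent-violators) is sketched below. This is a frequentist stand-in for intuition only, not the Bayesian free-knot spline method described above.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    vals, wts = [], []
    for v in map(float, y):
        vals.append(v)
        wts.append(1)
        # Pool backwards while the monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wts.append(w1 + w2)
    return np.repeat(vals, wts)

# The violating pair (3, 2) is pooled to its average 2.5.
print(pava([1.0, 3.0, 2.0, 4.0]))  # → [1.  2.5 2.5 4. ]
```

Each pooling step replaces a decreasing pair by its weighted mean, which is exactly the least-squares projection onto the monotone cone for that pair.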

7.
This paper reports empirical evidence on the sensitivity of unemployment duration regression estimates to distributional assumptions and to time aggregation. The results indicate that parameter estimates are robust to distributional assumptions, while estimates of duration dependence are not. Time aggregation does not seem to have drastic effects on the estimates in a simple parametric model like the Weibull, but can produce dramatic changes in the more complicated extended generalized gamma model. Semiparametric models for grouped data produce stable estimates, and perform much better than continuous-time models in terms of significance at high levels of time aggregation.

8.
Covariate Measurement Error in Quadratic Regression
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
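The curvature-flattening effect of classical measurement error is easy to reproduce in a simulation (illustrative values only; under a normal covariate with reliability λ, the quadratic coefficient is attenuated by roughly λ²):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(0.0, 1.0, n)              # true predictor
y = 1.0 + 2.0 * x + x**2 + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, 1.0, n)          # surrogate: classical error, reliability 0.5

beta_true = np.polyfit(x, y, 2)          # quadratic fit on the true x
beta_err = np.polyfit(w, y, 2)           # quadratic fit on the mismeasured w

# The leading (x^2) coefficient shrinks toward zero under measurement error.
print(round(beta_true[0], 2), round(beta_err[0], 2))
```

With reliability 0.5, the fitted curvature on the surrogate is roughly a quarter of the true value, so the estimated parabola is visibly flatter than the truth.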

9.
For a large heterogeneous group of patients, we analyse probabilities of hospital admission and distributional properties of lengths of hospital stay conditional on individual determinants. Bayesian structured additive regression models for zero-inflated and overdispersed count data are employed. In addition, the framework is extended towards hurdle specifications, providing an alternative approach to cover particularly large frequencies of zeros in count data. As a specific merit, the model class considered embeds linear and nonlinear effects of covariates on all distribution parameters. Linear effects indicate that the quantity and severity of prior illness are positively correlated with the risk of hospital admission, while medical prevention (in the form of general practice visits) and rehabilitation reduce the expected length of future hospital stays. Flexible nonlinear response patterns are diagnosed for age and an indicator of a patient's socioeconomic status. We find that social deprivation exhibits a positive impact on the risk of admission and a negative effect on the expected length of future hospital stays of admitted patients. Copyright © 2015 John Wiley & Sons, Ltd.

10.
The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately substantially improves the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the way to using large-scale reduced rank models for empirical analysis. Copyright © 2010 John Wiley & Sons, Ltd.
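Classical reduced rank regression, the first of the three models, can be sketched in a few lines: estimate OLS, then project the coefficient matrix onto the leading singular directions of the fitted values. This is the textbook version under an identity weight matrix, not the paper's two-step or Bayesian variants; all dimensions below are illustrative.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical RRR: OLS, then projection of the fits onto the top singular directions."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]        # projection onto leading response directions
    return B_ols @ P

# Simulate a system whose true coefficient matrix has rank 2.
rng = np.random.default_rng(3)
n, p, q, r = 500, 8, 6, 2
A = rng.normal(size=(p, r))
C = rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ (A @ C) + 0.1 * rng.normal(size=(n, q))

B_hat = reduced_rank_regression(X, Y, rank=r)
print(np.linalg.matrix_rank(B_hat), np.abs(B_hat - A @ C).max() < 0.1)
```

The rank restriction pools information across the q equations, which is the same mechanism the paper combines with shrinkage for large systems.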

11.
Many papers have regressed non-parametric estimates of productive efficiency on environmental variables in two-stage procedures to account for exogenous factors that might affect firms’ performance. None of these have described a coherent data-generating process (DGP). Moreover, conventional approaches to inference employed in these papers are invalid due to complicated, unknown serial correlation among the estimated efficiencies. We first describe a sensible DGP for such models. We propose single and double bootstrap procedures; both permit valid inference, and the double bootstrap procedure improves statistical efficiency in the second-stage regression. We examine the statistical performance of our estimators using Monte Carlo experiments.
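The flavour of bootstrap inference in a second-stage regression can be sketched generically. This is a deliberate simplification: the efficiency scores below are synthetic stand-ins, and the authors' single and double bootstrap procedures, which resample DEA estimates and use a truncated regression, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
z = rng.normal(size=n)                                 # environmental variable
# Synthetic stand-in for estimated efficiency scores in (0, 1].
eff = np.clip(0.7 + 0.1 * z + rng.normal(0.0, 0.1, n), 0.0, 1.0)

def slope(z, eff):
    zc = z - z.mean()
    return (zc @ (eff - eff.mean())) / (zc @ zc)

b_hat = slope(z, eff)

# Pairs bootstrap: resample (z_i, eff_i) jointly to respect their dependence.
boot = np.array([slope(z[idx], eff[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(999))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo < b_hat < hi)
```

Resampling observation pairs, rather than residuals, is one crude way to accommodate unknown dependence in the second stage; the paper's bootstraps are built for the specific DGP of estimated efficiencies.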

12.
Sequential maximum likelihood and GMM estimators of distributional parameters obtained from the standardised innovations of multivariate conditionally heteroskedastic dynamic regression models evaluated at Gaussian PML estimators preserve the consistency of mean and variance parameters while allowing for realistic distributions. We assess their efficiency, and obtain moment conditions leading to sequential estimators as efficient as their joint ML counterparts. We also obtain standard errors for VaR and CoVaR, and analyse the effects on these measures of distributional misspecification. Finally, we illustrate the small sample performance of these procedures through simulations and apply them to analyse the risk of large eurozone banks.

13.
Examining differences across school district boundaries rather than school attendance zone boundaries has several advantages. These advantages include applicability when attendance zones are unavailable or less relevant to educational outcomes, as arises with within-district school choice, and the ability to examine the effect of factors such as school spending or property taxes that do not vary within districts. However, school district boundaries have often been in place for many years, allowing households to sort based on school quality and potentially creating distinct neighborhoods on either side of boundaries. We estimate models of housing prices using repeated cross-sections of housing transactions near school district boundaries in Connecticut. These models exploit changes over time to control for across-boundary differences in neighborhood quality. We find significant effects of test scores on property values, but those effects are notably smaller than both OLS and traditional boundary fixed effects estimates.

14.
This paper studies an alternative quasi-likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi-likelihood (QL), called a limited information quasi-likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. On the other hand, our study builds a bridge between the classical and Bayesian approaches to statistical inference under possible model misspecification. We establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL, we develop inference methods that are useful for practical applications under possible model misspecification. In particular, we develop Bayesian counterparts of the classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we develop a Bayesian method for analyzing model specification based on an LI-QL.

15.
This article investigates whether a retailer’s store brand supply source impacts vertical pricing and supply channel profitability. Using chain-level retail scanner data, a random coefficients logit demand model is estimated employing a Bayesian estimation approach. Supply models are specified conditional on demand parameter estimates. Bayesian decision theory is applied to select the best fitting pricing model. Results indicate that a vertically integrated retailer engages in linear pricing for brand manufacturers’ products while competing retailers make nonlinear pricing contracts with brand manufacturers for branded products and store brands. A simulated vertical divestiture based on real world events provides evidence for improved channel efficiency.

16.
Model specification for state space models is a difficult task, as one has to decide which components to include in the model and whether these components are fixed or time-varying. To this end, a new model space MCMC method is developed in this paper. It extends the Bayesian variable selection approach, usually applied to variable selection in regression models, to state space models. For non-Gaussian state space models, stochastic model search MCMC makes use of auxiliary mixture sampling. We focus on structural time series models including seasonal components, trend or intervention. The method is applied to various well-known time series.

17.
We introduce a class of instrumental quantile regression methods for heterogeneous treatment effect models and simultaneous equations models with nonadditive errors and offer computable methods for estimation and inference. These methods can be used to evaluate the impact of endogenous variables or treatments on the entire distribution of outcomes. We describe an estimator of the instrumental variable quantile regression process and the set of inference procedures derived from it. We focus our discussion of inference on tests of distributional equality, constancy of effects, conditional dominance, and exogeneity. We apply the procedures to characterize the returns to schooling in the U.S.

18.
This paper describes Bayesian techniques for analysing the effects of aggregate shocks on macroeconomic time series. Rather than calculate point estimates of the response of a time series to an aggregate shock, we calculate the whole probability density function of the response and use Monte Carlo or Gibbs sampling techniques to evaluate its properties. The proposed techniques impose identification restrictions in a way that includes the uncertainty in these restrictions, and thus are an improvement over traditional approaches that typically use least-squares techniques supplemented by bootstrapping. We apply these techniques in the context of two different models. A key finding is that measures of uncertainty, such as posterior standard deviations, are much larger than their classical counterparts.
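The Monte Carlo idea (draw parameters from their posterior and push every draw through the impulse response) can be sketched for a univariate AR(1) with a flat prior and known error variance. This is a deliberate simplification of the paper's multivariate, identified setting; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an AR(1): y_t = phi * y_{t-1} + e_t.
T, phi_true, sigma = 200, 0.8, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + sigma * rng.normal()

x, ylead = y[:-1], y[1:]
phi_hat = (x @ ylead) / (x @ x)         # OLS = posterior mean under a flat prior
se = sigma / np.sqrt(x @ x)             # posterior standard deviation of phi

# Posterior of the 8-step-ahead impulse response, phi^8, by Monte Carlo.
phi_draws = rng.normal(phi_hat, se, size=20_000)
irf8 = phi_draws ** 8

print(round(float(np.mean(irf8)), 2), round(float(np.std(irf8)), 3))
```

Because the response phi^8 is a nonlinear function of phi, the posterior standard deviation computed from the draws differs from a delta-method approximation built around the point estimate, which is the kind of discrepancy the paper documents.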

19.
In the context of ranking infinite utility streams, the impartiality axiom of finite length anonymity requires the equal ranking of any two utility streams that are equal up to a finite length permutation (Fleurbaey and Michel, 2003). We first characterize any finite length permutation as a composition of a fixed step permutation and an “almost” fixed step permutation. We then show that if a binary relation satisfies finite length anonymity, then it violates all the distributional axioms that are based on a segment-wise comparison. Examples of those axioms include the weak Pareto principle and the weak Pigou-Dalton principle.

20.
We consider how to estimate the trend and cycle of a time series, such as real gross domestic product, given a large information set. Our approach makes use of the Beveridge–Nelson decomposition based on a vector autoregression, but with two practical considerations. First, we show how to determine which conditioning variables span the relevant information by directly accounting for the Beveridge–Nelson trend and cycle in terms of contributions from different forecast errors. Second, we employ Bayesian shrinkage to avoid overfitting in finite samples when estimating models that are large enough to include many possible sources of information. An empirical application with up to 138 variables covering various aspects of the US economy reveals that the unemployment rate, inflation, and, to a lesser extent, housing starts, aggregate consumption, stock prices, real money balances, and the federal funds rate contain relevant information beyond that in output growth for estimating the output gap, with estimates largely robust to substituting some of these variables or incorporating additional variables.
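For the special case in which output growth follows an AR(1), g_t = mu + phi*(g_{t-1} - mu) + e_t, the Beveridge–Nelson cycle has the closed form c_t = -phi/(1 - phi) * (g_t - mu); the paper's VAR-based decomposition generalises this one-liner to a large conditioning set.

```python
import numpy as np

def bn_cycle_ar1(g, phi, mu):
    """Beveridge-Nelson cycle when growth g_t follows an AR(1) around mean mu."""
    return -phi / (1.0 - phi) * (np.asarray(g) - mu)

# Growth 1pp above its mean with phi = 0.5 implies output 1pp below trend:
# expected future growth stays above average, so the trend sits above current output.
print(bn_cycle_ar1([3.0], phi=0.5, mu=2.0))  # → [-1.]
```

The sign convention follows the usual BN logic: above-average growth today forecasts above-average growth ahead, so the long-run (trend) level exceeds the current level and the cycle is negative.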


Copyright©北京勤云科技发展有限公司  京ICP备09084417号