Similar Documents
20 similar documents retrieved (search time: 765 ms)
1.
Under a quantile restriction, randomly censored regression models can be written in terms of conditional moment inequalities. We study the identified features of these moment inequalities with respect to the regression parameters, where we allow for covariate-dependent censoring, endogenous censoring and endogenous regressors. These inequalities restrict the parameters to a set. We show regular point identification can be achieved under a set of interpretable sufficient conditions. We then provide a simple way to convert conditional moment inequalities into unconditional ones while preserving the informational content. Our method obviates the need for nonparametric estimation, which would require the selection of smoothing parameters and trimming procedures. Without the point identification conditions, our objective function can be used to do inference on the partially identified parameter. Maintaining the point identification conditions, we propose a quantile minimum distance estimator which converges at the parametric rate to the parameter vector of interest, and has an asymptotically normal distribution. A small-scale simulation study and an application using drug relapse data demonstrate satisfactory finite sample performance.

2.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first‐order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual‐specific effects. We consider different approaches taking into account the non‐negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non‐negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log‐likelihood function. We illustrate these issues modelling US state level unemployment dynamics.
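Because the first-order condition is cubic in the autoregressive parameter, the candidate estimates can be read off from the roots of a cubic polynomial; the estimate is then the real root in the stationary region that maximizes the likelihood. A minimal numpy sketch, with purely illustrative coefficients (not derived from any actual panel):

```python
import numpy as np

# Hypothetical cubic first-order condition a3*rho^3 + a2*rho^2 + a1*rho + a0 = 0.
# The coefficients below are illustrative, not derived from a real panel data set.
a3, a2, a1, a0 = 1.0, -1.5, 0.2, 0.1
roots = np.roots([a3, a2, a1, a0])

# Keep only real roots inside the stationary region (-1, 1) as candidate estimates.
real_roots = roots[np.abs(roots.imag) < 1e-8].real
candidates = real_roots[(real_roots > -1) & (real_roots < 1)]

# In practice, the TML/RML estimate is the candidate with the highest
# log-likelihood; here we simply report the candidates.
print(sorted(candidates))
```

The existence of several real roots in the admissible region is exactly why the constrained and unconstrained solutions discussed in the abstract can differ.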

3.
We consider efficient estimation in moment conditions models with non‐monotonically missing‐at‐random (MAR) variables. A version of MAR point‐identifies the parameters of interest and gives a closed‐form efficient influence function that can be used directly to obtain efficient semi‐parametric generalized method of moments (GMM) estimators under standard regularity conditions. A small‐scale Monte Carlo experiment with MAR instrumental variables demonstrates that the asymptotic superiority of these estimators over the standard methods carries over to finite samples. An illustrative empirical study of the relationship between a child's years of schooling and number of siblings indicates that these GMM estimators can generate results with substantive differences from standard methods. Copyright © 2015 John Wiley & Sons, Ltd.

4.
We consider questions of efficiency and redundancy in the GMM estimation problem in which we have two sets of moment conditions, where two sets of parameters enter into one set of moment conditions, while only one set of parameters enters into the other. We then apply these results to a selectivity problem in which the first set of moment conditions is for the model of interest, and the second set of moment conditions is for the selection process. We use these results to explain the counterintuitive result in the literature that, under an ignorability assumption that justifies GMM with weighted moment conditions, weighting using estimated probabilities of selection is better than weighting using the true probabilities. We also consider estimation under an exogeneity of selection assumption such that both the unweighted and the weighted moment conditions are valid, and we show that when weighting is not needed for consistency, it is also not useful for efficiency.
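The weighted-moment setup can be illustrated in the simplest ignorable-selection case: estimating a mean with inverse-probability-weighted moments, using true versus estimated selection probabilities. A hedged numpy sketch in which every data-generating choice is illustrative (verifying the efficiency ranking itself would require many Monte Carlo replications):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Binary covariate; selection probability depends on it (ignorability holds).
x = rng.integers(0, 2, n)
p_true = np.where(x == 1, 0.8, 0.4)          # true selection probabilities
s = rng.random(n) < p_true                   # selection indicator
y = 1.0 + x + rng.standard_normal(n)         # outcome of interest, E[y] = 1.5

# Estimated propensities: selection frequencies within each covariate cell.
p_hat = np.where(x == 1, s[x == 1].mean(), s[x == 0].mean())

# Inverse-probability-weighted moments for E[y], true vs estimated weights.
mu_true_w = np.mean(s * y / p_true)
mu_est_w = np.mean(s * y / p_hat)
```

Both estimators are consistent for E[y]; the paper's point is that, under ignorability, the version with estimated probabilities is the (weakly) more efficient one.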

5.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy‐to‐implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root‐n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data‐generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.

6.
We propose a simple estimator for nonlinear method of moment models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances and the measurement errors are independent of the latent variables and are independent of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case in many economic data sets. A simulation study demonstrates good finite sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.

7.
Simultaneous confidence bands are versatile tools for visualizing estimation uncertainty for parameter vectors, such as impulse response functions. In linear models, it is known that the sup‐t confidence band is narrower than commonly used alternatives—for example, Bonferroni and projection bands. We show that the same ranking applies asymptotically even in general nonlinear models, such as vector autoregressions (VARs). Moreover, we provide further justification for the sup‐t band by showing that it is the optimal default choice when the researcher does not know the audience's preferences. Complementing existing plug‐in and bootstrap implementations, we propose a computationally convenient Bayesian sup‐t band with exact finite‐sample simultaneous credibility. In an application to structural VAR impulse response function estimation, the sup‐t band—which has been surprisingly overlooked in this setting—is at least 35% narrower than other off‐the‐shelf simultaneous bands.
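Given draws of the parameter vector (bootstrap or posterior), the sup-t band scales the pointwise standard errors by the simulated quantile of the maximum standardized deviation across coordinates. A minimal numpy sketch with stand-in draws (the horizons, draw count, and noise structure are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
H, B = 12, 5000                       # horizons, number of draws (illustrative)
theta_hat = np.linspace(1.0, 0.1, H)  # point estimates of an impulse response
# Stand-in correlated bootstrap draws around the point estimate.
common = rng.standard_normal((B, 1))
idio = rng.standard_normal((B, H))
draws = theta_hat + 0.2 * (common + idio) / np.sqrt(2)

se = draws.std(axis=0)
# Quantile of the maximum absolute standardized deviation across horizons.
max_t = np.abs((draws - theta_hat) / se).max(axis=1)
crit = np.quantile(max_t, 0.9)        # 90% simultaneous coverage

lower, upper = theta_hat - crit * se, theta_hat + crit * se
```

A Bonferroni band would instead use the pointwise 1 - 0.1/H critical value, which is weakly larger than the sup-t critical value; the gap widens as the draws become more correlated across horizons.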

8.
9.
This paper studies the efficient estimation of large‐dimensional factor models with both time and cross‐sectional dependence assuming (N,T) separability of the covariance matrix. The asymptotic distribution of the estimator of the factor and factor‐loading space under factor stationarity is derived and compared to that of the principal component (PC) estimator. The paper also considers the case when factors exhibit a unit root. We provide feasible estimators and show in a simulation study that they are more efficient than the PC estimator in finite samples. In an application, the estimation procedure is employed to estimate the Lee–Carter model and forecast life expectancy. The Dutch gender gap is explored and the relationship between life expectancy and the level of economic development is examined in a cross‐country comparison.  
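The principal component benchmark the paper compares against can be computed directly from a singular value decomposition of the data matrix. A minimal numpy sketch with illustrative dimensions and noise level (factors are identified only up to rotation, so fit is checked via the span):

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, r = 100, 200, 2
F = rng.standard_normal((T, r))               # latent factors
L = rng.standard_normal((N, r))               # factor loadings
X = F @ L.T + 0.5 * rng.standard_normal((T, N))

# PC estimator: factors are sqrt(T) times the leading left singular vectors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]

# Factors are identified up to rotation; check the span with an R^2-type fit.
proj = F_hat @ np.linalg.lstsq(F_hat, F, rcond=None)[0]
r2 = 1 - np.sum((F - proj) ** 2) / np.sum(F ** 2)
```

The paper's feasible estimators improve on this PC estimator by exploiting the assumed separable dependence structure; that refinement is not sketched here.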

10.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility models (SV), allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against a SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
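The local Monte Carlo (parametric bootstrap) idea is simple to sketch: simulate the test statistic under the null data-generating process and compare it with the observed value. A hedged numpy illustration in which the statistic and null model are deliberately elementary stand-ins, not the SV statistics of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def stat(x):
    # Excess-kurtosis style statistic (illustrative choice of test statistic).
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

n, N = 300, 199                      # sample size, Monte Carlo replications
x_obs = rng.standard_normal(n)       # data generated under the null here
s_obs = stat(x_obs)

# Local Monte Carlo p-value: simulate the statistic under the null and
# compare with the observed value (the +1 terms make the test exact).
s_sim = np.array([stat(rng.standard_normal(n)) for _ in range(N)])
p_value = (1 + np.sum(s_sim >= s_obs)) / (N + 1)
```

The maximized Monte Carlo procedure extends this by maximizing the simulated p-value over the nuisance parameters consistent with the null, which is what delivers level control under identification failure.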

11.
We propose a new dynamic copula model in which the parameter characterizing dependence follows an autoregressive process. As this model class includes the Gaussian copula with stochastic correlation process, it can be viewed as a generalization of multivariate stochastic volatility models. Despite the complexity of the model, the decoupling of marginals and dependence parameters facilitates estimation. We propose estimation in two steps, where first the parameters of the marginal distributions are estimated, and then those of the copula. Parameters of the latent processes (volatilities and dependence) are estimated using efficient importance sampling. We discuss goodness‐of‐fit tests and ways to forecast the dependence parameter. For two bivariate stock index series, we show that the proposed model outperforms standard competing models. Copyright © 2010 John Wiley & Sons, Ltd.
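The two-step logic (marginals first, then dependence) can be sketched in the static Gaussian-copula special case; the paper's dynamic dependence parameter and importance-sampling machinery are beyond this illustration. All distributional choices below are illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 5000
# Simulate from a Gaussian copula with correlation 0.6 and normal margins.
rho_true = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
x, y = 2 + 0.5 * z[:, 0], -1 + 2.0 * z[:, 1]   # margins with distinct scales

# Step 1: estimate marginal parameters and transform to uniforms (PITs).
u = norm.cdf((x - x.mean()) / x.std())
v = norm.cdf((y - y.mean()) / y.std())

# Step 2: Gaussian-copula correlation from the normal scores of the PITs.
rho_hat = np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]
```

Because the copula is estimated from the PITs alone, misspecifying a marginal does not mechanically contaminate the dependence estimate, which is the appeal of the decoupling the abstract mentions.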

12.
Growing-dimensional data for which the likelihood function is unavailable are often encountered in various fields. This paper presents a penalized exponentially tilted (PET) likelihood for variable selection and parameter estimation in growing-dimensional unconditional moment models in the presence of correlation among variables and model misspecification. Under some regularity conditions, we investigate the consistency and oracle properties of the PET estimators of parameters, and show that the constrained PET likelihood ratio statistic for testing a contrast hypothesis asymptotically follows the chi-squared distribution. Theoretical results reveal that the PET likelihood approach is robust to model misspecification. We study high-order asymptotic properties of the proposed PET estimators. Simulation studies are conducted to investigate the finite sample performance of the proposed methodologies. An example from the Boston Housing Study is used as an illustration.

13.
We propose non-nested hypothesis tests for conditional moment restriction models based on the method of generalized empirical likelihood (GEL). By utilizing the implied GEL probabilities from a sequence of unconditional moment restrictions that contains equivalent information of the conditional moment restrictions, we construct Kolmogorov–Smirnov and Cramér–von Mises type moment encompassing tests. Advantages of our tests over Otsu and Whang’s (2011) tests are: (i) they are free from smoothing parameters, (ii) they can be applied to weakly dependent data, and (iii) they allow non-smooth moment functions. We derive the null distributions, validity of a bootstrap procedure, and local and global power properties of our tests. The simulation results show that our tests have reasonable size and power performance in finite samples.

14.
In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose an integrated conditional moment type predictive accuracy test that is similar in spirit to that developed by Bierens (J. Econometr. 20 (1982) 105; Econometrica 58 (1990) 1443) and Bierens and Ploberger (Econometrica 65 (1997) 1129). The test is consistent against generic nonlinear alternatives, and is designed for comparing nested models. One important feature of our approach is that the same loss function is used for in-sample estimation and out-of-sample prediction. In this way, we rule out the possibility that the null model can outperform the nesting generic alternative model. It turns out that the limiting distribution of the ICM type test statistic that we propose is a functional of a Gaussian process with a covariance kernel that reflects both the time series structure of the data as well as the contribution of parameter estimation error. As a consequence, critical values are data dependent and cannot be directly tabulated. One approach in this case is to obtain critical value upper bounds using the approach of Bierens and Ploberger (Econometrica 65 (1997) 1129). Here, we establish the validity of a conditional p-value method for constructing critical values. The method is similar in spirit to that proposed by Hansen (Econometrica 64 (1996) 413) and Inoue (Econometric Theory 17 (2001) 156), although we additionally account for parameter estimation error. In a series of Monte Carlo experiments, the finite sample properties of three variants of the predictive accuracy test are examined. Our findings suggest that all three variants of the test have good finite sample properties when quadratic loss is specified, even for samples as small as 600 observations. However, non-quadratic loss functions such as linex loss require larger sample sizes (of 1000 observations or more) in order to ensure reasonable finite sample performance.

15.

This paper develops a unified framework for fixed effects (FE) and random effects (RE) estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroscedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define both an RE and an FE spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroscedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random against the spatial FE model.

16.
This paper considers spatial heteroskedasticity and autocorrelation consistent (spatial HAC) estimation of covariance matrices of parameter estimators. We generalize the spatial HAC estimator introduced by Kelejian and Prucha (2007) to apply to linear and nonlinear spatial models with moment conditions. We establish its consistency, rate of convergence and asymptotic truncated mean squared error (MSE). Based on the asymptotic truncated MSE criterion, we derive the optimal bandwidth parameter and suggest its data dependent estimation procedure using a parametric plug-in method. The finite sample performances of the spatial HAC estimator are evaluated via Monte Carlo simulation.
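The core of a spatial HAC estimator is a kernel-weighted sum of cross-products of moment contributions, with weights declining in the distance between units. A minimal numpy sketch; the locations, moment contributions, kernel, and bandwidth are all illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 400, 2
coords = rng.random((n, 2)) * 10          # unit locations (illustrative)
g = rng.standard_normal((n, d))           # moment contributions g_i (illustrative)

# Pairwise distances and a Bartlett-type kernel with bandwidth h.
h = 2.0
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
k = np.clip(1.0 - dist / h, 0.0, None)    # (1 - d/h)+, equals 1 when i = j

# Spatial HAC estimate of the long-run variance of n^{-1/2} * sum_i g_i:
# kernel-weighted cross-products of the moment contributions.
V_hac = (g.T @ (k @ g)) / n
```

The bandwidth h is the tuning parameter whose MSE-optimal, data-dependent choice is the subject of the paper; here it is simply fixed.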

17.
We propose and study the finite‐sample properties of a modified version of the self‐perturbed Kalman filter of Park and Jun (Electronics Letters, 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the estimate of the measurement error variance. This avoids the calibration of a design parameter as the perturbation term is scaled by the amount of uncertainty in the data. It is shown by Monte Carlo simulations that this perturbation method is associated with a good tracking of the dynamics of the parameters compared to other online algorithms and to classical and Bayesian methods. The standardized self‐perturbed Kalman filter is adopted to forecast the equity premium on the S&P 500 index under several model specifications, and determines the extent to which realized variance can be used to predict excess returns. Copyright © 2016 John Wiley & Sons, Ltd.
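The mechanics can be sketched for a scalar regression with a drifting coefficient: run a standard Kalman update, then inflate the state variance by a perturbation term scaled by a running estimate of the measurement error variance. This is a hedged stand-in for the paper's filter; the model, the perturbation intensity lam, and the variance recursion are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
# Regression with a drifting coefficient: y_t = beta_t * x_t + e_t.
beta = np.cumsum(0.05 * rng.standard_normal(T)) + 1.0
x = rng.standard_normal(T)
y = beta * x + 0.5 * rng.standard_normal(T)

b, P = 0.0, 1.0          # state estimate and its variance
sigma2 = 1.0             # running estimate of the measurement error variance
lam = 0.05               # perturbation intensity (illustrative design choice)
b_path = np.empty(T)
for t in range(T):
    e = y[t] - b * x[t]                    # prediction error
    S = P * x[t] ** 2 + sigma2
    K = P * x[t] / S                       # Kalman gain
    b = b + K * e
    P = (1 - K * x[t]) * P
    sigma2 = 0.99 * sigma2 + 0.01 * e ** 2 # crude recursive variance estimate
    # Self-perturbation: inflate the state variance, scaled by the estimated
    # measurement variance so no absolute magnitude needs hand-tuning.
    P = P + lam * sigma2
    b_path[t] = b

# Tracking error of the filtered coefficient path.
rmse = np.sqrt(np.mean((b_path - beta) ** 2))
```

Without the perturbation step, P shrinks toward zero and the filter stops adapting; scaling the perturbation by sigma2 is what the abstract describes as avoiding the calibration of a design parameter.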

18.
We derive computationally simple expressions for score tests of misspecification in parametric dynamic factor models using frequency domain techniques. We interpret those diagnostics as time domain moment tests which assess whether certain autocovariances of the smoothed latent variables match their theoretical values under the null of correct model specification. We also reinterpret reduced‐form residual tests as checking specific restrictions on structural parameters. Our Gaussian tests are robust to nonnormal, independent innovations. Monte Carlo exercises confirm the finite‐sample reliability and power of our proposals. Finally, we illustrate their empirical usefulness in an application that constructs a US coincident indicator.

19.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general‐to‐specific (Gets) model selection procedures to overcome these limitations. It is shown that single‐equation procedures are generally efficient for the reduction of recursive SVAR models. The small‐sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo and question the validity of the impulse responses generated by the full system.

20.
Some recent specifications for GARCH error processes explicitly assume a conditional variance that is generated by a mixture of normal components, albeit with some parameter restrictions. This paper analyses the general normal mixture GARCH(1,1) model which can capture time variation in both conditional skewness and kurtosis. A main focus of the paper is to provide evidence that, for modelling exchange rates, generalized two‐component normal mixture GARCH(1,1) models perform better than those with three or more components, and better than symmetric and skewed Student's t‐GARCH models. In addition to the extensive empirical results based on simulation and on historical data on three US dollar foreign exchange rates (British pound, euro and Japanese yen), we derive: expressions for the conditional and unconditional moments of all models; parameter conditions to ensure that the second and fourth conditional and unconditional moments are positive and finite; and analytic derivatives for the maximum likelihood estimation of the model parameters and standard errors of the estimates. Copyright © 2006 John Wiley & Sons, Ltd.
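A two-component normal mixture GARCH(1,1) is easy to simulate: each return is the conditional standard deviation times an innovation drawn from a zero-mean, unit-variance two-component normal mixture. A minimal numpy sketch with illustrative parameter values (zero-mean components for simplicity, so the skewness channel of the general model is switched off):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 20_000
omega, alpha, beta = 0.05, 0.08, 0.9           # GARCH(1,1) parameters (illustrative)
p, s1 = 0.7, 0.6                               # mixture weight and first component sd
# Choose the second component sd so the mixture has unit variance:
# p*s1^2 + (1-p)*s2^2 = 1.
s2 = np.sqrt((1 - p * s1 ** 2) / (1 - p))

h = omega / (1 - alpha - beta)                 # start at the unconditional variance
r = np.empty(T)
for t in range(T):
    comp_sd = s1 if rng.random() < p else s2   # draw the mixture component
    r[t] = np.sqrt(h) * comp_sd * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h   # GARCH(1,1) variance recursion

kurt = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2
```

Both the mixture innovation and the GARCH recursion fatten the tails, so the simulated returns exhibit excess kurtosis well beyond the Gaussian value of 3, which is the feature the model class is designed to capture.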
