Similar Documents
20 similar documents found (search time: 78 ms)
1.
A smoothed least squares estimator for threshold regression models (total citations: 1; self-citations: 0; citations by others: 1)
We propose a smoothed least squares estimator of the parameters of a threshold regression model. Our model generalizes that considered in Hansen [2000. Sample splitting and threshold estimation. Econometrica 68, 575–603] to allow the thresholding to depend on a linear index of observed regressors, thus allowing discrete variables to enter. We also do not assume that the threshold effect is vanishingly small. Our estimator is shown to be consistent and asymptotically normal, thus facilitating standard inference techniques based on estimated standard errors or the standard bootstrap for the slope and threshold parameters.
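As a rough illustration of the smoothing idea (not the authors' estimator, which lets the threshold depend on a linear index of regressors), the sketch below fits a threshold regression with a single threshold variable by least squares, replacing the indicator function with a normal CDF smoothed by a bandwidth h. The simulated design, bandwidth rule, and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated threshold regression: y = b1*x + d*x*1{q > c} + e, with c = 0.5
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
q = rng.normal(size=n)                       # threshold variable
y = 1.0 * x + 2.0 * x * (q > 0.5) + rng.normal(scale=0.5, size=n)

h = 1.06 * q.std() * n ** (-0.2)             # illustrative bandwidth

def smoothed_ssr(theta):
    b1, d, c = theta
    w = norm.cdf((q - c) / h)                # smoothed version of 1{q > c}
    return np.sum((y - b1 * x - d * x * w) ** 2)

res = minimize(smoothed_ssr, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("slope, threshold effect, threshold:", np.round(res.x, 2))
```

Because the smoothed objective is differentiable in the threshold, gradient-based optimization and standard asymptotic inference become available, which is the point of the smoothing.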

2.
In this paper, we derive a semiparametric efficient adaptive estimator of an asymmetric GARCH model. Applying some general results from Drost et al. [1997. The Annals of Statistics 25, 786–818], we first estimate the unknown density function of the disturbances by kernel methods and then apply a one-step Newton–Raphson method to obtain a more efficient estimator than the quasi-maximum likelihood estimator. The proposed semiparametric estimator is adaptive for parameters appearing in the conditional standard deviation model with respect to the unknown distribution of the disturbances.

3.
In this paper, we analytically investigate three efficient estimators for cointegrating regression models: Phillips and Hansen's [Phillips, P.C.B., Hansen, B.E., 1990. Statistical inference in instrumental variables regression with I(1) processes. Review of Economic Studies 57, 99–125] fully modified OLS estimator, Park's [Park, J.Y., 1992. Canonical cointegrating regressions. Econometrica 60, 119–143] canonical cointegrating regression estimator, and Saikkonen's [Saikkonen, P., 1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21] dynamic OLS estimator. We consider the case where the regression errors are moderately serially correlated and the AR coefficient in the regression errors approaches 1 at a rate slower than 1/T, where T represents the sample size. We derive the limiting distributions of the efficient estimators under this system and find that they depend on the approaching rate of the AR coefficient. If the rate is slow enough, efficiency is established for the three estimators; however, if the approaching rate is relatively faster, the estimators will have the same limiting distribution as the OLS estimator. For the intermediate case, the second-order bias of the OLS estimator is partially eliminated by the efficient methods. This result explains why, in finite samples, the effect of the efficient methods diminishes as the serial correlation in the regression errors becomes stronger. We also propose to modify the existing efficient estimators in order to eliminate the second-order bias, which possibly remains in the efficient estimators. Using Monte Carlo simulations, we demonstrate that our modification is effective when the regression errors are moderately serially correlated and the simultaneous correlation is relatively strong.
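As a concrete reference point, here is a minimal sketch of the third of these estimators, Saikkonen's dynamic OLS, which augments the levels regression with leads and lags of the differenced regressor. The simulated data, the lag order k, and the omission of the paper's proposed bias-correcting modification are all simplifications.

```python
import numpy as np

def dols_slope(y, x, k=2):
    """Dynamic OLS: regress y_t on x_t and k leads/lags of the differenced regressor."""
    dx = np.diff(x)                               # dx[t-1] = x_t - x_{t-1}
    T = len(y)
    rows, Z = [], []
    for t in range(k + 1, T - k):
        rows.append(t)
        Z.append(np.concatenate(([1.0, x[t]], dx[t - 1 - k: t + k])))
    beta = np.linalg.lstsq(np.asarray(Z), y[rows], rcond=None)[0]
    return beta[1]                                # cointegrating coefficient

# Illustrative cointegrated system with serially correlated errors
rng = np.random.default_rng(1)
T = 300
x = np.cumsum(rng.normal(size=T))                 # I(1) regressor
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal()          # AR(1) regression error
y = 2.0 + 1.5 * x + u
print("DOLS estimate of the cointegrating slope:", round(dols_slope(y, x), 3))
```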

4.
State price densities (SPDs) are an important element in applied quantitative finance. In a Black–Scholes world they are lognormal distributions, but in practice volatility changes and the distribution deviates from lognormality. In order to study the degree of this deviation, we estimate SPDs using EUREX option data on the DAX index via a nonparametric estimator of the second derivative of the (European) call pricing function. The estimator is constrained so as to satisfy no-arbitrage constraints and corrects for the intraday covariance structure in option prices. In contrast to existing methods, we do not use any parametric or smoothness assumptions.
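The relation being exploited is the Breeden–Litzenberger result that the SPD equals the discounted second derivative of the call price with respect to the strike. The toy sketch below recovers it by finite differences from Black–Scholes prices; it does not impose the no-arbitrage constraints or the intraday covariance correction that the paper's estimator adds.

```python
import numpy as np
from scipy.stats import norm

def spd_from_calls(strikes, calls, r, tau):
    """State price density f(K) = exp(r*tau) * d^2C/dK^2 via central differences."""
    return np.exp(r * tau) * np.gradient(np.gradient(calls, strikes), strikes)

# Black-Scholes call prices, so the recovered SPD should be close to lognormal
S0, r, sigma, tau = 100.0, 0.02, 0.2, 0.5
K = np.linspace(60.0, 140.0, 201)
d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
C = S0 * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d1 - sigma * np.sqrt(tau))

f = spd_from_calls(K, C, r, tau)
print("SPD integrates to roughly one:", round(np.trapz(f, K), 3))
```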

5.
This paper focuses on the analysis of size distributions of innovations, which are known to be highly skewed. We use patent citations as one indicator of innovation significance, constructing two large datasets from the European and US Patent Offices at a high level of aggregation, and the Trajtenberg [1990. A penny for your quotes: patent citations and the value of innovations. Rand Journal of Economics 21(1), 172–187] dataset on CT scanners at a very low one. We also study self-assessed reports of patented innovation values using two very recent patent valuation datasets from the Netherlands and the UK, as well as a small dataset of patent licence revenues of Harvard University. Statistical methods are applied to analyse the properties of the empirical size distributions, with special emphasis on testing for the existence of ‘heavy tails’, i.e., whether or not the probability of very large innovations declines more slowly than exponentially. While overall the distributions appear to resemble a lognormal, we argue that the tails are indeed fat. We invoke some recent results from extreme value statistics and apply the Hill [1975. A simple general approach to inference about the tails of a distribution. The Annals of Statistics 3, 1163–1174] estimator with data-driven cut-offs to determine the tail index for the right tails of all datasets except the NL and UK patent valuations. On these latter datasets we use a maximum likelihood estimator for grouped data to estimate the tail index for varying definitions of the right tail. We find significantly and consistently lower tail estimates for the returns data than for the citation data (around 0.6–1 vs. 3–5). The EPO and US patent citation tail indices are roughly constant over time, but the latter estimates are significantly lower than the former. We argue that the heaviness of the tails, particularly as measured by value indicators, has significant implications for technology policy and growth theory, since the second and possibly even the first moments of these distributions may not exist.
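The Hill [1975] estimator referred to above has a simple closed form based on the k largest order statistics. The sketch below applies it to simulated Pareto data with a known tail index; the sample size and the fixed cut-off k are illustrative, and the paper's data-driven choice of k is not implemented.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index from the k largest observations."""
    xs = np.sort(x)[::-1]                        # descending order statistics
    log_excess = np.log(xs[:k]) - np.log(xs[k])  # log excesses over the (k+1)-th largest
    return 1.0 / log_excess.mean()

# Pareto-distributed data with true tail index 1.5
rng = np.random.default_rng(2)
x = rng.uniform(size=10_000) ** (-1.0 / 1.5)
print("Hill estimate:", round(hill_tail_index(x, k=500), 2))
```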

6.
We define a new procedure for consistent estimation of nonparametric simultaneous equations models under the conditional mean independence restriction of Newey et al. [1999. Nonparametric estimation of triangular simultaneous equation models. Econometrica 67, 565–603]. It is based upon local polynomial regression and marginal integration techniques. We establish the asymptotic distribution of our estimator under weak data dependence conditions. Simulation evidence suggests that our estimator may significantly outperform the estimators of Pinkse [2000. Nonparametric two-step regression estimation when regressors and errors are dependent. Canadian Journal of Statistics 28, 289–300] and Newey and Powell [2003. Instrumental variable estimation of nonparametric models. Econometrica 71, 1565–1578].
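A heavily simplified sketch of the two-step logic under the conditional mean independence restriction: estimate the reduced-form residual (control variable), run a local linear regression of the outcome on the endogenous regressor and the control, and recover the structural function by marginal integration, i.e. by averaging the fitted surface over the control. The data-generating process, the fixed bandwidths, and the use of statsmodels' KernelReg are illustrative assumptions; this is not the authors' exact estimator.

```python
import numpy as np
from statsmodels.nonparametric.kernel_regression import KernelReg

rng = np.random.default_rng(3)
n = 200
z = rng.normal(size=n)                         # instrument
v = rng.normal(size=n)
x = z + v                                      # endogenous regressor
y = np.sin(x) + v + rng.normal(scale=0.3, size=n)

# Step 1: control variable v_hat = x - E[x | z]
vhat = x - KernelReg(x, z, var_type='c', reg_type='ll', bw=[0.4]).fit(z)[0]

# Step 2: local linear regression of y on (x, v_hat)
surface = KernelReg(y, np.column_stack([x, vhat]), var_type='cc',
                    reg_type='ll', bw=[0.4, 0.4])

# Step 3: marginal integration over v_hat recovers m(x) up to a constant
grid = np.linspace(-2.0, 2.0, 9)
m_hat = [surface.fit(np.column_stack([np.full(n, x0), vhat]))[0].mean() for x0 in grid]
print(np.round(m_hat, 2))
```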

7.
We consider a semiparametric distributed lag model in which the “news impact curve” m is nonparametric but the response is dynamic through some linear filters. A special case of this is a nonparametric regression with serially correlated errors. We propose an estimator of the news impact curve based on a dynamic transformation that produces white noise errors. This yields an estimating equation for m that is a linear integral equation of the second kind. We investigate both the stationary case and the case where the error has a unit root. In the stationary case we establish pointwise asymptotic normality. In the special case of a nonparametric regression subject to time series errors, our estimator achieves efficiency improvements over the usual estimators; see Xiao et al. [2003. More efficient local polynomial estimation in nonparametric regression with autocorrelated errors. Journal of the American Statistical Association 98, 980–992]. In the unit root case our procedure is consistent and asymptotically normal, unlike the standard regression smoother. We also present the distribution theory for the parameter estimates, which is nonstandard in the unit root case. Finally, we investigate the finite sample performance of the estimator through simulation experiments.

8.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel) in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates the simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the model’s mixture regimes’ parameters. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
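To fix ideas, the sketch below carries out only the final importance-sampling step with a single Student-t candidate built around the mode of a toy posterior kernel. MitISEM's contribution, not implemented here, is to replace this single t density by a mixture of t's fitted through importance-weighted EM so that the Kullback–Leibler divergence to the target is small. The target kernel, degrees of freedom, and numerical Hessian are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

# Toy unnormalized posterior kernel (mildly skewed)
log_kernel = lambda th: -0.5 * th ** 2 - 0.5 * np.exp(-2.0 * th)

# Student-t candidate located at the mode, scaled by the curvature there
mode = optimize.minimize_scalar(lambda th: -log_kernel(th)).x
h = 1e-4
curv = (log_kernel(mode + h) - 2 * log_kernel(mode) + log_kernel(mode - h)) / h ** 2
cand = stats.t(df=5, loc=mode, scale=np.sqrt(-1.0 / curv))

# Importance sampling with the candidate density
rng = np.random.default_rng(4)
draws = cand.rvs(size=20_000, random_state=rng)
logw = log_kernel(draws) - cand.logpdf(draws)
w = np.exp(logw - logw.max())
w /= w.sum()
print("IS posterior mean:", round(float(np.sum(w * draws)), 3))
print("effective sample size:", round(1.0 / float(np.sum(w ** 2))))
```

A low effective sample size would signal a poor candidate, which is exactly the situation the adaptive mixture construction is designed to repair.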

9.
We present new Monte Carlo evidence regarding the feasibility of separating causality from selection within non-experimental duration data, by means of the non-parametric maximum likelihood estimator (NPMLE). Key findings are: (i) the NPMLE is extremely reliable, and it accurately separates the causal effects of treatment and duration dependence from sorting effects, almost regardless of the true unobserved heterogeneity distribution; (ii) the NPMLE is normally distributed, and standard errors can be computed directly from the optimally selected model; and (iii) unjustified restrictions on the heterogeneity distribution, e.g., in terms of a pre-specified number of support points, may cause substantial bias.

10.
In this paper, we consider bootstrapping cointegrating regressions. It is shown that the bootstrap, if properly implemented, generally yields consistent estimators and test statistics for cointegrating regressions. For cointegrating regression models driven by general linear processes, we employ the sieve bootstrap based on approximated finite-order vector autoregressions for the regression errors and the first differences of the regressors. In particular, we establish the bootstrap consistency for the OLS method. The bootstrap method can thus be used to correct for the finite sample bias of the OLS estimator and to approximate the asymptotic critical values of the OLS-based test statistics in general cointegrating regressions. The bootstrap OLS procedure, however, is not efficient. For efficient estimation and hypothesis testing, we consider the procedure proposed by Saikkonen [1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21] and Stock and Watson [1993. A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820], relying on the regression augmented with the leads and lags of differenced regressors. The bootstrap versions of their procedures are shown to be consistent and can be used to conduct asymptotically valid inference. A Monte Carlo study is conducted to investigate the finite sample performance of the proposed bootstrap methods.
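A stripped-down sketch of the sieve bootstrap step for the OLS slope: fit a finite-order autoregression to the OLS residuals, resample its centred innovations, rebuild the errors recursively, and re-estimate. The paper's procedure also resamples the first differences of the regressors from a vector autoregression and covers the lead-lag augmented estimators; here the regressors are held fixed, and the AR order, burn-in, and data design are illustrative.

```python
import numpy as np

def sieve_bootstrap_slope(y, x, p=4, B=499, seed=0):
    """Simplified sieve bootstrap for the OLS slope in y_t = a + b*x_t + u_t."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ coef

    # AR(p) sieve for the residuals, estimated by least squares
    U = np.column_stack([u[p - j - 1: len(u) - j - 1] for j in range(p)])
    phi = np.linalg.lstsq(U, u[p:], rcond=None)[0]
    eps = u[p:] - U @ phi
    eps -= eps.mean()

    slopes = np.empty(B)
    for b in range(B):
        e_star = rng.choice(eps, size=len(u) + 50)          # 50 burn-in draws
        u_star = np.zeros(len(u) + 50)
        for t in range(p, len(u_star)):
            u_star[t] = phi @ u_star[t - p: t][::-1] + e_star[t]
        y_star = X @ coef + u_star[-len(u):]
        slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
    return coef[1], slopes

rng = np.random.default_rng(5)
T = 200
x = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u
b_hat, b_boot = sieve_bootstrap_slope(y, x)
print("OLS slope:", round(b_hat, 3),
      "bootstrap 95% band:", np.round(np.percentile(b_boot, [2.5, 97.5]), 3))
```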

11.
Model averaging by jackknife criterion in models with dependent data (total citations: 1; self-citations: 0; citations by others: 1)
The past decade has witnessed a growing literature on model averaging by frequentist methods. For the most part, the asymptotic optimality of various existing frequentist model averaging estimators has been established under i.i.d. errors. Recently, Hansen and Racine [Hansen, B.E., Racine, J., 2012. Jackknife model averaging. Journal of Econometrics 167, 38–46] developed a jackknife model averaging (JMA) estimator, which has an important advantage over its competitors in that it achieves the lowest possible asymptotic squared error under heteroscedastic errors. In this paper, we broaden Hansen and Racine’s scope of analysis to encompass models with (i) a non-diagonal error covariance structure, and (ii) lagged dependent variables, thus allowing for dependent data. We show that under these setups, the JMA estimator is asymptotically optimal by a criterion equivalent to that used by Hansen and Racine. A Monte Carlo study demonstrates the finite sample performance of the JMA estimator in a variety of model settings.
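For reference, the sketch below computes Hansen–Racine jackknife model averaging weights in the basic cross-section case that the paper extends: leave-one-out residuals for each candidate model are obtained cheaply from the hat matrix, and the weights minimize the squared norm of the averaged leave-one-out residual over the simplex. The SLSQP shortcut for the quadratic program and the polynomial candidate models are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def jma_weights(y, design_list):
    """Jackknife model averaging: minimize the leave-one-out criterion over the simplex."""
    loo = []
    for X in design_list:
        H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix of the candidate model
        e = y - H @ y
        loo.append(e / (1.0 - np.diag(H)))             # leave-one-out residuals
    E = np.column_stack(loo)                           # n x M matrix

    M = E.shape[1]
    res = minimize(lambda w: np.sum((E @ w) ** 2),
                   np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
                   method='SLSQP')
    return res.x

# Candidate models: nested polynomial regressions of increasing order
rng = np.random.default_rng(6)
n = 200
x = rng.uniform(-2.0, 2.0, size=n)
y = np.sin(x) + rng.normal(scale=0.3, size=n)
designs = [np.column_stack([x ** k for k in range(d + 1)]) for d in range(1, 6)]
print("JMA weights:", np.round(jma_weights(y, designs), 3))
```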

12.
We introduce a class of instrumental quantile regression methods for heterogeneous treatment effect models and simultaneous equations models with nonadditive errors and offer computable methods for estimation and inference. These methods can be used to evaluate the impact of endogenous variables or treatments on the entire distribution of outcomes. We describe an estimator of the instrumental variable quantile regression process and the set of inference procedures derived from it. We focus our discussion of inference on tests of distributional equality, constancy of effects, conditional dominance, and exogeneity. We apply the procedures to characterize the returns to schooling in the U.S.
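One simple way to compute the estimator for a single endogenous variable and a single quantile is an "inverse" quantile regression grid search: for each trial value of the endogenous coefficient, run an ordinary quantile regression of the adjusted outcome on the covariates and the instrument, and keep the value that drives the instrument's coefficient towards zero. The sketch below implements that idea with statsmodels; the grid, the simulated design, and the scalar-instrument setting are illustrative simplifications of the full quantile-regression process studied in the paper.

```python
import numpy as np
import statsmodels.api as sm

def ivqr_scalar(y, d, z, x, tau=0.5, grid=None):
    """IV quantile regression for a scalar endogenous variable d with instrument z."""
    grid = np.linspace(-1.0, 3.0, 81) if grid is None else grid
    X = sm.add_constant(np.column_stack([x, z]))
    best, best_obj = None, np.inf
    for a in grid:
        fit = sm.QuantReg(y - a * d, X).fit(q=tau)
        gamma_z = fit.params[-1]                  # coefficient on the instrument
        if gamma_z ** 2 < best_obj:
            best, best_obj = a, gamma_z ** 2
    return best

rng = np.random.default_rng(7)
n = 1000
z = rng.normal(size=n)                            # instrument
v = rng.normal(size=n)
d = 0.8 * z + v                                   # endogenous treatment
x = rng.normal(size=n)
y = 1.5 * d + 0.5 * x + v + rng.normal(size=n)    # true effect of d is 1.5
print("IVQR estimate at the median:", ivqr_scalar(y, d, z, x))
```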

13.
We examine the higher order properties of the wild bootstrap (Wu, 1986) in a linear regression model with stochastic regressors. We find that the ability of the wild bootstrap to provide a higher order refinement is contingent upon whether the errors are mean independent of the regressors or merely uncorrelated with them. In the latter case, the wild bootstrap may fail to match some of the terms in an Edgeworth expansion of the full sample test statistic. Nonetheless, we show that the wild bootstrap still has a lower maximal asymptotic risk as an estimator of the true distribution than a normal approximation, in shrinking neighborhoods of properly specified models. To assess the practical implications of this result, we conduct a Monte Carlo study contrasting the performance of the wild bootstrap with a normal approximation and the traditional nonparametric bootstrap.
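A minimal sketch of the wild bootstrap itself (the object whose refinements the paper studies): resample by multiplying the OLS residuals with independent Rademacher draws, recentre the bootstrap t-statistic at the bootstrap "truth", and compare with the original statistic. The HC0 standard error, the Rademacher weights, and the heteroskedastic toy design are standard but illustrative choices.

```python
import numpy as np

def wild_bootstrap_pvalue(y, X, j=1, B=999, seed=0):
    """Wild bootstrap p-value for H0: beta_j = 0, Rademacher weights, HC0 t-statistics."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    XtXinv = np.linalg.inv(X.T @ X)

    def robust_t(yy, center):
        b = XtXinv @ X.T @ yy
        r = yy - X @ b
        V = XtXinv @ (X.T @ (X * r[:, None] ** 2)) @ XtXinv   # HC0 covariance
        return (b[j] - center) / np.sqrt(V[j, j]), b, r

    t0, beta, e = robust_t(y, 0.0)
    tb = np.empty(B)
    for i in range(B):
        v = rng.choice([-1.0, 1.0], size=n)       # Rademacher multipliers
        y_star = X @ beta + e * v                 # wild bootstrap sample
        tb[i] = robust_t(y_star, beta[j])[0]      # recentre at the bootstrap truth
    return np.mean(np.abs(tb) >= np.abs(t0))

rng = np.random.default_rng(8)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))   # beta_1 = 0, heteroskedastic
print("wild bootstrap p-value:", wild_bootstrap_pvalue(y, X))
```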

14.
This paper presents a new approximation to the exact sampling distribution of the instrumental variables estimator in simultaneous equations models. It differs from many of the approximations currently available, Edgeworth expansions for example, in that it is specifically designed to work well when the concentration parameter is small. The approximation is remarkable in that, simultaneously: (i) it has an extremely simple final form; (ii) in situations for which it is designed, it is typically much more accurate than the large sample normal approximation; and (iii) it is able to capture most of the stylized facts that characterize lack of identification and weak instrument scenarios. The development leading to the approximation is also novel in that it introduces techniques of some independent interest not seen in this literature hitherto.

15.
Linear parabolic partial differential equations (PDEs) and diffusion models are closely linked through the celebrated Feynman–Kac representation of solutions to PDEs. In asset pricing theory, this leads to the representation of derivative prices as solutions to PDEs. Very often, implied derivative prices are calculated given preliminary estimates of the diffusion model for the underlying variable. We demonstrate that the implied derivative prices are consistent and derive their asymptotic distribution under general conditions. We apply this result to three leading cases of preliminary estimators: nonparametric, semiparametric and fully parametric ones. In all three cases, the asymptotic distribution of the solution is derived. We demonstrate the use of these results in obtaining confidence bands and standard errors for implied prices of bonds, options and other derivatives. Our general results are also of interest for the estimation of diffusion models using either historical data of the underlying process or option prices; these issues are also discussed.
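The practical computation behind "implied derivative prices" can be illustrated by evaluating the Feynman–Kac representation by Monte Carlo: simulate the estimated diffusion with an Euler scheme and discount the expected payoff. The drift and diffusion functions, the parameter values, and the plain European payoff below are made-up placeholders for whatever preliminary estimates one has; the paper's contribution is the asymptotic distribution of such implied prices, which this sketch does not compute.

```python
import numpy as np

def implied_price(mu, sigma, x0, payoff, r, T, n_paths=20_000, n_steps=250, seed=0):
    """Discounted expected payoff under dX = mu(X)dt + sigma(X)dW (Euler scheme)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + mu(x) * dt + sigma(x) * rng.normal(scale=np.sqrt(dt), size=n_paths)
    return np.exp(-r * T) * payoff(x).mean()

# Placeholder "estimated" risk-neutral diffusion and a European call payoff
mu_hat = lambda x: 0.02 * x                        # hypothetical drift estimate
sigma_hat = lambda x: 0.2 * x                      # hypothetical diffusion estimate
price = implied_price(mu_hat, sigma_hat, x0=100.0,
                      payoff=lambda x: np.maximum(x - 100.0, 0.0), r=0.02, T=0.5)
print("implied call price:", round(price, 2))
```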

16.
In this paper, we derive efficiency bounds for the ordered response model when the distribution of the errors is unknown. Furthermore, we develop an estimator that is efficient under suitable conditions. Interestingly, neither the bounds nor the estimator is a trivial extension of what has been proposed in the literature for the binary response model. The estimator is composed of quadratic B-splines, and estimation is performed by the method of sieves. In addition, the estimator of the distribution function is restricted to be a proper distribution function. An empirical example on the effect of fees on attendance rates at universities and community colleges is also included; we obtain substantively different results by relaxing the assumption that the distribution of the errors is normal.

17.
A formal test on the Lyapunov exponent, based on the Nadaraya–Watson kernel estimator of the exponent, is developed to distinguish a random walk model from a chaotic system. The asymptotic null distribution of our test statistic is free of nuisance parameters and is simply given by the range of standard Brownian motion on the unit interval. The test is consistent against the chaotic alternatives. A simulation study shows that the test performs reasonably well in finite samples. We apply our test to some of the standard macro and financial time series, finding no significant empirical evidence of chaos.
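A rough sketch of the kernel ingredient: estimate the one-step map f in x_{t+1} = f(x_t) + e_t by Nadaraya–Watson regression, differentiate the fit numerically, and average log|f'(x_t)| to obtain a Lyapunov exponent estimate (positive for chaos, about zero for a random walk). The Gaussian kernel, rule-of-thumb bandwidth, and noisy logistic-map example are illustrative; the paper's formal test statistic and its null distribution are not reproduced here.

```python
import numpy as np

def nw_fit(x_eval, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x_eval] with a Gaussian kernel."""
    k = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (k * y).sum(axis=1) / k.sum(axis=1)

def lyapunov_estimate(series, eps=1e-3):
    """Average of log|f'(x_t)|, with f' from a numerically differentiated NW fit."""
    x, y = series[:-1], series[1:]
    h = 1.06 * x.std() * len(x) ** (-0.2)        # rule-of-thumb bandwidth
    fprime = (nw_fit(x + eps, x, y, h) - nw_fit(x - eps, x, y, h)) / (2 * eps)
    return np.mean(np.log(np.abs(fprime)))

rng = np.random.default_rng(9)
z = np.empty(500)
z[0] = 0.3
for t in range(499):
    z[t + 1] = 3.9 * z[t] * (1.0 - z[t])          # chaotic logistic map
print("logistic map:", round(lyapunov_estimate(z + 1e-4 * rng.normal(size=500)), 2))
print("random walk :", round(lyapunov_estimate(np.cumsum(rng.normal(size=500))), 2))
```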

18.
19.
This paper proposes a new testing procedure for detecting error cross section dependence after estimating a linear dynamic panel data model with regressors using the generalised method of moments (GMM). The test is valid when the cross-sectional dimension of the panel is large relative to the time series dimension. Importantly, our approach allows one to examine whether any error cross section dependence remains after including time dummies (or after transforming the data in terms of deviations from time-specific averages), which will be the case under heterogeneous error cross section dependence. Finite sample simulation-based results suggest that our tests perform well, particularly the version based on the system GMM estimator of Blundell and Bond [1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115–143]. In addition, it is shown that the system GMM estimator, based only on partial instruments consisting of the regressors, can be a reliable alternative to the standard GMM estimators under heterogeneous error cross section dependence. The proposed tests are applied to employment equations using UK firm data, and the results show little evidence of heterogeneous error cross section dependence.

20.
We propose a simple estimator for nonlinear method of moments models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances, and that the measurement errors are independent of the latent variables and of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case in many economic datasets. A simulation study demonstrates good finite sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.

