Similar Documents

20 similar documents found.
1.
Journal of Econometrics, 2005, 127(2): 201-224
This paper discusses inference in self-exciting threshold autoregressive (SETAR) models. Of main interest is inference for the threshold parameter. It is well known that the asymptotics of the corresponding estimator depend on whether the SETAR model is continuous or not. In the continuous case, the limiting distribution is normal and standard inference is possible. In the discontinuous case, the limiting distribution is non-normal and it is not known how to estimate it consistently. We show that valid inference can be drawn by the use of the subsampling method. Moreover, the method can even be extended to situations where the (dis)continuity of the model is unknown. In this case, inference for the regression parameters of the model also becomes difficult and subsampling can be used again. In addition, we consider a hypothesis test for the continuity of a SETAR model. A simulation study examines small sample performance and an application illustrates how the proposed methodology works in practice.
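The subsampling construction used in this paper can be sketched generically. The block below is a minimal Politis–Romano subsampling confidence interval, applied to the sample mean as a stand-in for the threshold estimator; the block size `b` and the root-n convergence rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def subsampling_ci(x, estimator, b, alpha=0.05, rate=0.5):
    """Politis-Romano subsampling confidence interval for a scalar parameter.

    Uses all overlapping blocks of length b (the time-series variant);
    `rate` is the exponent beta in the convergence rate tau_n = n**beta.
    """
    x = np.asarray(x)
    n = len(x)
    theta_n = estimator(x)
    tau_n, tau_b = n ** rate, b ** rate
    # empirical distribution of the centred, scaled subsample statistics
    stats = np.array([tau_b * (estimator(x[i:i + b]) - theta_n)
                      for i in range(n - b + 1)])
    lo_q, hi_q = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return theta_n - hi_q / tau_n, theta_n - lo_q / tau_n

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=500)
lo, hi = subsampling_ci(x, np.mean, b=50)
```

The appeal of the method in the SETAR context is visible in the code: nothing about the limiting distribution is used beyond the convergence rate, which is why it remains valid in the discontinuous case.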

2.
We characterize the robustness of subsampling procedures by deriving a formula for the breakdown point of subsampling quantiles. This breakdown point can be very low for moderate subsampling block sizes, which implies the fragility of subsampling procedures, even when they are applied to robust statistics. This instability also arises for data-driven block size selection procedures minimizing the minimum confidence interval volatility index, but can be mitigated if a more robust calibration method can be applied instead. To overcome these robustness problems, we introduce a consistent robust subsampling procedure for M-estimators and derive explicit subsampling quantile breakdown point characterizations for MM-estimators in the linear regression model. Monte Carlo simulations in two settings where the bootstrap fails show the accuracy and robustness of the robust subsampling relative to conventional subsampling.

3.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of intercept only with exponential errors. In other cases that include regressors or error distributions other than exponential, exact inference is still possible but simulation is needed to calculate the critical values. We also discuss the case in which the distribution of the error is unknown. In this case, asymptotically valid inference is possible using subsampling methods.

4.
It is well understood that the two most popular empirical models of location choice, conditional logit and Poisson, return identical coefficient estimates when the regressors are not individual specific. We show that these two models differ starkly in terms of their implied predictions. The conditional logit model represents a zero-sum world, in which one region’s gain is the other regions’ loss. In contrast, the Poisson model implies a positive-sum economy, in which one region’s gain is no other region’s loss. We also show that all intermediate cases can be represented as a nested logit model with a single outside option. The nested logit turns out to be a linear combination of the conditional logit and Poisson models. Conditional logit and Poisson elasticities mark the polar cases and can therefore serve as boundary values in applied research.
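The coefficient equivalence this abstract starts from is easy to verify numerically. In the sketch below (region attributes and choice counts are made-up illustrative data), the Poisson log-likelihood with its intercept concentrated out differs from the conditional-logit log-likelihood only by a constant, so both are maximized at the same slope.

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 2.0])   # region attributes (not individual specific)
n = np.array([10, 14, 25, 60])       # choice counts per region
N = n.sum()

def loglik_clogit(b):
    """Conditional logit log-likelihood for counts over regions."""
    return n @ (x * b) - N * np.log(np.exp(x * b).sum())

def loglik_poisson(b):
    """Poisson log-likelihood with the intercept concentrated out."""
    a = np.log(N / np.exp(x * b).sum())   # alpha_hat given b
    lam = np.exp(a + x * b)
    return n @ (a + x * b) - lam.sum()

grid = np.linspace(-1.0, 3.0, 2001)
b1 = grid[np.argmax([loglik_clogit(b) for b in grid])]
b2 = grid[np.argmax([loglik_poisson(b) for b in grid])]
```

Algebraically, concentrating out the Poisson intercept gives `loglik_poisson(b) = loglik_clogit(b) + N*log(N) - N`, which is why the two grid searches agree.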

5.
Most rational expectations models involve equations in which the dependent variable is a function of its lags and its expected future value. We investigate the asymptotic bias of generalized method of moments (GMM) and maximum likelihood (ML) estimators in such models under misspecification. We consider several misspecifications, and focus more specifically on the case of omitted dynamics in the dependent variable. In a stylized DGP, we derive analytically the asymptotic biases of these estimators. We establish that in many cases of interest the two estimators of the degree of forward-lookingness are asymptotically biased in opposite directions with respect to the true value of the parameter. We also propose a quasi-Hausman test of misspecification based on the difference between the GMM and ML estimators. Using Monte Carlo simulations, we show that the ordering and direction of the estimators still hold in a more realistic New Keynesian macroeconomic model. In this set-up, misspecification is in general found to be more harmful to GMM than to ML estimators.

6.
Parameter estimation and bias correction for diffusion processes
This paper considers parameter estimation for continuous-time diffusion processes which are commonly used to model dynamics of financial securities including interest rates. To understand why the drift parameters are more difficult to estimate than the diffusion parameter, as observed in previous studies, we first develop expansions for the bias and variance of parameter estimators for two of the most widely used interest rate processes, the Vasicek and CIR processes. Then, we study the first order approximate maximum likelihood estimator for linear drift processes. A parametric bootstrap procedure is proposed to correct bias for general diffusion processes with a theoretical justification. Simulation studies confirm the theoretical findings and show that the bootstrap proposal can effectively reduce both the bias and the mean square error of parameter estimates, for both univariate and multivariate processes. The advantages of using more accurate parameter estimators when calculating various option prices in finance are demonstrated by an empirical study.
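A minimal version of the parametric bootstrap correction can be sketched for the zero-mean Vasicek (Ornstein–Uhlenbeck) case. Everything numerical below is illustrative: sigma is treated as known, the estimator is the standard LS/ML slope, and the correction is the usual "twice the estimate minus the bootstrap mean"; this is a sketch of the idea, not the paper's general procedure.

```python
import numpy as np

def simulate_ou(kappa, sigma, n, dt, rng):
    """Exact discretization of a zero-mean OU process: x_{t+1} = rho*x_t + eps."""
    rho = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1 - rho ** 2) / (2 * kappa))
    x = np.zeros(n + 1)
    for t in range(n):
        x[t + 1] = rho * x[t] + sd * rng.standard_normal()
    return x

def estimate_kappa(x, dt):
    """LS/ML estimator of the mean-reversion speed (long-run mean known = 0)."""
    rho_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return -np.log(rho_hat) / dt

rng = np.random.default_rng(1)
dt, n, sigma = 1 / 12, 240, 0.2
x = simulate_ou(kappa=0.5, sigma=sigma, n=n, dt=dt, rng=rng)
k_hat = estimate_kappa(x, dt)

# parametric bootstrap: re-simulate under the fitted model, measure the bias
boot = [estimate_kappa(simulate_ou(k_hat, sigma, n, dt, rng), dt)
        for _ in range(200)]
k_corrected = 2 * k_hat - np.mean(boot)   # subtract the estimated bias
```

Because the LS estimator of the drift is biased upward, the bootstrap replicates sit above `k_hat` on average and the corrected estimate moves down toward the true speed.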

7.
This paper explores the inferential question in semiparametric binary response models when the continuous support condition is not satisfied and all regressors have discrete support. I focus mainly on the models under the conditional median restriction, as in Manski (1985). I find sharp bounds on the components of the parameter of interest and outline several applications. The formulas for bounds obtained using a recursive procedure help analyze cases where one regressor’s support becomes increasingly dense. Furthermore, I investigate asymptotic properties of estimators of the identification set. I describe a relation between the maximum score estimation and support vector machines and propose several approaches to address the problem of empty identification sets when the model is misspecified.

8.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows for the use of a wide range of bases and precision matrices in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is calculated using Markov chain Monte Carlo and the smoothness in the functions is both the result of shrinkage through the constrained Gaussian prior and model averaging. We show how using non-degenerate priors on the shrinkage parameters enables the application of substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of different univariate and bivariate functions which require different levels of smoothing, and where component selection is meaningful. Priors for the error disturbance covariances are selected carefully and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.

9.
It is well known that for continuous time models with a linear drift standard estimation methods yield biased estimators for the mean reversion parameter both in finite discrete samples and in large in-fill samples. In this paper, we obtain two expressions to approximate the bias of the least squares/maximum likelihood estimator of the mean reversion parameter in the Ornstein–Uhlenbeck process with a known long run mean when discretely sampled data are available. The first expression mimics the bias formula of Marriott and Pope (1954) for the discrete time model. Simulations show that this expression does not work satisfactorily when the speed of mean reversion is slow. Slow mean reversion corresponds to the near unit root situation and is empirically realistic for financial time series. An improvement is made in the second expression where a nonlinear correction term is included in the bias formula. It is shown that the nonlinear term is important in the near unit root situation. Simulations indicate that the second expression captures the magnitude, the curvature and the non-monotonicity of the actual bias better than the first expression.
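The paper's two analytical bias expressions are not reproduced here, but the bias they approximate is easy to measure directly. The Monte Carlo sketch below (zero long-run mean, exact discretization, all parameter values illustrative) confirms the upward bias of the LS estimator of the mean reversion speed, and that the bias is far larger in relative terms when mean reversion is slow, i.e. near the unit root.

```python
import numpy as np

def ls_kappa(x, dt):
    """LS estimator of the mean-reversion speed with known zero long-run mean."""
    rho_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return -np.log(rho_hat) / dt

def mc_bias(kappa, n=240, dt=1 / 12, reps=400, sigma=0.1, seed=42):
    """Monte Carlo estimate of E[kappa_hat] - kappa under exact discretization."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1 - rho ** 2) / (2 * kappa))
    est = []
    for _ in range(reps):
        x = np.zeros(n + 1)
        for t in range(n):
            x[t + 1] = rho * x[t] + sd * rng.standard_normal()
        est.append(ls_kappa(x, dt))
    return float(np.mean(est)) - kappa

bias_slow = mc_bias(0.1)   # slow mean reversion (near unit root)
bias_fast = mc_bias(1.0)   # faster mean reversion
```

With 20 years of monthly data the absolute bias is of order 0.1 in both cases, which dwarfs a true speed of 0.1 but is a modest distortion of a true speed of 1.0.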

10.
We study the problem of building confidence sets for ratios of parameters, from an identification robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method) derived by excluding the parameter discontinuity regions entailed by the ratio functions and which typically yield bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller’s (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. In this paper, we provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets, in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics, as introduced in earlier literature, in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
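For a single ratio, Fieller's construction reduces to solving a quadratic inequality in the ratio. The sketch below assumes an asymptotically normal estimator of the numerator and denominator with known covariance; the numbers in the example are made up for illustration.

```python
import numpy as np
from statistics import NormalDist

def fieller(a, b, var_a, var_b, cov_ab, alpha=0.05):
    """Fieller confidence set for the ratio theta = a/b.

    Inverts the test of a - theta*b = 0. The set is a bounded interval when
    b is significantly non-zero, and can be the complement of an interval or
    the whole real line when b is close to zero relative to its std. error.
    """
    z2 = NormalDist().inv_cdf(1 - alpha / 2) ** 2
    A = b ** 2 - z2 * var_b
    B = a * b - z2 * cov_ab
    C = a ** 2 - z2 * var_a
    disc = B ** 2 - A * C
    if A > 0:                        # denominator significantly non-zero
        r = np.sqrt(disc)            # disc >= 0 holds whenever A > 0
        return ('interval', ((B - r) / A, (B + r) / A))
    if disc >= 0:                    # unbounded: complement of an interval
        r = np.sqrt(disc)
        return ('complement', ((B + r) / A, (B - r) / A))
    return ('real line', None)

kind, (lo, hi) = fieller(a=2.0, b=1.0, var_a=0.04, var_b=0.04, cov_ab=0.0)
```

The possibly unbounded outcomes are exactly the point of the construction: when the denominator is weakly identified, the set expands to an unbounded region instead of reporting a misleadingly tight delta-method interval.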

11.
Ploberger and Phillips (Econometrica, Vol. 71, pp. 627–673, 2003) proved a result that provides a bound on how close a fitted empirical model can get to the true model when the model is represented by a parameterized probability measure on a finite dimensional parameter space. The present note extends that result to cases where the parameter space is infinite dimensional. The results have implications for model choice in infinite dimensional problems and highlight some of the difficulties, including technical difficulties, presented by models of infinite dimension. Some implications for forecasting are considered and some applications are given, including the empirically relevant case of vector autoregression (VAR) models of infinite order.

12.
Cross‐validation is a widely used tool in selecting the smoothing parameter in a non‐parametric procedure. However, it suffers from large sampling variation and tends to overfit the data set. Many attempts have been made to reduce the variance of cross‐validation. This paper focuses on two recent proposals of extrapolation‐based cross‐validation bandwidth selectors: indirect cross‐validation and the subsampling‐extrapolation technique. In the univariate case, we notice that using a fixed value parameter surrogate for indirect cross‐validation works poorly when the true density is hard to estimate, while the subsampling‐extrapolation technique is more robust to non‐normality. We investigate whether a hybrid bandwidth selector could benefit from the advantages of both approaches and compare the performance of different extrapolation‐based bandwidth selectors through simulation studies, real data analyses and large sample theory. A discussion on their extension to the bivariate case is also presented.
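Neither extrapolation-based selector is reproduced here, but both build on ordinary least-squares cross-validation, which can be sketched in closed form for a Gaussian kernel (the convolution K*K is a N(0, 2) density). The grid and sample below are illustrative.

```python
import numpy as np

def lscv(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel KDE.

    LSCV(h) = int fhat^2 dx - (2/n) * sum_i fhat_{-i}(x_i), with both terms
    written via pairwise differences.
    """
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    conv = np.exp(-d ** 2 / 4) / (2 * np.sqrt(np.pi))   # (K*K)(d), N(0,2) pdf
    kern = np.exp(-d ** 2 / 2) / np.sqrt(2 * np.pi)     # Gaussian kernel K(d)
    int_f2 = conv.sum() / (n ** 2 * h)
    loo = 2 * (kern.sum() - np.trace(kern)) / (n * (n - 1) * h)
    return int_f2 - loo

rng = np.random.default_rng(7)
x = rng.standard_normal(300)
grid = np.linspace(0.05, 1.5, 60)
h_cv = grid[np.argmin([lscv(h, x) for h in grid])]
```

Rerunning this with different seeds shows the large sampling variation of the minimizer that motivates the variance-reduction proposals discussed in the abstract.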

13.
In many econometric models the asymptotic variance of a parameter estimate depends on the value of another structural parameter in such a way that the data contain little information about the former when the latter is close to a critical value. This paper introduces the zero-information-limit-condition (ZILC) to identify such models where ‘weak identification’ leads to spurious inference. We find that standard errors tend to be underestimated in these cases, but the size of the asymptotic t-test may either be too great (the intuitive case emphasized in the ‘weak instrument’ literature) or too small, as in the two cases illustrated here.

14.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill‐defined. Using careful consideration of the parameter of interest in cointegration analysis and a re‐specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with ‘ignorance’ priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of the interest rates model.

15.
Modeling the joint term structure of interest rates in the United States and the European Union, the two largest economies in the world, is extremely important in international finance. In this article, we provide both theoretical and empirical analysis of multi-factor joint affine term structure models (ATSM) for dollar and euro interest rates. In particular, we provide a systematic classification of multi-factor joint ATSM similar to that of Dai and Singleton (2000). A principal component analysis of daily dollar and euro interest rates reveals four factors in the data. We estimate four-factor joint ATSM using the approximate maximum likelihood method of Aït-Sahalia (2002, forthcoming) and compare the in-sample and out-of-sample performances of these models using some of the latest nonparametric methods. We find that a new four-factor model with two common and two local factors captures the joint term structure dynamics in the US and the EU reasonably well.

16.
We consider the problem of estimating the variance of the partial sums of a stationary time series that has either long memory, short memory, negative/intermediate memory, or is the first-difference of such a process. The rate of growth of this variance depends crucially on the type of memory, and we present results on the behavior of tapered sums of sample autocovariances in this context when the bandwidth vanishes asymptotically. We also present asymptotic results for the case that the bandwidth is a fixed proportion of sample size, extending known results to the case of flat-top tapers. We adopt the fixed proportion bandwidth perspective in our empirical section, presenting two methods for estimating the limiting critical values: the subsampling method and a plug-in approach. Simulation studies compare the size and power of both approaches as applied to hypothesis testing for the mean. Both methods perform well, although the subsampling method appears to be better sized, and provide a viable framework for conducting inference for the mean. In summary, we supply a unified asymptotic theory that covers all different types of memory under a single umbrella.
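The tapered sum of sample autocovariances at the heart of this paper can be sketched for the short-memory case. The flat-top (trapezoidal) taper below is one common choice, and the AR(1) example with known long-run variance 1/(1-0.5)^2 = 4 is purely illustrative.

```python
import numpy as np

def sample_acov(x, k):
    """Sample autocovariance at lag k (normalized by n)."""
    n = len(x)
    xc = x - x.mean()
    return (xc[:n - k] @ xc[k:]) / n

def lrv_tapered(x, M, taper='flat-top'):
    """Tapered long-run variance estimate: sum over |k| <= M of lambda(k/M)*acov(k)."""
    def lam(u):
        u = abs(u)
        if taper == 'bartlett':
            return max(0.0, 1.0 - u)
        # flat-top taper: identically 1 near the origin, linear decay to 0
        return 1.0 if u <= 0.5 else max(0.0, 2.0 * (1.0 - u))
    s = sample_acov(x, 0)
    for k in range(1, M + 1):
        s += 2.0 * lam(k / M) * sample_acov(x, k)
    return s

rng = np.random.default_rng(5)
e = rng.standard_normal(20000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):          # AR(1) with phi = 0.5
    x[t] = 0.5 * x[t - 1] + e[t]
lrv = lrv_tapered(x, M=50)          # true long-run variance is 4
```

The flat-top shape keeps the low-order autocovariances unweighted, which is what reduces the bias relative to the Bartlett taper; the fixed-proportion bandwidth regime of the paper corresponds to letting M grow like a fraction of n, with the non-standard critical values that regime entails.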

17.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.

18.
This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002). In this partially identified model, we propose a new estimator based on Manski and Tamer's modified maximum score (MMS) method by introducing density weights to the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges at a nearly cube-root-n rate. We propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.
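The density weighting and interval-data features are specific to this paper, but the maximum score objective being modified is simple to state. The sketch below runs plain (unweighted) Manski maximum score for a single slope, with the first coefficient normalized to 1 and a made-up point-identified DGP, just to show the objective being maximized.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
u = rng.logistic(size=n)             # median-zero error, as the model requires
y = (x1 + 2.0 * x2 + u > 0).astype(float)

def score(b):
    """Maximum score objective with beta = (1, b) scale normalization."""
    pred = (x1 + b * x2 >= 0)
    return np.mean((2 * y - 1) * (2 * pred - 1))

grid = np.linspace(-1.0, 5.0, 601)
b_hat = grid[np.argmax([score(b) for b in grid])]
```

The objective is a step function of `b`, which is the source of the cube-root-n rate mentioned in the abstract and the reason the standard bootstrap fails here, motivating subsampling-based inference.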

19.
This paper concerns identification and estimation of a finite-dimensional parameter in a panel data model under nonignorable sample attrition. Attrition can depend on second period variables which are unobserved for the attritors, but an independent refreshment sample from the marginal distribution of the second period values is available. This paper shows that under a quasi-separability assumption, the model implies a set of conditional moment restrictions where the moments contain the attrition function as an unknown parameter. This formulation leads to (i) a simple proof of identification under strictly weaker conditions than those in the existing literature and, more importantly, (ii) a sieve-based root-n consistent estimate of the finite-dimensional parameter of interest. These methods are applicable to both linear and nonlinear panel data models with endogenous attrition and analogous methods are applicable to situations of endogenously missing data in a single cross-section. The theory is illustrated with a simulation exercise, using Current Population Survey data where a panel structure is introduced by the rotation group feature of the sampling process.

20.
A popular macroeconomic forecasting strategy utilizes many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude dynamic stochastic general equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we use the linear opinion pool to combine inflation forecast densities from many vector autoregressions (VARs) and a policymaking DSGE model. The DSGE model receives a substantial weight in the pool (at short horizons) provided the VAR components exclude structural breaks. In this case, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE considerably, but produces well-calibrated forecast densities for inflation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  Beijing ICP License No. 09084417