Similar Documents
 20 similar documents found (search time: 31 ms)
1.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion‐i‐Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in the level and in the time trend, Carrion‐i‐Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175] showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the proposed statistics are derived under the null hypothesis and are shown to be normal. We show by simulations that our suggested tests generally perform well in finite samples, the modified test being the exception. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we find evidence of stationarity once a structural break and cross‐sectional dependence are accommodated.
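For intuition, here is a minimal sketch (not the paper's full procedure) of the KPSS-type level-stationarity statistic underlying Hadri-style tests, computed for each unit under a known level break and averaged across the panel; the iid long-run variance estimate, the simulated data, and all names are illustrative assumptions.

```python
import numpy as np

def kpss_stat_level_break(y, k):
    """KPSS-type level-stationarity statistic when the mean shifts at date k:
    residuals are deviations from the regime-specific means."""
    e = np.concatenate([y[:k] - y[:k].mean(), y[k:] - y[k:].mean()])
    T = len(y)
    s2 = (e ** 2).mean()                 # simple long-run variance estimate (iid errors)
    return (np.cumsum(e) ** 2).sum() / (T ** 2 * s2)

def panel_average_stat(panel, k):
    """Hadri-style panel statistic: the cross-sectional average of the individual
    statistics, which the paper standardizes with exactly derived moments."""
    return np.mean([kpss_stat_level_break(y, k) for y in panel])

rng = np.random.default_rng(0)
# stationary units, half of them with a level break at t = 100
panel = [rng.normal(size=200) + (0.0 if i % 2 else 1.0) * (np.arange(200) >= 100)
         for i in range(10)]
print(panel_average_stat(panel, k=100))
```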

2.
We consider the problem of estimating and testing for multiple breaks in a single‐equation framework with regressors that are endogenous, i.e. correlated with the errors. We show that even in the presence of endogenous regressors it is still preferable, in most cases, to simply estimate the break dates and test for structural change using the usual ordinary least squares (OLS) framework. Except for some knife‐edge cases, it delivers estimates of the break dates with higher precision and tests with higher power compared to those obtained using an instrumental variable (IV) method. Also, the OLS method avoids potential weak identification problems caused by weak instruments. To illustrate the relevance of our theoretical results, we consider the stability of the New Keynesian hybrid Phillips curve. IV‐based methods only provide weak evidence of instability. On the other hand, OLS‐based ones strongly indicate a change in 1991:Q1 and that after this date the model loses all explanatory power. Copyright © 2013 John Wiley & Sons, Ltd.
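A bare-bones sketch of OLS break-date estimation for a single break: choose the date that minimizes the total sum of squared residuals from regime-wise OLS fits. The trimming fraction and the simulated design are illustrative choices, not the paper's.

```python
import numpy as np

def ols_ssr(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def estimate_break_date(y, X, trim=0.15):
    """Pick the break date minimizing the total SSR of the two regime-wise fits."""
    T = len(y)
    lo, hi = int(trim * T), int((1 - trim) * T)
    ssrs = {k: ols_ssr(y[:k], X[:k]) + ols_ssr(y[k:], X[k:]) for k in range(lo, hi)}
    return min(ssrs, key=ssrs.get)

rng = np.random.default_rng(1)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta1, beta2 = np.array([0.0, 1.0]), np.array([1.0, -1.0])
y = np.r_[X[:120] @ beta1, X[120:] @ beta2] + rng.normal(scale=0.5, size=T)
print(estimate_break_date(y, X))   # should be near 120
```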

3.
We propose methods for constructing confidence sets for the timing of a break in level and/or trend that have asymptotically correct coverage for both I(0) and I(1) processes. These are based on inverting a sequence of tests for the break location, evaluated across all possible break dates. We separately derive locally best invariant tests for the I(0) and I(1) cases; under their respective assumptions, the resulting confidence sets provide correct asymptotic coverage regardless of the magnitude of the break. We suggest use of a pre-test procedure to select between the I(0)- and I(1)-based confidence sets, and Monte Carlo evidence demonstrates that our recommended procedure achieves good finite sample properties in terms of coverage and length across both I(0) and I(1) environments. An application using US macroeconomic data is provided which further evinces the value of these procedures.
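The test-inversion idea can be sketched as follows: for every candidate break date, test "the break is here" and keep the dates that are not rejected. The statistic and critical value below are placeholders for illustration, not the locally best invariant tests the paper derives.

```python
import numpy as np

def ssr_level_break(y, k):
    """SSR when the series mean shifts at date k."""
    return ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()

def confidence_set_for_break(y, crit, trim=0.1):
    """Invert a sequence of break-location tests: retain every candidate date
    whose statistic does not exceed `crit` (an illustrative cutoff here)."""
    T = len(y)
    dates = range(int(trim * T), int((1 - trim) * T))
    ssr = np.array([ssr_level_break(y, k) for k in dates])
    stat = (ssr - ssr.min()) / y.var()      # illustrative statistic
    return [k for k, s in zip(dates, stat) if s <= crit]

rng = np.random.default_rng(2)
y = np.r_[rng.normal(0, 1, 100), rng.normal(2, 1, 100)]   # level break at t = 100
print(confidence_set_for_break(y, crit=3.0))
```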

4.
This paper compares the forecasting performance of models that have been proposed for forecasting in the presence of structural breaks. They differ in their treatment of the break process, the model applied in each regime and the out‐of‐sample probability of a break. In an extensive empirical evaluation, we demonstrate the presence of breaks and their importance for forecasting. We find no single model that consistently works best in the presence of breaks. In many cases, the formal modeling of the break process is important in achieving a good forecast performance. However, there are also many cases where rolling window forecasts perform well. Copyright © 2014 John Wiley & Sons, Ltd.

5.
We develop an easy-to-implement method for forecasting a stationary autoregressive fractionally integrated moving average (ARFIMA) process subject to structural breaks with unknown break dates. We show that an ARFIMA process subject to a mean shift and a change in the long memory parameter can be well approximated by an autoregressive (AR) model and suggest using an information criterion (AIC or Mallows’ Cp) to choose the order of the approximate AR model. Our method avoids the issue of estimation inaccuracy of the long memory parameter and the issue of spurious breaks in finite samples. Insights from our theoretical analysis are confirmed by Monte Carlo experiments, through which we also find that our method provides a substantial improvement over existing prediction methods. An empirical application to the realized volatility of three exchange rates illustrates the usefulness of our forecasting procedure. The empirical success of the HAR-RV model can be explained, from an econometric perspective, by our theoretical and simulation results.
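A minimal version of the strategy described above: fit AR(p) for a range of orders, pick p by AIC (the paper also allows Mallows' Cp), and forecast from the selected model. The simulated series, with a mean shift standing in for a structural break, is purely illustrative.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_forecast_aic(y, max_lag=20, horizon=10):
    """Fit AR(p) for p = 1..max_lag, select p by AIC, forecast `horizon` steps."""
    fits = {p: AutoReg(y, lags=p).fit() for p in range(1, max_lag + 1)}
    p = min(fits, key=lambda q: fits[q].aic)
    fc = fits[p].predict(start=len(y), end=len(y) + horizon - 1)
    return p, fc

rng = np.random.default_rng(3)
e = rng.normal(size=400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.6 * y[t - 1] + e[t]
y[200:] += 1.0                      # a mean shift standing in for a structural break
p, fc = ar_forecast_aic(y)
print(p, fc[:3])
```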

6.
This paper proposes a model to predict recessions that accounts for non‐linearity and a structural break when the spread between long‐ and short‐term interest rates is the leading indicator. Estimation and model selection procedures allow us to estimate and identify time‐varying non‐linearity in a VAR. The structural break threshold VAR (SBTVAR) predicts the timing of recessions better than models with a constant threshold or with only a break. Using real‐time data, the SBTVAR with the spread as leading indicator correctly anticipates the timing of the 2001 recession. Copyright © 2006 John Wiley & Sons, Ltd.
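The threshold-selection step can be illustrated in a univariate setting; the full SBTVAR additionally embeds a structural break inside a VAR, which this sketch omits. All names and the simulated data are illustrative, with the variable z standing in for the term spread.

```python
import numpy as np

def fit_threshold_ar(y, z, candidates):
    """Two-regime AR(1) for y where the regime is set by whether the lagged
    threshold variable z crosses c; c is chosen by least squares over a grid."""
    best = None
    for c in candidates:
        low = z[:-1] <= c
        ssr = 0.0
        for mask in (low, ~low):
            X = np.column_stack([np.ones(mask.sum()), y[:-1][mask]])
            beta, *_ = np.linalg.lstsq(X, y[1:][mask], rcond=None)
            ssr += ((y[1:][mask] - X @ beta) ** 2).sum()
        if best is None or ssr < best[0]:
            best = (ssr, c)
    return best[1]

rng = np.random.default_rng(5)
T = 300
z = rng.normal(size=T)                        # stand-in for the term spread
y = np.zeros(T)
for t in range(1, T):
    phi = 0.2 if z[t - 1] <= -0.5 else 0.8    # true threshold at -0.5
    y[t] = phi * y[t - 1] + rng.normal()
print(fit_threshold_ar(y, z, candidates=np.quantile(z, np.linspace(0.15, 0.85, 50))))
```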

7.
Statistical tolerance intervals for discrete distributions are widely employed for assessing the magnitude of discrete characteristics of interest in applications like quality control, environmental monitoring, and the validation of medical devices. For such data problems, characterizing extreme counts or outliers is also of considerable interest. These applications typically use traditional discrete distributions, like the Poisson, binomial, and negative binomial. The discrete Pareto distribution is an alternative yet flexible model for count data that are heavily right‐skewed. Our contribution is the development of statistical tolerance limits for the discrete Pareto distribution as a strategy for characterizing the extremeness of observed counts in the tail. We discuss the coverage probabilities of our procedure in the broader context of known coverage issues for statistical intervals for discrete distributions. We address this issue by applying a bootstrap calibration to the confidence level of the asymptotic confidence interval for the discrete Pareto distribution's parameter. We illustrate our procedure on a dataset involving cyst formation in mice kidneys.
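A stylized sketch of bootstrap-calibrated upper tolerance limits. The parameterization P(X > x) = (1 + x)^(-alpha) is an assumption for illustration (it gives a polynomial, Pareto-type tail on the positive integers), as are the grid-search MLE and the deliberately small simulation settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed parameterization (illustrative): P(X > x) = (1 + x)**(-alpha), x = 0, 1, ...
def sample(alpha, n):
    return np.ceil(rng.uniform(size=n) ** (-1.0 / alpha) - 1.0).astype(int)

def mle_alpha(x, grid=np.linspace(0.05, 5.0, 300)):
    a = grid[:, None]       # vectorized log-likelihood over the grid
    ll = np.log(np.maximum(x ** -a - (x + 1.0) ** -a, 1e-300)).sum(axis=1)
    return grid[int(np.argmax(ll))]

def quantile(alpha, p):     # smallest k with F(k) >= p
    return int(np.ceil((1.0 - p) ** (-1.0 / alpha) - 1.0))

def upper_tol_limit(x, p, conf, B=200):
    """Upper tolerance limit for content p: the p-quantile at a lower
    parametric-bootstrap bound for alpha (smaller alpha = heavier tail)."""
    a_hat = mle_alpha(x)
    boot = [mle_alpha(sample(a_hat, len(x))) for _ in range(B)]
    return quantile(np.quantile(boot, 1.0 - conf), p)

def calibrated_conf(a_hat, n, p, target=0.95, levels=(0.85, 0.90, 0.95), R=50):
    """Bootstrap calibration: simulate from the fitted model and pick the working
    confidence level whose realized coverage is closest to the nominal target."""
    true_q = quantile(a_hat, p)
    cov = [np.mean([upper_tol_limit(sample(a_hat, n), p, c, B=60) >= true_q
                    for _ in range(R)]) for c in levels]
    return levels[int(np.argmin(np.abs(np.array(cov) - target)))]

x = sample(alpha=1.5, n=100)
print(upper_tol_limit(x, p=0.95, conf=calibrated_conf(mle_alpha(x), len(x), 0.95)))
```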

8.
We introduce a modified conditional logit model that takes account of uncertainty associated with mis‐reporting in revealed preference experiments estimating willingness‐to‐pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non‐pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, which take account of mis‐reporting, are substantially revised downwards. We find a significant proportion of respondents mis‐reporting in favour of the non‐pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.

9.
This paper develops methods for estimating and forecasting in Bayesian panel vector autoregressions of large dimensions with time‐varying parameters and stochastic volatility. We exploit a hierarchical prior that takes into account possible pooling restrictions involving both VAR coefficients and the error covariance matrix, and propose a Bayesian dynamic learning procedure that controls for various sources of model uncertainty. We tackle computational concerns by means of a simulation‐free algorithm that relies on analytical approximations to the posterior. We use our methods to forecast inflation rates in the eurozone and show that these forecasts are superior to alternative methods for large vector autoregressions.

10.
Standard model‐based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit‐level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit‐level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set which is suspected to contain an outlier confirms this suspicion, correctly identifies the outlier and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M‐quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as the other procedures in terms of bias, variability and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve over the other methods. This dual (Bayes and frequentist) dominance should make our procedure attractive to practitioners of small area estimation, Bayesian and frequentist alike.

11.
This paper considers Markov error‐correction (MEC) models in which deviations from the long‐run equilibrium are characterized by different rates of adjustment. To motivate our analysis and illustrate the various issues involved, our discussion is structured around the analysis of the long‐run properties of US stock prices and dividends. It is shown that the MEC model is flexible enough to account for situations where deviations from the long‐run equilibrium are nonstationary in one of the states of nature and allows us to test for such a possibility. An empirical specification procedure to establish the existence of MEC adjustment in practice is also presented. This is based on a multi‐step test procedure that exploits the differences between the global and local characteristics of systems with MEC adjustment. Copyright © 2004 John Wiley & Sons, Ltd.

12.
We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso‐type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast realized covariance matrices almost as precisely as if we had known the true driving dynamics in advance. We next investigate the sources of these driving dynamics as well as the performance of the proposed models for forecasting the realized covariance matrices of the 30 Dow Jones stocks. We find that the dynamics are not stable as the data are aggregated from the daily to lower frequencies. Furthermore, we are able to beat our benchmark by a wide margin. Finally, we investigate the economic value of our forecasts in a portfolio selection exercise and find that in certain cases an investor is willing to pay a considerable amount in order to get access to our forecasts. Copyright © 2016 John Wiley & Sons, Ltd.
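A compact sketch of the general approach: stack the half-vectorized covariance matrices and fit a VAR(1) with an L1 penalty (sklearn's Lasso is used here for convenience; the paper's estimator and theory are richer). Note that forecasting the vech directly does not by itself guarantee a positive semi-definite forecast.

```python
import numpy as np
from sklearn.linear_model import Lasso

def vech(S):
    i, j = np.tril_indices(S.shape[0])
    return S[i, j]

def fit_lasso_var(cov_series, lam=0.01):
    """VAR(1) in the vech of the realized covariance matrices, with an
    L1 penalty shrinking most lag coefficients to exactly zero."""
    Y = np.array([vech(S) for S in cov_series])
    return Lasso(alpha=lam, max_iter=10000).fit(Y[:-1], Y[1:])

rng = np.random.default_rng(6)
T, n = 300, 5
covs, S = [], np.eye(n)
for _ in range(T):                  # a persistent, noisy sequence of PSD matrices
    X = rng.normal(size=(40, n)) @ np.linalg.cholesky(S).T
    S = 0.9 * S + 0.1 * (X.T @ X / 40)
    covs.append(S.copy())
model = fit_lasso_var(covs)
print("nonzero coefficients:", np.sum(model.coef_ != 0), "of", model.coef_.size)
one_step = model.predict(vech(covs[-1])[None, :])   # one-step-ahead vech forecast
```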

13.
This paper examines the asymptotic and finite‐sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (Econometrica 1989; 57: 307–333). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out‐of‐sample version of the two‐step testing procedure recommended by Vuong but also show that an exact one‐step procedure is sometimes applicable. When the models are overlapping, we provide a simple‐to‐use fixed‐regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two‐step procedure is conservative, while the one‐step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting US real gross domestic product growth. Copyright © 2013 John Wiley & Sons, Ltd.

14.
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear, non‐Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis–Hastings algorithm or in importance sampling. Our method provides a computationally more efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic intensity and stochastic volatility models based on Ornstein–Uhlenbeck processes. For our empirical study, we analyse the performance of our methods for corporate default panel data and stock index returns. Copyright © 2016 John Wiley & Sons, Ltd.
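The flavor of the method can be conveyed with a generic independence Metropolis–Hastings sampler whose Student-t proposal comes from a Laplace approximation at the posterior mode; the paper's proposal construction for state space models is considerably more elaborate, so this is only a sketch on a toy target.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_t

def independence_mh(log_post, theta0, n_draws=5000, df=5, seed=0):
    """Independence Metropolis-Hastings with a Student-t proposal built from a
    Laplace approximation (mode + BFGS inverse-Hessian) of the posterior."""
    rng = np.random.default_rng(seed)
    opt = minimize(lambda th: -log_post(th), theta0, method="BFGS")
    prop = multivariate_t(loc=opt.x, shape=opt.hess_inv, df=df)
    theta, lp = opt.x, log_post(opt.x)
    draws, accepted = [], 0
    cand = prop.rvs(size=n_draws, random_state=rng)
    logu = np.log(rng.uniform(size=n_draws))
    for c, lu in zip(cand, logu):
        lp_c = log_post(c)
        # independence-MH acceptance: target ratio divided by proposal ratio
        if lu < (lp_c - lp) - (prop.logpdf(c) - prop.logpdf(theta)):
            theta, lp = c, lp_c
            accepted += 1
        draws.append(theta)
    return np.array(draws), accepted / n_draws

# toy target: a correlated bivariate normal "posterior"
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda th: -0.5 * th @ Sigma_inv @ th
draws, acc = independence_mh(log_post, theta0=np.ones(2))
print(acc, draws.mean(axis=0))
```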

15.
Forecasts play a critical role at inflation-targeting central banks, such as the Bank of England. Breaks in the forecast performance of a model can potentially incur important policy costs. However, commonly-used statistical procedures implicitly place a lot of weight on type I errors (or false positives), which results in a relatively low power of the tests to identify forecast breakdowns in small samples. We develop a procedure which aims to capture the policy cost of missing a break. We use data-based rules to find the test size that optimally trades off the costs associated with false positives with those that can result from a break going undetected for too long. In so doing, we also explicitly study forecast errors as a multivariate system. The covariance between forecast errors for different series, although often overlooked in the forecasting literature, not only enables us to consider testing in a multivariate setting, but also increases the test power. As a result, we can tailor our choice of the critical values for each series not only to the in-sample properties of each series, but also to the way in which the series of forecast errors covary.

16.
When location shifts occur, cointegration-based equilibrium-correction models (EqCMs) face forecasting problems. We consider alleviating such forecast failure by updating, intercept corrections, differencing, and estimating the future progress of an ‘internal’ break. Updating leads to a loss of cointegration when an EqCM suffers an equilibrium-mean shift, but helps when collinearities are changed by an ‘external’ break with the EqCM staying constant. Both mechanistic corrections help compared to retaining a pre-break estimated model, but an estimated model of the break process could outperform. We apply the approaches to EqCMs for UK M1, compared with updating a learning function as the break evolves.
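Intercept correction is the simplest of the mechanistic fixes mentioned above: shift the model forecast by the average of recent forecast errors, which offsets an unmodelled equilibrium-mean shift. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def intercept_corrected_forecast(model_forecast, recent_errors, window=4):
    """Intercept correction: add the average of the most recent forecast errors
    to the model forecast, offsetting an unmodelled location shift."""
    return model_forecast + np.mean(recent_errors[-window:])

# hypothetical numbers: a model still forecasting 2.0 after the mean shifted up ~0.5
errors = np.array([0.46, 0.55, 0.49, 0.52])       # actual minus forecast
print(intercept_corrected_forecast(2.0, errors))  # roughly 2.5
```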

17.
Graph‐theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data‐based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data‐generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph‐theoretic search algorithms.
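A schematic of the residual-bootstrap idea, with the graph search abstracted as a user-supplied function (in practice a PC-style algorithm from a causal-discovery library, applied to VAR residuals). The toy search and fit functions below are placeholders, not the paper's algorithm.

```python
import numpy as np

def edge_bootstrap(data, search_fn, fit_resid_fn, B=200, seed=0):
    """Residual bootstrap for causal search: refit the model, resample residuals
    with replacement, rebuild pseudo-data, rerun the search, and tally how often
    each edge appears. `search_fn(data) -> set of (i, j) edges`;
    `fit_resid_fn(data) -> (fitted, residuals)`."""
    rng = np.random.default_rng(seed)
    fitted, resid = fit_resid_fn(data)
    counts = {}
    for _ in range(B):
        idx = rng.integers(0, len(resid), size=len(resid))
        pseudo = fitted + resid[idx]
        for edge in search_fn(pseudo):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / B for e, c in counts.items()}

# toy stand-ins: "search" declares an edge (i, j) when corr(x_i, x_j) > 0.3
def toy_search(d):
    C = np.corrcoef(d.T)
    return {(i, j) for i in range(d.shape[1]) for j in range(d.shape[1])
            if i < j and C[i, j] > 0.3}

def toy_fit(d):
    mu = d.mean(axis=0)
    return mu, d - mu

rng = np.random.default_rng(7)
x = rng.normal(size=200)
data = np.column_stack([x, 0.8 * x + rng.normal(size=200), rng.normal(size=200)])
print(edge_bootstrap(data, toy_search, toy_fit))
```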

18.
A Bayesian method for outlier‐robust estimation of multinomial choice models is presented. The method can be used for both correlated and uncorrelated choice alternatives and guarantees robustness towards outliers in the dependent and independent variables. To account for outliers in the response direction, the fat‐tailed multivariate Laplace distribution is used. Leverage points are handled via a shrinkage procedure. A simulation study shows that estimation of the model parameters is less influenced by outliers compared to non‐robust alternatives. An analysis of margarine scanner data shows how our method can be used for better pricing decisions. Copyright © 2015 John Wiley & Sons, Ltd.

19.
This study seeks to advance the bottom‐line mentality literature by exploring an antecedent and outcome of employee bottom‐line mentality. We build and test a moderated‐mediation model by arguing that the personality trait of Machiavellianism promotes an employee's adoption of a bottom‐line mentality. Moreover, drawing on trait activation theory, we argue that this relationship is fully activated when the employee perceives that the organisation endorses a bottom‐line mentality. To expand our theoretical model, we also suggest that employee bottom‐line mentality inhibits organisational citizenship behaviour directed towards co‐workers. Lastly, we investigate whether an employee's perception of an organisation's bottom‐line mentality conditionally moderates the indirect effect of Machiavellianism on organisational citizenship behaviour directed towards co‐workers through the mediated mechanism of employee bottom‐line mentality. Our theoretical model is tested across two distinct studies. Study 1, a field study conducted within a variety of organisations, provides evidence for our initial predictions (Hypotheses 1 and 2). Study 2, a multisource field study conducted in multiple industries, replicates and extends the findings from Study 1 by providing evidence for the entire moderated‐mediation model. We find support for our hypothesised model across both studies. Implications for theory and practice are discussed, and suggestions for future research are identified.

20.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, through a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between coclustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example and demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
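The postprocessing step can be sketched as a mean-preserving affine projection of the MCMC draws onto a target covariance; how the target (sandwich-type) covariance is computed from the survey design is application-specific and omitted here, so the target matrix below is a stand-in.

```python
import numpy as np

def project_draws(draws, target_cov):
    """Mean-preserving affine adjustment of MCMC draws so that their sample
    covariance matches `target_cov` (e.g. a design-based sandwich estimate):
    adjusted = mean + (draws - mean) @ A.T, with A = L_target @ inv(L_sample)."""
    mean = draws.mean(axis=0)
    L_s = np.linalg.cholesky(np.cov(draws, rowvar=False))
    L_t = np.linalg.cholesky(target_cov)
    A = L_t @ np.linalg.inv(L_s)
    return mean + (draws - mean) @ A.T

rng = np.random.default_rng(8)
draws = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=5000)
target = np.array([[2.0, 0.5], [0.5, 1.0]])     # stand-in sandwich covariance
adj = project_draws(draws, target)
print(np.cov(adj, rowvar=False).round(2))       # matches the target
```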
