Similar Literature
20 similar documents found
1.
Journal of Econometrics, 2002, 109(2): 341–363
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan–Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
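The abstract does not spell out the criterion it recommends, but the Hannan–Quinn rule it refers to is standard. As an illustrative sketch only (a univariate AR fitted by OLS rather than the paper's VAR setting; the function name and simulated data are my own), lag selection by HQ can be done as follows:

```python
import numpy as np

def hq_lag_selection(y, max_lag):
    """Select an AR lag length by minimizing the Hannan-Quinn criterion,
    HQ(p) = log(sigma_hat^2) + 2*p*log(log(T))/T."""
    T = len(y)
    best_p, best_hq = 0, np.inf
    for p in range(1, max_lag + 1):
        # Regressor matrix for an AR(p) fit by OLS: constant plus p lags.
        X = np.column_stack([np.ones(T - p)] +
                            [y[p - j:T - j] for j in range(1, p + 1)])
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        sigma2 = resid @ resid / (T - p)
        hq = np.log(sigma2) + 2 * p * np.log(np.log(T)) / T
        if hq < best_hq:
            best_p, best_hq = p, hq
    return best_p

# Simulated AR(2) data; with T = 500 the criterion typically recovers p = 2.
rng = np.random.default_rng(0)
T = 500
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
print(hq_lag_selection(y, max_lag=8))
```

The HQ penalty `2*log(log(T))/T` grows more slowly than BIC's `log(T)/T` but faster than AIC's `2/T`, which is what makes it consistent for order selection while remaining less conservative than BIC.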

2.
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties and we prove its consistency. A Monte Carlo study explores the finite sample performance of this procedure and evaluates the forecasting accuracy of models selected by this procedure. Two empirical applications confirm the usefulness of the model selection procedure proposed here for forecasting.

3.
This paper studies the predictability of cryptocurrency time series. We compare several alternative univariate and multivariate models for point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto-predictors and rely on dynamic model averaging to combine a large set of univariate dynamic linear models and several multivariate vector autoregressive models with different forms of time variation. We find statistically significant improvements in point forecasting when using combinations of univariate models, and in density forecasting when relying on the selection of multivariate models. Both schemes deliver sizable directional predictability.

4.
The performance of information criteria and tests for residual heteroscedasticity for choosing between different models for time‐varying volatility in the context of structural vector autoregressive analysis is investigated. Although it can be difficult to find the true volatility model with the selection criteria, using them is recommended because they can reduce the mean squared error of impulse response estimates substantially relative to a model that is chosen arbitrarily based on the personal preferences of a researcher. Heteroscedasticity tests are found to be useful tools for deciding whether time‐varying volatility is present but do not discriminate well between different types of volatility changes. The selection methods are illustrated by specifying a model for the global market for crude oil.

5.
Many popular methods of model selection involve minimizing a penalized function of the data (such as the maximized log-likelihood or the residual sum of squares) over a set of models. The penalty in the criterion function is controlled by a penalty multiplier λ which determines the properties of the procedure. In this paper, we first review model selection criteria of the simple form “Loss + Penalty” and then propose studying such model selection criteria as functions of the penalty multiplier. This approach can be interpreted as exploring the stability of model selection criteria through what we call model selection curves. It leads to new insights into model selection and new proposals on how to select models. We use the bootstrap to enhance the basic model selection curve and develop convenient numerical and graphical summaries of the results. The methodology is illustrated on two data sets and supported by a small simulation. We show that the new methodology can outperform methods such as AIC and BIC which correspond to single points on a model selection curve.
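The “Loss + Penalty” idea in this abstract is concrete enough to sketch. The following toy example (my own construction, not the paper's implementation) traces which candidate model minimizes `loss + λ·size` as the penalty multiplier λ varies; the resulting step function is the model selection curve the abstract describes, with AIC and BIC corresponding to particular fixed values of λ:

```python
import numpy as np

def selection_curve(losses, sizes, lambdas):
    """For each penalty multiplier lam, return the index of the candidate
    model minimizing losses[i] + lam * sizes[i]."""
    losses = np.asarray(losses, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    return [int(np.argmin(losses + lam * sizes)) for lam in lambdas]

# Three nested candidates: the larger model fits better but pays a bigger penalty.
losses = [120.0, 100.0, 98.0]   # e.g. residual sums of squares
sizes = [1, 2, 3]               # number of parameters
lambdas = [0.0, 5.0, 25.0]
print(selection_curve(losses, sizes, lambdas))  # → [2, 1, 0]
```

As λ sweeps from 0 upward the selected model steps down from the largest to the smallest; a selection that is stable over a wide λ-interval is, in the spirit of the paper, more trustworthy than one that flips at a single criterion's implied λ.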

6.
VAR FORECASTING USING BAYESIAN VARIABLE SELECTION
This paper develops methods for automatic selection of variables in Bayesian vector autoregressions (VARs) using the Gibbs sampler. In particular, I provide computationally efficient algorithms for stochastic variable selection in generic linear and nonlinear models, as well as models of large dimensions. The performance of the proposed variable selection method is assessed in forecasting three major macroeconomic time series of the UK economy. Data‐based restrictions of VAR coefficients can help improve upon their unrestricted counterparts in forecasting, and in many cases they compare favorably to shrinkage estimators. Copyright © 2011 John Wiley & Sons, Ltd.

7.
We propose methods for testing the hypothesis of non-causality at various horizons, as defined in Dufour and Renault (Econometrica 66 (1998), 1099–1125). We study in detail the case of VAR models and we propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the US economy.
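The core device in this abstract, regressing the horizon-h outcome directly on current lags so that non-causality at horizon h reduces to zero restrictions on a linear regression, can be sketched in a bivariate toy case. This is my own illustration under simplifying assumptions (OLS point estimates only, no Wald test or bootstrap, simulated data), not the paper's procedure:

```python
import numpy as np

def horizon_h_coefficients(y, x, h, p):
    """OLS regression of y_{t+h} on a constant and p lags each of y and x.
    Non-causality of x for y at horizon h corresponds to the returned
    x-lag coefficients being jointly zero."""
    T = len(y)
    rows = range(p - 1, T - h)  # times t with all lags and y_{t+h} available
    Z = np.array([[1.0] +
                  [y[t - j] for j in range(p)] +
                  [x[t - j] for j in range(p)] for t in rows])
    target = np.array([y[t + h] for t in rows])
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return beta[1 + p:]  # coefficients on the x lags

# Simulated data where x causes y one period ahead with coefficient 0.8.
rng = np.random.default_rng(1)
T = 400
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.standard_normal()
print(horizon_h_coefficients(y, x, h=1, p=2))
```

At h = 1 the first x-lag coefficient should be close to 0.8 and the second close to zero; running the same regression at larger h is what makes causality horizon-specific.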

8.
This paper develops methods of Bayesian inference in a sample selection model. The main feature of this model is that the outcome variable is only partially observed. We first present a Gibbs sampling algorithm for a model in which the selection and outcome errors are normally distributed. The algorithm is then extended to analyze models that are characterized by nonnormality. Specifically, we use a Dirichlet process prior and model the distribution of the unobservables as a mixture of normal distributions with a random number of components. The posterior distribution in this model can simultaneously detect the presence of selection effects and departures from normality. Our methods are illustrated using some simulated data and an abstract from the RAND health insurance experiment.

9.
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper, we develop a new time-varying parameter model which permits cointegration. We use a specification which allows for the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP–VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving the Fisher effect.

10.
Nine macroeconomic variables are forecast in a real-time scenario using a variety of flexible specification, fixed specification, linear, and nonlinear econometric models. All models are allowed to evolve through time, and our analysis focuses on model selection and performance. In the context of real-time forecasts, flexible specification models (including linear autoregressive models with exogenous variables and nonlinear artificial neural networks) appear to offer a useful and viable alternative to less flexible fixed specification linear models for a subset of the economic variables which we examine, particularly at forecast horizons greater than 1-step ahead. We speculate that one reason for this result is that the economy is evolving (rather slowly) over time. This feature cannot easily be captured by fixed specification linear models, however, and manifests itself in the form of evolving coefficient estimates. We also provide additional evidence supporting the claim that models which ‘win’ based on one model selection criterion (say a squared error measure) do not necessarily win when an alternative selection criterion is used (say a confusion rate measure), thus highlighting the importance of the particular cost function which is used by forecasters and ‘end-users’ to evaluate their models. A wide variety of different model selection criteria and statistical tests are used to illustrate our findings.

11.
In this paper we describe methods for predicting distributions of outcome gains in the framework of a latent variable selection model. We describe such procedures for Student‐t selection models and a finite mixture of Gaussian selection models. Importantly, our algorithms for fitting these models are simple to implement in practice, and also permit learning to take place about the non‐identified cross‐regime correlation parameter. Using data from High School and Beyond, we apply our methods to determine the impact of dropping out of high school on a math test score taken at the senior year of high school. Our results show that selection bias is an important feature of this data, that our beliefs about this non‐identified correlation are updated from the data, and that generalized models of selectivity offer an improvement over the ‘textbook’ Gaussian model. Further, our results indicate that on average dropping out of high school has a large negative impact on senior‐year test scores. However, for those individuals who actually drop out of high school, the act of dropping out of high school does not have a significantly negative impact on test scores. This suggests that policies aimed at keeping students in school may not be as beneficial as first thought, since those individuals who must be induced to stay in school are not the ones who benefit significantly (in terms of test scores) from staying in school. Copyright © 2004 John Wiley & Sons, Ltd.

12.
This paper develops estimators for dynamic microeconomic models with serially correlated unobserved state variables using sequential Monte Carlo methods to estimate the parameters and the distribution of the unobservables. If persistent unobservables are ignored, the estimates can be subject to a dynamic form of sample selection bias. We focus on single‐agent dynamic discrete‐choice models and dynamic games of incomplete information. We propose a full‐solution maximum likelihood procedure and a two‐step method and use them to estimate an extended version of the capital replacement model of Rust with the original data and in a Monte Carlo study. Copyright © 2015 John Wiley & Sons, Ltd.

13.
We propose a new methodology for ranking in probability the commonly proposed drivers of inflation in the new Keynesian model. The approach is based on Bayesian model selection among restricted vector autoregressive (VAR) models, each of which embodies only one or none of the candidate variables as the driver. Simulation experiments suggest that our procedure is superior to the previously used conventional pairwise Granger causality tests in detecting the true driver. Empirical results lend little support to labour share, output gap or unemployment rate as the driver of US inflation.

14.
We consider a class of random effects models for clustered multivariate binary data based on the threshold crossing technique of a latent random vector. Components of this latent vector are assumed to have a Laird–Ware structure. However, in place of their Gaussian assumptions, any specified class of multivariate distribution is allowed for the random effects, and the error vector is allowed to have any strictly positive pdf. A well known member of this class of models is the multivariate probit model with random effects. We investigate sufficient and necessary conditions for the existence of maximum likelihood estimates for the location and the association parameters. Implications of our results are illustrated through some hypothetical examples.

15.
The prevalent estimation methods for the sample selection model rely heavily on parametric assumptions and are sensitive to departures from the underlying parametric assumptions [see, e.g., Goldberger (1983)]. We propose an alternative estimation method, the corrected maximum likelihood estimate, which is consistent for the slope vector in the outcome equation up to a multiplicative scalar, even though the parametric model on which the estimate is based might be misspecified. As an important corollary, it follows from our result that Olsen's (1980) corrected ordinary least squares estimate is consistent if the outcome equation is linear, without requiring Olsen's assumptions on the joint error distribution.

16.
This paper discusses pooling versus model selection for nowcasting with large datasets in the presence of model uncertainty. In practice, nowcasting a low‐frequency variable with a large number of high‐frequency indicators should account for at least two data irregularities: (i) unbalanced data with missing observations at the end of the sample due to publication delays; and (ii) different sampling frequencies of the data. Two model classes suited in this context are factor models based on large datasets and mixed‐data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst other things, the factor estimation method and the number of factors, lag length and indicator selection. Thus there are many sources of misspecification when selecting a particular model, and an alternative would be pooling over a large set of different model specifications. We evaluate the relative performance of pooling and model selection for nowcasting quarterly GDP for six large industrialized countries. We find that the nowcast performance of single models varies considerably over time, in line with the forecasting literature. Model selection based on sequential application of information criteria can outperform benchmarks. However, the results highly depend on the selection method chosen. In contrast, pooling of nowcast models provides an overall very stable nowcast performance over time. Copyright © 2012 John Wiley & Sons, Ltd.

17.
This paper focuses on the dynamic misspecification that characterizes the class of small‐scale New Keynesian models currently used in monetary and business cycle analysis, and provides a remedy for the typical difficulties these models have in accounting for the rich contemporaneous and dynamic correlation structure of the data. We suggest using a statistical model for the data as a device through which it is possible to adapt the econometric specification of the New Keynesian model such that the risk of omitting important propagation mechanisms is kept under control. A pseudo‐structural form is built from the baseline system of Euler equations by forcing the state vector of the system to have the same dimension as the state vector characterizing the statistical model. The pseudo‐structural form gives rise to a set of cross‐equation restrictions that do not penalize the autocorrelation structure and persistence of the data. Standard estimation and evaluation methods can be used. We provide an empirical illustration based on USA quarterly data and a small‐scale monetary New Keynesian model.

18.
This paper proposes downside risk measure models in portfolio selection that capture uncertainties both in distribution and in parameters. The worst-case distribution with given information on the mean value and the covariance matrix is used, together with ellipsoidal and polytopic uncertainty sets, to build up this type of downside risk model. As an application of the models, the tracking error portfolio selection problem is considered. By lifting the vector variables to positive semidefinite matrix variables, we obtain semidefinite programming formulations of the robust tracking portfolio models. Numerical results are presented in tracking SSE50 of the Shanghai Stock Exchange. Compared with the tracking error variance portfolio model and the equally weighted strategy, the proposed models are more stable, have better accumulated wealth and have much better Sharpe ratio in the investment period for the majority of observed instances.

19.
We review some results on the analysis of longitudinal data or, more generally, of repeated measures via linear mixed models starting with some exploratory statistical tools that may be employed to specify a tentative model. We follow with a summary of inferential procedures under a Gaussian set‐up and then discuss different diagnostic methods focusing on residual analysis but also addressing global and local influence. Based on the interpretation of diagnostic plots related to three types of residuals (marginal, conditional and predicted random effects) as well as on other tools, we proceed to identify remedial measures for possible violations of the proposed model assumptions, ranging from fine‐tuning of the model to the use of elliptically symmetric or skew‐elliptical linear mixed models as well as of robust estimation methods. We specify many results available in the literature in a unified notation and highlight those with greater practical appeal. In each case, we discuss the availability of model diagnostics as well as of software and give general guidelines for model selection. We conclude with analyses of three practical examples and suggest further directions for research.

20.
This paper examines two methods of modeling binary choice with social interactions: models assuming homogeneous rational expectations and models using subjective data on expectations. Exploiting a unique survey conducted during the 1996 US presidential election that was designed to study voting behavior under social context, we find that in various model specifications using subjective expectations consistently improves models' goodness‐of‐fit; and that subjective expectations are not rational as formulated by Brock and Durlauf. Specifically, members' characteristics are individually important in forming expectations. We also include correlated effect in the rational expectation model. This extension provides a remedy to the selection issues that often arise in social interaction models. Copyright © 2008 John Wiley & Sons, Ltd.

