Similar Literature
20 similar records found.
1.
Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are ‘optimal’ according to a given score and assess when their out-of-sample accuracy is superior to alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that only when a predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward, will this approach to forecasting reap benefits. Subject to this compatibility, however, the superiority of the optimal forecast will be greater, the greater is the degree of misspecification. We explore these issues under a range of different scenarios and using both artificially simulated and empirical data.
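As a concrete illustration of how such scores operate, the following sketch (our toy setup, not the paper's designs) evaluates two misspecified Gaussian predictives against fat-tailed t(5) data using the average log score and the closed-form Gaussian CRPS:

```python
import numpy as np
from scipy.stats import norm

def log_score(mu, sigma, y):
    """Negative log score (lower is better) of a Gaussian predictive at y."""
    return -norm.logpdf(y, loc=mu, scale=sigma)

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive (lower is better)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

rng = np.random.default_rng(0)
y = rng.standard_t(df=5, size=2000)          # true DGP: Student-t (fat tails)

# Two (misspecified) Gaussian predictives differing only in scale
for sigma in (1.0, np.sqrt(5 / 3)):          # t(5) has variance 5/3
    print(f"sigma={sigma:.3f}  mean log score={log_score(0, sigma, y).mean():.4f}  "
          f"mean CRPS={crps_gaussian(0, sigma, y).mean():.4f}")
```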

2.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior has the effect of weighting the predictive distributions based on the models with different cointegration vectors into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which depends on only a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.
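A minimal sketch of the general recipe (draw parameters from the posterior, then simulate future paths to build the joint predictive) is given below for a toy AR(1) with a flat prior; the paper's model is a cointegrated VAR with an informative prior, so everything here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: AR(1); the paper's model is a cointegrated VAR, this shows only the recipe
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

X, Y = y[:-1, None], y[1:]
bhat = np.linalg.lstsq(X, Y, rcond=None)[0]
resid = Y - X @ bhat
nu, s2 = len(Y) - 1, resid @ resid / (len(Y) - 1)

h, ndraws = 8, 5000
paths = np.empty((ndraws, h))
for i in range(ndraws):
    sig2 = nu * s2 / rng.chisquare(nu)                            # draw sigma^2 | y
    beta = rng.normal(bhat[0], np.sqrt(sig2 / (X.T @ X)[0, 0]))   # draw beta | sigma^2, y
    ylag = y[-1]
    for j in range(h):                                            # simulate a future path
        ylag = beta * ylag + np.sqrt(sig2) * rng.standard_normal()
        paths[i, j] = ylag

print("joint predictive: pointwise 5/50/95% bands")
print(np.percentile(paths, [5, 50, 95], axis=0).round(2))
```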

3.
This paper compares the predictive performance of five recent models of banks' demand for borrowed reserves using post-sample simulation. The findings indicate that models which incorporate observed nonlinearity and switching in the borrowings-interest rate spread relationship outperform the linear nonswitching model. However, the best performance is obtained from the model in which switching and aggregation are considered in the theoretical derivation. Forecasting the level of borrowed reserves is critical to the FOMC reserve targeting procedure. Hence, a comparison of model robustness and stability using post-sample simulation provides useful information to the FOMC in its search for a reliable borrowed reserves demand model.

4.
Our study focuses on two data sets: the first provides the expenditures on several services for each family, and the second contains socio-demographic variables for the same statistical units. The main aim is to analyze, in a Correspondence Analysis context, the service expenditure of families based on the whole data set under two types of constraints: the global relative expenses for a given service and the global relative expenses for a given socio-demographic category. The relationship between expenditure on social services and the socio-demographic characteristics of families is examined from both an exploratory and a predictive perspective. A new approach is introduced which ensures compliance with the required constraints. Moreover, the procedure yields a table of regression coefficients; this table has useful properties and is easy to interpret. Finally, the performance of the results has been evaluated using computer-based resampling techniques.

5.
Dynamic model averaging (DMA) has become a very useful tool for dealing with two important aspects of time-series analysis, namely, parameter instability and model uncertainty. An important component of DMA is the Kalman filter. It is used to filter out the latent time-varying regression coefficients of the predictive regression of interest, and to produce the model predictive likelihood, which is needed to construct the probability of each model in the model set. To apply the Kalman filter, one must write the model of interest in linear state–space form. In this study, we demonstrate that the state–space representation has implications for out-of-sample prediction performance and the degree of shrinkage. Using Monte Carlo simulations as well as financial data at different sampling frequencies, we document that the way in which the current literature tends to formulate the candidate time-varying parameter predictive regression in linear state–space form ignores empirical features that are often present in the data at hand, namely, predictor persistence and predictor endogeneity. We suggest a straightforward way to account for these features in the DMA setting. Results using the widely applied Goyal and Welch (2008) dataset document that modifying the DMA framework as we suggest has a bearing on equity premium point prediction performance from a statistical as well as an economic viewpoint.
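The Kalman recursions that produce the one-step predictive likelihood used to weight models in DMA can be sketched for a single-predictor time-varying parameter regression as follows (notation and variance settings are ours, and the persistence/endogeneity corrections proposed in the paper are not included):

```python
import numpy as np
from scipy.stats import norm

def tvp_kalman_loglik(y, x, q, r, beta0=0.0, p0=10.0):
    """One-pass Kalman filter for y_t = x_t * beta_t + e_t, beta_t a random walk.
    Returns filtered betas and the sum of one-step predictive log-likelihoods,
    the quantity DMA uses to weight each candidate model."""
    beta, p, loglik, betas = beta0, p0, 0.0, []
    for yt, xt in zip(y, x):
        p_pred = p + q                     # predict: beta_t | t-1
        f = xt * p_pred * xt + r           # predictive variance of y_t
        e = yt - xt * beta                 # one-step forecast error
        loglik += norm.logpdf(e, scale=np.sqrt(f))
        k = p_pred * xt / f                # Kalman gain
        beta += k * e                      # update: beta_t | t
        p = (1 - k * xt) * p_pred
        betas.append(beta)
    return np.array(betas), loglik

rng = np.random.default_rng(2)
T = 300
x = rng.standard_normal(T)
beta_true = np.cumsum(0.05 * rng.standard_normal(T))  # slowly drifting coefficient
y = beta_true * x + rng.standard_normal(T)
betas, ll = tvp_kalman_loglik(y, x, q=0.05**2, r=1.0)
print(f"predictive log-likelihood: {ll:.1f}")
```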

6.
The serious economic crisis that broke out in 2008 exposed the limitations of GDP as a well-being indicator and as a predictive tool for the economy. This created the need for new indicators able to link the economic prosperity of a country to aspects of sustainable development and to externalities, both positive and negative, in the long run. The aim of this paper is to introduce a structured approach that supports the choice or construction of alternative indicators to GDP. The starting point is the definition of what a well-being indicator should actually represent according to the recommendations of the Stiglitz-Sen-Fitoussi Report on the measurement of economic performance and social progress. The paper then introduces a systematic procedure for the analysis of well-being indicators. The phases of this procedure entail checking the indicators' technical properties and their effect on representational efficacy. Finally, some of the most representative well-being indicators drawn from the literature are compared, and a detailed application example is proposed.

7.
In a regression context, consider the difference in expected outcome associated with a particular difference in one of the input variables. If the true regression relationship involves interactions, then this predictive comparison can depend on the values of the other input variables. Therefore, one may wish to consider an average predictive comparison as a target of inference, where the averaging is with respect to the population distribution of the input variables. We consider inferences about such targets, with emphasis on inferential performance when the regression model is misspecified. In particular, in light of the difficulties in dealing with interaction terms in regression models, we examine inferences about average predictive comparisons when additive models are fitted to relationships truly involving pairwise interaction terms. We identify some circumstances where such inferences are consistent despite the model misspecification, notably when the input variables are independent or have a multivariate normal distribution.
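The following sketch (our construction) computes an average predictive comparison for an additive linear fit to data whose true relationship contains a pairwise interaction; with independent inputs, the additive fit recovers the true average comparison, in line with the consistency result described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
u = rng.standard_normal(n)
v = rng.standard_normal(n)                          # independent inputs
y = u + v + 0.5 * u * v + rng.standard_normal(n)    # truth has an interaction

# Fit a (misspecified) additive linear model y ~ u + v
X = np.column_stack([np.ones(n), u, v])
coef = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(u_, v_):
    return coef[0] + coef[1] * u_ + coef[2] * v_

# Average predictive comparison for a one-unit change in u,
# averaging over the population distribution of v
apc_fitted = np.mean(predict(u + 1, v) - predict(u, v))
apc_true = np.mean((u + 1 + v + 0.5 * (u + 1) * v) - (u + v + 0.5 * u * v))
print(f"APC under additive fit: {apc_fitted:.3f}, true APC: {apc_true:.3f}")
```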

8.
Decomposing Granger causality over the spectrum allows us to disentangle potentially different Granger causality relationships over different frequencies. This may yield new and complementary insights compared to traditional versions of Granger causality. In this paper, we compare two existing approaches in the frequency domain, proposed originally by Pierce [Pierce, D. A. (1979). R-squared measures for time series. Journal of the American Statistical Association, 74, 901–910] and Geweke [Geweke, J. (1982). Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association, 77, 304–324], and introduce a new testing procedure for the Pierce spectral Granger causality measure. To provide insights into the relative performance of this test, we study its power properties by means of Monte Carlo simulations. In addition, we apply the methodology in the context of the predictive value of the European production expectation surveys. This predictive content is found to vary widely with the frequency considered, illustrating the usefulness of not restricting oneself to a single overall test statistic.

9.
Peixin Zhao & Liugen Xue, Metrika (2011) 74(2): 231–245
This paper focuses on variable selection for varying coefficient models when some covariates are measured with error. We present a bias-corrected variable selection procedure by combining basis function approximations with shrinkage estimation. With appropriate selection of the tuning parameters, we establish the consistency of the variable selection procedure and derive the optimal convergence rate of the regularized estimators. A simulation study and a real data application are undertaken to assess the finite sample performance of the proposed variable selection procedure.
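A stripped-down sketch of the basis-plus-shrinkage idea, with a polynomial basis and an ordinary Lasso standing in for the paper's basis-function approximation and grouped shrinkage estimator (no measurement-error correction is attempted here):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(11)
n = 800
u = rng.uniform(0, 1, n)                                 # index variable
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = np.sin(2 * np.pi * u) * x1 + rng.standard_normal(n)  # x2 is irrelevant

def basis(u, degree=5):
    """Simple polynomial basis; the paper uses spline-type basis functions."""
    return np.column_stack([u**d for d in range(degree + 1)])

B = basis(u)
# Each covariate's varying coefficient becomes a block of basis-interaction columns
X = np.hstack([B * x1[:, None], B * x2[:, None]])
fit = Lasso(alpha=0.02).fit(X, y)
blocks = fit.coef_.reshape(2, -1)
print("||coef block|| per covariate:", np.linalg.norm(blocks, axis=1).round(3))
```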

10.
The inability of empirical models to forecast exchange rates has given rise to the belief that exchange rates are disconnected from macroeconomic fundamentals. This paper addresses the potential disconnect by endogenously selecting forecast models from a broad set of fundamentals. The procedure shows that exchange rates are not disconnected from fundamentals, but fundamentals vary in their predictive content at different forecast horizons and for different currencies. Performing model selection out-of-sample is challenging. At short horizons, the method cannot outperform a random walk, although the performance is improved at long horizons. These findings are confirmed across currencies and forecast evaluation methods.
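A toy version of endogenous forecast-model selection against a random walk benchmark (simulated data; selection here is by in-sample fit over a rolling window, whereas the paper selects out-of-sample):

```python
import numpy as np

rng = np.random.default_rng(12)
T = 360
z = rng.standard_normal((T, 3))               # candidate fundamentals
dfx = np.zeros(T)                             # exchange rate changes (toy)
dfx[1:] = 0.15 * z[:-1, 0] + rng.standard_normal(T - 1)   # only z0 matters

window, errs = 120, {m: [] for m in ["rw", "selected"]}
for t in range(window, T - 1):
    fc, fit_mse = [], []
    for j in range(3):                        # rolling OLS per single-fundamental model
        b = np.polyfit(z[t - window:t, j], dfx[t - window + 1:t + 1], 1)
        fc.append(np.polyval(b, z[t, j]))
        resid = dfx[t - window + 1:t + 1] - np.polyval(b, z[t - window:t, j])
        fit_mse.append(np.mean(resid**2))
    best = int(np.argmin(fit_mse))            # endogenous model selection
    errs["selected"].append(dfx[t + 1] - fc[best])
    errs["rw"].append(dfx[t + 1] - 0.0)       # random walk: zero-change forecast
for m, e in errs.items():
    print(m, "MSPE:", round(np.mean(np.square(e)), 3))
```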

11.
The ability to anticipate crime, so that new crime events can be dealt with effectively or prevented entirely, has led police forces worldwide to consider predictive policing, which provides predictions of the times and places at risk of crime so that proactive preventative measures can be taken. Ideally, predictive policing models predict crime at a high spatio-temporal resolution while also providing optimal prediction performance. The main objective of this paper is therefore to evaluate the impact of varying the grid resolution, temporal resolution, and historical time frame on prediction performance. To investigate this, we analyse home burglary data from a large city in Belgium and predict new crime events using a range of parameter values, comparing the resulting prediction performances. Given the potential prediction-performance costs associated with prediction at a high spatio-temporal resolution, consideration should be given to balancing practical requirements with performance requirements.
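A sketch of the core bookkeeping (synthetic event data; the hit rate in flagged cells is used as a simple performance proxy, and only the grid resolution is varied here, not the temporal resolution or historical time frame):

```python
import numpy as np

rng = np.random.default_rng(4)
n_events = 20_000
xy = rng.beta(2, 5, size=(n_events, 2)) * 10.0   # clustered burglary locations (km)
day = rng.integers(0, 730, size=n_events)        # two years of events

def hit_rate(cell_km, train_days=365, top_frac=0.05):
    """Count past events per grid cell, flag the top cells as 'high risk',
    and measure the share of later events that fall inside flagged cells."""
    nbins = int(10 / cell_km)
    cells = np.clip((xy // cell_km).astype(int), 0, nbins - 1)
    idx = cells[:, 0] * nbins + cells[:, 1]
    train, test = day < train_days, day >= train_days
    counts = np.bincount(idx[train], minlength=nbins * nbins)
    k = max(1, int(top_frac * nbins * nbins))
    hot = np.argsort(counts)[-k:]
    return np.isin(idx[test], hot).mean()

for cell_km in (0.25, 0.5, 1.0, 2.0):
    print(f"grid cell {cell_km} km: hit rate in top 5% of cells = "
          f"{hit_rate(cell_km):.1%}")
```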

12.
Inspired by cross-market information flows among international stock markets, we incorporate external predictive information from other cryptocurrency markets to forecast the realized volatility (RV) of Bitcoin. To make the most of such external information, we employ six widely accepted approaches to construct predictive models based on multivariate information. Our results suggest that the scaled principal component analysis (SPCA) approach steadily improves the predictive ability of the prevailing heterogeneous autoregressive (HAR) benchmark model considering both the model confidence set (MCS) test and the Diebold–Mariano (DM) test based on three widely accepted loss functions. The forecasting performance is robust to various robustness checks and extensions. Notably, a mean–variance investor can obtain steady positive economic gains if the investment portfolio is constructed on the basis of the forecasts from the HAR-SPCA model. The results of this study show that external predictive information is statistically and economically important in forecasting Bitcoin RV.
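For orientation, a sketch of the HAR benchmark on simulated RV data (the SPCA extension with external cryptocurrency predictors is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1500
# Positive, persistent series mimicking realized volatility
rv = np.exp(np.convolve(rng.standard_normal(T + 50), np.ones(30) / 30, "same"))[:T]

def har_design(rv):
    """HAR regressors: lagged daily, weekly (5-day), monthly (22-day) mean RV."""
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    return X, y

X, y = har_design(rv)
split = len(y) - 250
beta = np.linalg.lstsq(X[:split], y[:split], rcond=None)[0]
mse = np.mean((y[split:] - X[split:] @ beta) ** 2)
print("HAR coefficients (const, d, w, m):", beta.round(3), " OOS MSE:", round(mse, 4))
```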

13.
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts; or for conducting what we refer to as ‘approximate Bayesian forecasting’. The four key issues explored are: (i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; (ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; (iii) the performance of approximate Bayesian forecasting in state space models; and (iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, and in a fraction of the time required to produce predictions via an exact method.
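A minimal rejection-ABC forecasting sketch for a toy Gaussian location model (not the paper's state space models): each accepted parameter draw yields one draw from the approximate predictive.

```python
import numpy as np

rng = np.random.default_rng(6)
y_obs = rng.normal(1.0, 1.0, size=100)          # observed data, true theta = 1
s_obs = y_obs.mean()                            # summary statistic

# Rejection ABC: keep prior draws whose simulated summary is close to s_obs.
# For this toy model the sample mean of 100 N(theta, 1) draws is exactly
# N(theta, 1/100), so the summary can be simulated directly.
n_sims, eps = 200_000, 0.02
theta = rng.normal(0.0, np.sqrt(10.0), size=n_sims)      # prior draws
s_sim = rng.normal(theta, 1.0 / np.sqrt(len(y_obs)))     # summary of simulated data
accepted = theta[np.abs(s_sim - s_obs) < eps]

# Approximate Bayesian forecasting: one predictive draw per accepted parameter
y_pred = rng.normal(accepted, 1.0)
print(f"accepted: {len(accepted)}, predictive mean: {y_pred.mean():.3f}, "
      f"predictive std: {y_pred.std():.3f}")
```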

14.
In a simple multivariate normal prediction setting, derivation of a predictive distribution can flow from formal Bayes arguments as well as pivoting arguments. We look at two special cases and show that the classical invariant predictive distribution is based on a pivot whose sampling distribution depends on the parameter – that is, the pivot is not an ancillary statistic. In contrast, a predictive distribution derived by a structural argument is based on a pivot with a parameter free distribution (an ancillary statistic). The classical procedure is formal Bayes for the Jeffreys prior. Our results show that this procedure does not have a structural or fiducial interpretation.
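For orientation, the familiar univariate special case (our illustration; the paper works in multivariate normal settings):

```latex
% Univariate special case: X_1, \dots, X_n \overset{iid}{\sim} N(\mu, \sigma^2)
T = \frac{X_{n+1} - \bar{X}}{s\sqrt{1 + 1/n}} \sim t_{n-1},
\qquad
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \quad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2 .
```

Because the $t_{n-1}$ law is free of $(\mu,\sigma^2)$, this univariate pivot is ancillary, and inverting it gives the predictive $X_{n+1} \sim \bar{X} + s\sqrt{1+1/n}\, t_{n-1}$; the point of the paper is that in its multivariate special cases the analogous classical pivot is no longer ancillary, even though the predictive remains formal Bayes under the Jeffreys prior.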

15.
In this paper we investigate the out-of-sample forecasting ability of feedforward and recurrent neural networks based on empirical foreign exchange rate data. A two-step procedure is proposed to construct suitable networks, in which networks are selected based on the predictive stochastic complexity (PSC) criterion, and the selected networks are estimated using both recursive Newton algorithms and the method of nonlinear least squares. Our results show that PSC is a sensible criterion for selecting networks and that, for certain exchange rate series, some selected network models have significant market timing ability and/or significantly lower out-of-sample mean squared prediction error relative to the random walk model.
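A toy comparison in this spirit, with a small feedforward network chosen by hand rather than by the PSC criterion, and simulated rather than empirical exchange rate returns:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
T = 1200
r = np.zeros(T)
for t in range(1, T):                       # weak nonlinear dependence in returns
    r[t] = 0.3 * np.tanh(2.0 * r[t - 1]) + rng.standard_normal()

lags = 3
X = np.column_stack([r[i:T - lags + i] for i in range(lags)])  # three most recent lags
y = r[lags:]
split = len(y) - 300

net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
net.fit(X[:split], y[:split])

mspe_net = np.mean((y[split:] - net.predict(X[split:])) ** 2)
mspe_rw = np.mean(y[split:] ** 2)           # random walk: predicted return = 0
print(f"MSPE  net: {mspe_net:.4f}   random walk: {mspe_rw:.4f}")
```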

16.
Drawing on recent empirical research, we study whether the international business cycle, as measured in terms of the output gaps of the G7 countries, has out-of-sample predictive power for gold-price fluctuations. To this end, we use a real-time forecasting approach that accounts for model uncertainty and model instability. We find some evidence that the international business cycle has predictive power for gold-price fluctuations. After accounting for transaction costs, a simple trading rule that builds on real-time out-of-sample forecasts does not lead to a superior performance relative to a buy-and-hold strategy. We also suggest a behavioral-finance approach to study the quality of out-of-sample forecasts from the perspective of forecasters with potentially asymmetric loss functions.
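The trading-rule bookkeeping with proportional transaction costs can be sketched as follows (returns and 'forecasts' are random stand-ins, so the rule has no real signal here; the point is only the accounting against buy-and-hold):

```python
import numpy as np

rng = np.random.default_rng(13)
T = 240
gold_ret = rng.standard_normal(T) * 0.04 + 0.003   # monthly gold returns (toy)
signal = np.sign(rng.standard_normal(T))           # stand-in real-time forecasts

cost = 0.001                                       # proportional transaction cost
pos = (signal > 0).astype(float)                   # long when forecast is positive
trades = np.abs(np.diff(pos, prepend=pos[0]))      # position changes trigger costs
rule_ret = pos * gold_ret - cost * trades
buyhold = gold_ret

print(f"trading rule: {rule_ret.mean() * 12:.2%} p.a., "
      f"buy-and-hold: {buyhold.mean() * 12:.2%} p.a.")
```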

17.
The paper addresses the issue of forecasting a large set of variables using multivariate models. In particular, we propose three alternative reduced rank forecasting models and compare their predictive performance for US time series with the most promising existing alternatives, namely, factor models, large-scale Bayesian VARs, and multivariate boosting. Specifically, we focus on classical reduced rank regression, a two-step procedure that applies, in turn, shrinkage and reduced rank restrictions, and the reduced rank Bayesian VAR of Geweke (1996). We find that using shrinkage and rank reduction in combination rather than separately improves substantially the accuracy of forecasts, both when the whole set of variables is to be forecast and for key variables such as industrial production growth, inflation, and the federal funds rate. The robustness of this finding is confirmed by a Monte Carlo experiment based on bootstrapped data. We also provide a consistency result for the reduced rank regression valid when the dimension of the system tends to infinity, which opens the way to using large-scale reduced rank models for empirical analysis.
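A sketch of classical reduced rank regression under identity weighting (the simplest of the three models considered; the shrinkage and Bayesian variants are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(8)
T, N, k = 400, 12, 2                      # T obs, N variables, rank-k restriction
# Simulate a VAR(1) whose coefficient matrix has rank 2
A = rng.standard_normal((N, k)) @ rng.standard_normal((k, N)) * 0.05
Y = np.zeros((T, N))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.standard_normal(N)

X, Z = Y[:-1], Y[1:]
B_ols = np.linalg.lstsq(X, Z, rcond=None)[0]      # unrestricted OLS, N x N

# Reduced rank solution (identity weighting): project fitted values on top-k PCs
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
Vk = Vt[:k].T
B_rr = B_ols @ Vk @ Vk.T

print("rank:", np.linalg.matrix_rank(B_rr),
      " OLS fit MSE:", round(np.mean((Z - X @ B_ols) ** 2), 3),
      " RR fit MSE:", round(np.mean((Z - X @ B_rr) ** 2), 3))
```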

18.
We propose a Bayesian estimation procedure for the generalized Bass model that is used in product diffusion models. Our method forecasts product sales early based on previous similar markets; that is, we obtain pre-launch forecasts by analogy. We compare our forecasting proposal to traditional estimation approaches, and alternative new product diffusion specifications. We perform several simulation exercises, and use our method to forecast the sales of room air conditioners, BlackBerry handheld devices, and compressed natural gas. The results show that our Bayesian proposal provides better predictive performances than competing alternatives when little or no historical data are available, which is when sales projections are the most useful.
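A sketch of the standard Bass diffusion curve underlying such pre-launch forecasts (the generalized Bass model adds marketing-mix terms, omitted here; the p, q, m values are illustrative "by-analogy" choices, not estimates):

```python
import numpy as np

def bass_cumulative(t, p, q):
    """Cumulative adoption fraction F(t) of the standard Bass model."""
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

def bass_sales(m, p, q, periods):
    """Per-period sales for market size m: m * (F(t) - F(t-1))."""
    t = np.arange(periods + 1)
    F = bass_cumulative(t, p, q)
    return m * np.diff(F)

# Parameters borrowed "by analogy" from a similar past market (illustrative)
sales = bass_sales(m=100_000, p=0.03, q=0.38, periods=15)
print("peak period:", sales.argmax() + 1)
print("sales path:", sales.round(0)[:8], "...")
```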

19.
This article compares the predictive ability of the factor models of Stock and Watson (2002a) and Forni, Hallin, Lippi and Reichlin (2005) using a ‘large’ panel of macroeconomic variables of the United States. We propose a nesting procedure of comparison that clarifies and partially overturns the results of similar exercises in the literature. Our main conclusion is that with the dataset at hand the two methods have a similar performance and produce highly collinear forecasts.
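A sketch of the Stock and Watson (2002a) diffusion-index step, with factors estimated by principal components on a simulated panel (the Forni, Hallin, Lippi and Reichlin generalized dynamic PC method is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(9)
T, N, r = 300, 60, 2
F = rng.standard_normal((T, r))                   # latent factors
L = rng.standard_normal((N, r))
X = F @ L.T + rng.standard_normal((T, N))         # large macro panel
y = F[:, 0] + 0.3 * rng.standard_normal(T)        # target loads on factor 1

# Estimate factors by principal components, then run the forecasting regression
Xs = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
Fhat = U[:, :r] * np.sqrt(T)                      # estimated factors (up to rotation)

h, split = 1, T - 60
Z = np.column_stack([np.ones(T - h), Fhat[:-h]])
b = np.linalg.lstsq(Z[:split], y[h:][:split], rcond=None)[0]
mse = np.mean((y[h:][split:] - Z[split:] @ b) ** 2)
print("out-of-sample MSE of the diffusion-index forecast:", round(mse, 3))
```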

20.
The paper proposes a novel inference procedure for long-horizon predictive regression with persistent regressors, allowing the autoregressive roots to lie in a wide vicinity of unity. The invalidity of conventional tests when regressors are persistent has led to a large literature dealing with inference in predictive regressions with local to unity regressors. Magdalinos and Phillips (2009b) recently developed a new framework of extended IV procedures (IVX) that enables robust chi-square testing for a wider class of persistent regressors. We extend this robust procedure to an even wider parameter space in the vicinity of unity and apply the methods to long-horizon predictive regression. Existing methods in this model, which rely on simulated critical values by inverting tests under local to unity conditions, cannot be easily extended beyond the scalar regressor case or to wider autoregressive parametrizations. In contrast, the methods developed here lead to standard chi-square tests, allow for multivariate regressors, and include predictive processes whose roots may lie in a wide vicinity of unity. As such they have many potential applications in predictive regression. In addition to asymptotics under the null hypothesis of no predictability, the paper investigates validity under the alternative, showing how balance in the regression may be achieved through the use of localizing coefficients and developing local asymptotic power properties under such alternatives. These results help to explain some of the empirical difficulties that have been encountered in establishing predictability of stock returns.
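A stripped-down sketch of the IVX instrument construction for a single persistent regressor (no demeaning or finite-sample corrections, and the long-horizon and multivariate extensions of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(10)
n, c, delta = 500, 1.0, 0.95
x = np.zeros(n)
for t in range(1, n):                      # highly persistent regressor
    x[t] = 0.99 * x[t - 1] + rng.standard_normal()
y = 0.05 * x[:-1] + rng.standard_normal(n - 1)    # predictive regression, h = 1

# IVX: self-generated instrument with a mildly integrated root rho_z
rho_z = 1.0 - c / n**delta
z = np.zeros(n)
for t in range(1, n):
    z[t] = rho_z * z[t - 1] + (x[t] - x[t - 1])   # filter the regressor innovations

xlag, zlag = x[:-1], z[:-1]
beta_ivx = (zlag @ y) / (zlag @ xlag)      # simple IV slope estimate
beta_ols = (xlag @ y) / (xlag @ xlag)
print(f"beta_IVX = {beta_ivx:.4f}   beta_OLS = {beta_ols:.4f}")
```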

