Similar Articles
20 similar articles found (search time: 578 ms)
1.
Dynamic stochastic general equilibrium (DSGE) models have recently become standard tools for policy analysis. Nevertheless, their forecasting properties have still barely been explored. In this article, we address this problem by examining the quality of forecasts of the key U.S. economic variables: the three-month Treasury bill yield, the GDP growth rate and GDP price index inflation, from a small-size DSGE model, trivariate vector autoregression (VAR) models and the Philadelphia Fed Survey of Professional Forecasters (SPF). The ex post forecast errors are evaluated on the basis of data from the period 1994–2006. We apply the Philadelphia Fed “Real-Time Data Set for Macroeconomists” to ensure that the data used in estimating the DSGE and VAR models were comparable to the information available to the SPF. Overall, the results are mixed. When comparing the root mean squared errors over several forecast horizons, the DSGE model appears to outperform the other methods in forecasting the GDP growth rate; however, this advantage is not statistically significant. Most of the SPF's forecasts of GDP price index inflation and the short-term interest rate are better than those from the DSGE and VAR models.
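A minimal sketch of the kind of ex post comparison the abstract describes: root mean squared errors (RMSE) of competing forecasts of the same series. All forecast and outcome values below are hypothetical, not data from the study.

```python
import math

def rmse(forecasts, outcomes):
    """Root mean squared forecast error."""
    errs = [(f - y) ** 2 for f, y in zip(forecasts, outcomes)]
    return math.sqrt(sum(errs) / len(errs))

actual = [3.1, 2.8, 3.4, 2.9, 3.3]   # hypothetical GDP growth outturns
dsge   = [3.0, 2.9, 3.2, 3.0, 3.1]   # hypothetical DSGE forecasts
var    = [3.4, 2.5, 3.0, 3.3, 2.9]   # hypothetical VAR forecasts

# The model with the lower RMSE is more accurate on average,
# though the difference may not be statistically significant.
print(rmse(dsge, actual), rmse(var, actual))
```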

2.
Since the bubble of the late 1990s the dividend yield appears non-stationary, indicating the breakdown of the equilibrium relationship between prices and dividends. Two lines of research have developed to explain this apparent breakdown: first, that the dividend yield is better characterised as a non-linear process, and second, that it is subject to mean level shifts. This paper jointly models both of these characteristics by allowing non-linear reversion to a changing mean level. Results support stationarity of this model for eight international dividend yield series. The model is then applied to forecasting monthly stock returns. Evidence supports our time-varying non-linear model over linear alternatives, particularly on the basis of an out-of-sample R-squared measure and a trading rule exercise. More detailed examination of the trading rule measure suggests that investors could obtain positive returns, as the model forecasts do not imply such excessive trading that costs would outweigh returns. Finally, the superior performance of the non-linear model largely arises from its ability to forecast negative returns, which linear models are unable to do.
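The out-of-sample R-squared measure mentioned above can be sketched as one minus the ratio of the model's out-of-sample squared errors to those of a benchmark (typically the historical mean). The numbers below are hypothetical illustrations, not the paper's data.

```python
def oos_r2(model_fc, bench_fc, actual):
    """Out-of-sample R^2: 1 - SSE(model) / SSE(benchmark)."""
    sse_m = sum((y - f) ** 2 for y, f in zip(actual, model_fc))
    sse_b = sum((y - f) ** 2 for y, f in zip(actual, bench_fc))
    return 1.0 - sse_m / sse_b

returns  = [0.020, -0.010, 0.030, -0.020]   # hypothetical realized returns
nonlin   = [0.010, -0.005, 0.020, -0.010]   # hypothetical nonlinear-model forecasts
hist_avg = [0.005, 0.005, 0.005, 0.005]     # historical-mean benchmark

r2 = oos_r2(nonlin, hist_avg, returns)      # positive: model beats the benchmark
```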

3.
When a large number of time series are to be forecast on a regular basis, as in large scale inventory management or production control, the appropriate choice of a forecast model is important as it has the potential for large cost savings through improved accuracy. A possible solution to this problem is to select one best forecast model for all the series in the dataset. Alternatively, one may develop a rule that will select the best model for each series. Fildes (1989) calls the former an aggregate selection rule and the latter an individual selection rule. In this paper we develop an individual selection rule using discriminant analysis and compare its performance to aggregate selection for the quarterly series of the M-Competition data. A number of forecast accuracy measures are used for the evaluation and confidence intervals for them are constructed using bootstrapping. The results indicate that the individual selection rule based on discriminant scores is more accurate, and sometimes significantly so, than any aggregate selection method.
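The aggregate-versus-individual distinction can be illustrated with a toy stand-in: here each series simply gets the model with the smallest per-series error (rather than a fitted discriminant function, which the paper actually uses), while the aggregate rule picks one model minimizing total error across all series. Model names and error values are hypothetical.

```python
# Hypothetical in-sample errors per (series, model)
errors = {
    "s1": {"ses": 8.0,  "holt": 11.0},
    "s2": {"ses": 12.0, "holt": 7.0},
    "s3": {"ses": 9.0,  "holt": 10.0},
}

# Individual rule: best model per series
individual = {s: min(m, key=m.get) for s, m in errors.items()}

# Aggregate rule: one model for every series, by total error
aggregate = min(["ses", "holt"],
                key=lambda mod: sum(m[mod] for m in errors.values()))
```
The individual rule can exploit per-series structure (s2 gets "holt") even when the aggregate rule would impose the other model on it.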

4.
We propose a new nonlinear time series model of expected returns based on the dynamics of the cross-sectional rank of realized returns. We model the joint dynamics of a sharp jump in the cross-sectional rank and the asset return by analyzing (1) the marginal probability distribution of a jump in the cross-sectional rank within the context of a duration model, and (2) the probability distribution of the asset return conditional on a jump, for which we specify different dynamics depending upon whether or not a jump has taken place. As a result, the expected returns are generated by a mixture of normal distributions weighted by the probability of jumping. The model is estimated for the weekly returns of the constituents of the S&P 500 index from 1990 to 2000, and its performance is assessed in an out-of-sample exercise from 2001 to 2005. Based on the one-step-ahead forecast of the mixture model we propose a trading rule, which is evaluated according to several forecast evaluation criteria and compared to 18 alternative trading rules. We find that the proposed trading strategy is the dominant rule by providing superior risk-adjusted mean trading returns and accurate value-at-risk forecasts.
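The mixture-of-normals structure implies simple closed forms for the conditional moments: the expected return is the jump-probability-weighted average of the two component means. The probabilities and component parameters below are hypothetical, for illustration only.

```python
def mixture_mean(p, mu_jump, mu_nojump):
    """Expected return under a two-component normal mixture."""
    return p * mu_jump + (1.0 - p) * mu_nojump

def mixture_variance(p, mu1, sd1, mu0, sd0):
    """Variance of the mixture via E[X^2] - (E[X])^2."""
    m = mixture_mean(p, mu1, mu0)
    return p * (sd1 ** 2 + mu1 ** 2) + (1.0 - p) * (sd0 ** 2 + mu0 ** 2) - m ** 2

# Hypothetical: 20% probability of a rank jump with mean return -5%,
# otherwise mean return +1%.
er = mixture_mean(0.2, -0.05, 0.01)
```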

5.
This Briefing Paper is the first of a series of three. The topic discussed is the process of making 'constant adjustments' in forecasts, which involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure, the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

6.
This paper describes a forty-two-equation nonlinear model of the U.S. petroleum industry estimated over the period 1946 to 1973. The model specifies refinery outputs and prices as being simultaneously determined by market forces, while the domestic output of crude oil is determined in a block-recursive segment of the model. The simultaneous behavioral equations are estimated with nonlinear two-stage least squares, adjusted to reflect the implications of autocorrelation for those equations where it appears to be a problem. A multi-period sample simulation, together with forecasts for 1974 and 1975, is used to evaluate the model's performance. Finally, the model is used to forecast to 1985 under two scenarios and compared with the Federal Energy Administration's forecast for the same period.

7.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
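The core meta-learning loop can be sketched as: map a series to features, predict each candidate model's forecast error from those features, and select the model with the smallest predicted error. The feature names, models, and linear "learned" coefficients below are hypothetical stand-ins for the paper's Bayesian surface regression.

```python
# Hypothetical learned error predictors:
# predicted error = intercept + sum(coef * feature)
meta = {
    "ets":   {"intercept": 1.0, "coefs": {"trend": -0.5, "entropy": 0.8}},
    "arima": {"intercept": 1.2, "coefs": {"trend": -0.9, "entropy": 0.4}},
}

def predicted_error(model, feats):
    m = meta[model]
    return m["intercept"] + sum(m["coefs"][k] * feats[k] for k in feats)

# Features computed from a new series (hypothetical values)
feats = {"trend": 0.9, "entropy": 0.3}
best = min(meta, key=lambda mod: predicted_error(mod, feats))
```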

8.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming] as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084] are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. XII, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often outperforming the competing models, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
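The Diebold–Mariano test cited above compares two forecast error series through a loss differential. A minimal sketch with squared-error loss, one-step horizon, and no HAC correction for autocorrelation (the published test generalizes all three); the error values are hypothetical.

```python
import math
import statistics as st

def dm_statistic(errors_a, errors_b):
    """Diebold-Mariano statistic, squared-error loss, h=1, no HAC correction.
    Positive values indicate model B is more accurate."""
    d = [a * a - b * b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    return st.mean(d) / math.sqrt(st.pvariance(d) / n)

e_var  = [0.5, -0.3, 0.4, -0.6, 0.2]   # hypothetical strawman VAR errors
e_dsge = [0.2, -0.1, 0.3, -0.2, 0.1]   # hypothetical DSGE errors
dm = dm_statistic(e_var, e_dsge)
```
The statistic is compared to standard normal critical values; a large positive value here would favor the second model.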

9.
Ever since the inception of betas as a measure of systematic risk, the forecast error in relation to this parameter has been a major concern to both academics and practitioners in finance. In order to reduce forecast error, this paper compares a series of competing models to forecast beta. Realized measures of asset return covariance and variance are computed and applied to forecast beta, following the advances in methodology of Andersen, Bollerslev, Diebold and Wu [Andersen, T. G., Bollerslev, T., Diebold, F. X., & Wu, J. (2005). A framework for exploring the macroeconomic determinants of systematic risk. American Economic Review, 95, 398–404; and Andersen, T. G., Bollerslev, T., Diebold, F. X., & Wu, J. (2006). Realized beta: Persistence and predictability. In T. Fomby & D. Terrell (Eds.), Advances in Econometrics, vol. 20B: Econometric Analysis of Economic and Financial Time Series, JAI Press, 1–40]. This approach is compared with the constant beta model (the industry standard) and a variant, the random walk model. It is shown that an autoregressive model with two lags produces the lowest or close to the lowest error for quarterly stock beta forecasts. In general, the AR(2) model has a mean absolute forecast error half that of the constant beta model. This reduction in forecast error is a dramatic improvement over the benchmark constant model.
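The AR(2) beta forecast takes a particularly simple one-step-ahead form once the coefficients are estimated. A sketch with hypothetical coefficients and a hypothetical quarterly beta history, contrasted with the constant-beta (sample mean) benchmark:

```python
def ar2_forecast(c, phi1, phi2, beta_t, beta_tm1):
    """One-step-ahead forecast from an AR(2): c + phi1*b_t + phi2*b_{t-1}."""
    return c + phi1 * beta_t + phi2 * beta_tm1

history = [1.10, 1.25, 0.95, 1.05, 1.20]   # hypothetical quarterly realized betas
c, phi1, phi2 = 0.3, 0.45, 0.25            # hypothetical AR(2) estimates

fc_ar2 = ar2_forecast(c, phi1, phi2, history[-1], history[-2])
fc_constant = sum(history) / len(history)  # constant-beta benchmark
```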

10.
In the context of predicting the term structure of interest rates, we explore the marginal predictive content of real-time macroeconomic diffusion indexes extracted from a “data rich” real-time data set, when used in dynamic Nelson–Siegel (NS) models of the variety discussed in Svensson (NBER technical report, 1994; NSS) and Diebold and Li (Journal of Econometrics, 2006, 130, 337–364; DNS). Our diffusion indexes are constructed using principal component analysis with both targeted and untargeted predictors, with targeting done using the lasso and elastic net. Our findings can be summarized as follows. First, the marginal predictive content of real-time diffusion indexes is significant for the preponderance of the individual models that we examine. The exception to this finding is the post “Great Recession” period. Second, forecast combinations that include only yield variables result in our most accurate predictions, for most sample periods and maturities. In this case, diffusion indexes do not have marginal predictive content for yields and do not seem to reflect unspanned risks. This points to the continuing usefulness of DNS and NSS models that are purely yield driven. Finally, we find that the use of fully revised macroeconomic data may have an important confounding effect upon results obtained when forecasting yields, as prior research has indicated that diffusion indexes are often useful for predicting yields when constructed using fully revised data, regardless of whether forecast combination is used, or not. Nevertheless, our findings also underscore the potential importance of using machine learning, data reduction, and shrinkage methods in contexts such as term structure modeling.
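A diffusion index in the sense used above is essentially the first principal component of a standardized predictor panel. A self-contained sketch via power iteration on the correlation matrix (the panel values are hypothetical; real applications would use a library PCA and many more predictors):

```python
import math

def first_pc_scores(X, iters=200):
    """Scores on the first principal component (a 'diffusion index')."""
    n, k = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(k)]
    sds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / n)
           for j in range(k)]
    Z = [[(row[j] - means[j]) / sds[j] for j in range(k)] for row in X]
    # correlation matrix of the standardized panel
    C = [[sum(Z[t][i] * Z[t][j] for t in range(n)) / n for j in range(k)]
         for i in range(k)]
    v = [1.0] * k
    for _ in range(iters):                 # power iteration -> leading eigenvector
        w = [sum(C[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return [sum(Z[t][j] * v[j] for j in range(k)) for t in range(n)]

panel = [[1.0, 2.1, 0.5], [1.2, 2.5, 0.7], [0.9, 1.9, 0.4], [1.4, 2.8, 0.9]]
index = first_pc_scores(panel)            # one factor score per period
```
The extracted scores would then enter the yield forecasting regressions as an additional predictor.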

11.
The business environment is rapidly changing and some enterprises have announced unexpected restructurings, leading to stagnating stock prices and declines in their business performance. To prepare for such calamity, it is becoming increasingly important for enterprise managers to use current financial data for short-term financial forecasting. Managers and investors are increasingly concerned with immediately and accurately forecasting firm financial crises using a limited amount of financial data. This work employs the Z-Score, which can be used as a multinomial financial crisis index, and applies Grey Markov forecasting for its valuation. Based on the research results, the accuracy of the Grey Markov forecasting model is as expected, with excellent Z-Score performance, and the model can rapidly forecast the likelihood of firm financial crises. The study results can provide a good reference for government and financial institutions in examining financial risk, and for investors in selecting investment targets.
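The grey-forecasting backbone of a Grey Markov model is the GM(1,1) model: accumulate the series, fit the two grey parameters by least squares, and difference the fitted exponential back to the original scale. A minimal sketch (the Markov residual-correction step is omitted, and the input series is hypothetical):

```python
import math

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1): fit on a positive series x0, forecast `steps` ahead."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]   # background values
    y = x0[1:]
    m = n - 1
    # least squares for the grey equation x0[k] + a*z[k] = b  (slope s = -a)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    s = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -s, (sy - s * sz) / m
    c = x0[0] - b / a
    x1_hat = lambda t: c * math.exp(-a * t) + b / a        # whitened solution
    return [x1_hat(t) - x1_hat(t - 1) for t in range(n, n + steps)]

z_scores = [100.0, 110.0, 121.0, 133.1]   # hypothetical index, ~10% growth
fc = gm11_forecast(z_scores)
```
For near-exponential data like this the one-step forecast lands close to the geometric continuation (about 146 here).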

12.
The Netherlands Bureau for Economic Policy Analysis (CPB) uses a large macroeconomic model to create forecasts of various important macroeconomic variables. The outcomes of this model are usually filtered by experts, and it is the expert forecasts that are made available to the general public. In this paper we re-create the model forecasts for the period 1997-2008 and compare the expert forecasts with the pure model forecasts. Our key findings from this first analysis of this unique database are that (i) experts adjust upwards more often; (ii) expert adjustments are not autocorrelated, but their sizes do depend on the value of the model forecast; (iii) the CPB model forecasts are biased for a range of variables, but (iv) at the same time, the associated expert forecasts are more often unbiased; and (v) expert forecasts are far more accurate than the model forecasts, particularly when the forecast horizon is short. In summary, the final CPB forecasts de-bias the model forecasts and lead to higher accuracies than the initial model forecasts.
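The bias and accuracy comparison in findings (iii)–(v) reduces to two summary statistics per forecaster: the mean error (bias) and the RMSE. A sketch with hypothetical numbers illustrating the pattern the paper reports (biased model, more accurate expert):

```python
import math

def bias(fc, actual):
    """Mean forecast error; zero for an unbiased forecaster."""
    return sum(f - y for f, y in zip(fc, actual)) / len(fc)

def rmse(fc, actual):
    return math.sqrt(sum((f - y) ** 2 for f, y in zip(fc, actual)) / len(fc))

actual = [2.0, 2.5, 1.8, 2.2]   # hypothetical outturns
model  = [2.6, 3.0, 2.4, 2.8]   # hypothetical model forecasts (biased upward)
expert = [2.1, 2.4, 1.9, 2.1]   # hypothetical expert-adjusted forecasts
```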

13.
During the last three decades, integer-valued autoregressive processes of order p [INAR(p)] based on different operators have been proposed as natural, intuitive and perhaps efficient models for integer-valued time-series data. However, this literature is surprisingly mute on the usefulness of the standard AR(p) process, which is otherwise meant for continuous-valued time-series data. In this paper, we explore the usefulness of the standard AR(p) model for obtaining coherent forecasts from integer-valued time series. First, some advantages of this standard Box–Jenkins AR(p) process are discussed. We then carry out some simulation experiments, which show the adequacy of the proposed method over the available alternatives. Our simulation results indicate that even when samples are generated from an INAR(p) process, the Box–Jenkins model performs as well as the INAR(p) processes, especially with respect to the mean forecast. Two real data sets are employed to study the expediency of the standard AR(p) model for integer-valued time-series data.
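An INAR(1) process replaces the AR(1) multiplication with binomial thinning: each count "survives" with probability alpha, plus new Poisson arrivals. A simulation sketch (parameters hypothetical), with the key point in the final line: the conditional mean forecast alpha*X_t + lam is identical to that of a Gaussian AR(1) with the same coefficients, which is the sense in which the standard AR model can match the INAR mean forecast.

```python
import math
import random

def simulate_inar1(alpha, lam, n, seed=7):
    """Simulate INAR(1): X_t = thinning(X_{t-1}) + Poisson(lam) arrivals."""
    rng = random.Random(seed)

    def pois(mean):  # Knuth's Poisson sampler
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    x = [pois(lam / (1.0 - alpha))]            # start near the stationary mean
    for _ in range(n - 1):
        survivors = sum(rng.random() < alpha for _ in range(x[-1]))
        x.append(survivors + pois(lam))
    return x

series = simulate_inar1(0.5, 2.0, 200)
forecast = 0.5 * series[-1] + 2.0   # same mean forecast as the Gaussian AR(1)
```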

14.
The Treasury's forecast, published with the Autumn Statement, has been widely heralded as showing a surprisingly cheerful picture for next year as far as both output and inflation are concerned. In fact it is close to the forecast which we produced in October. Here we compare the two forecasts and then consider how our forecast is affected when we adopt the Treasury assumptions on asset sales and the exchange rate. We find that the Treasury is more optimistic than we are on investment and that holding the exchange rate - which is needed to produce the official inflation forecast - requires rather higher interest rates than we assumed in October and this widens the gap between our forecast for GDP and the Treasury's forecast.
We also consider how the government should respond to lower North Sea oil revenues. Taking a permanent income approach, we suggest that the PSBR should be allowed to rise by £2bn on this basis. The same approach, however, suggests that an extra £7½bn of asset sales should be used to cut the PSBR, not taxes. On balance, therefore, this analysis indicates that next year's PSBR target should be lowered by £½bn from the £7½bn contained in the 1985 MTFS.

15.
This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts, utilising the well-established coincident indicators and yield curve models and allowing for dynamics and real-time data revisions. The forecast combinations use log-score and quadratic-score based weights, which change over time. The paper finds that forecast accuracy improves when the probability forecasts of the coincident indicators model and the yield curve model are combined, compared to each model's own forecasting performance.
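Log-score weighting can be sketched as: give each model a weight proportional to the exponential of its cumulative log predictive score, i.e. proportional to the product of the probabilities it assigned to realized outcomes. The past probabilities and current recession probabilities below are hypothetical.

```python
import math

def logscore_weights(past_probs):
    """past_probs[m] = probabilities model m assigned to realized outcomes."""
    scores = {m: sum(math.log(p) for p in ps) for m, ps in past_probs.items()}
    mx = max(scores.values())                      # subtract max for stability
    w = {m: math.exp(s - mx) for m, s in scores.items()}
    total = sum(w.values())
    return {m: v / total for m, v in w.items()}

past = {"coincident": [0.8, 0.7, 0.9],   # hypothetical track records
        "yield":      [0.6, 0.8, 0.7]}
w = logscore_weights(past)

# Combine current recession-probability forecasts (hypothetical)
combined = w["coincident"] * 0.25 + w["yield"] * 0.40
```
As a model's recent predictive record improves, its weight rises; the weights therefore change over time as new outcomes arrive.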

16.
17.
Short-term forecasting of crime
The major question investigated is whether it is possible to accurately forecast selected crimes 1 month ahead in small areas, such as police precincts. In a case study of Pittsburgh, PA, we contrast the forecast accuracy of univariate time series models with naïve methods commonly used by police. A major result, expected for the small-scale data of this problem, is that average crime count by precinct is the major determinant of forecast accuracy. A fixed-effects regression model of absolute percent forecast error shows that such counts need to be on the order of 30 or more to achieve accuracy of 20% absolute forecast error or less. A second major result is that practically any model-based forecasting approach is vastly more accurate than current police practices. Holt exponential smoothing with monthly seasonality estimated using city-wide data is the most accurate forecast model for precinct-level crime series.
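Holt exponential smoothing with seasonality (additive Holt–Winters) maintains a level, a trend, and a set of seasonal offsets, and updates each with its own smoothing constant. A minimal sketch on a synthetic seasonal series (period 4 here for brevity rather than 12; all data and smoothing constants are hypothetical):

```python
def holt_winters_additive(x, m, alpha=0.3, beta=0.1, gamma=0.2):
    """Additive Holt-Winters smoothing; returns the one-step-ahead forecast.
    Requires at least 2*m observations for initialization."""
    level = sum(x[:m]) / m
    trend = (sum(x[m:2 * m]) - sum(x[:m])) / (m * m)
    season = [x[i] - level for i in range(m)]
    for t, obs in enumerate(x):
        s = season[t % m]
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - level) + (1 - gamma) * s
    return level + trend + season[len(x) % m]

# hypothetical counts: upward trend plus a repeating seasonal pattern
pattern = [2.0, -1.0, 0.0, -1.0]
data = [10.0 + 0.1 * t + pattern[t % 4] for t in range(24)]
fc = holt_winters_additive(data, m=4)   # should track 10 + 0.1*24 + 2 = 14.4
```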

18.
This paper sets out the basic structure of the bivariate generalization of Engle's ARCH model. Conditions which guarantee that the conditional covariance matrix is well defined are summarized, as are estimation and hypothesis testing. The process is used to combine forecasts where the weights are allowed to vary over time. Forecast errors from competing models are treated as a bivariate ARCH process so that the conditional covariance matrix adapts over time. At each point in time these conditional estimates of the variances and covariances are used to construct the optimal weights for combining the forecasts. Consequently, when one model is fitting well, its variance will be reduced and its weight will be increased. Two models of US inflation are constructed: one is a stylized monetarist model while the other is a mark-up model. The forecast errors are modeled as a simple bivariate ARCH process. Diagnostic tests reveal that this has overly restricted the parameterization of the covariance matrix. An approximation to the theoretically anticipated factor structure model is then estimated. The results in both cases show the weights varying over the sample period in moderately interpretable fashion.
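Given the conditional error variances and covariance at a point in time, the minimum-variance combination weight on forecast 1 has the standard closed form (var2 - cov) / (var1 + var2 - 2*cov). A sketch; the variance and covariance numbers are hypothetical, standing in for the bivariate ARCH conditional estimates.

```python
def combo_weight(var1, var2, cov12):
    """Weight on forecast 1 that minimizes combined forecast error variance."""
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

# Hypothetical conditional moments at time t from a bivariate ARCH model:
# monetarist model error variance 0.5, mark-up model 1.0, covariance 0.2.
w1 = combo_weight(0.5, 1.0, 0.2)
combined = lambda f1, f2: w1 * f1 + (1.0 - w1) * f2
```
Recomputing w1 each period from the updated conditional covariance matrix gives exactly the behavior described above: when a model is fitting well, its conditional variance falls and its weight rises.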

19.
The New York Times model is a large-scale model which forecasts sales and earnings for the New York Times newspaper. Structurally, it is composed of two major blocks: a demand module, and a production, cost and revenue module. The demand module, the heart of the model, is a set of simultaneous nonlinear econometric equations which forecast physical volume: approximately 35 categories of advertising lines and 10 categories of circulation. The second block is recursive and contains roughly 300 equations, some of which are stochastic behavioral equations. This block converts the volume forecasts into paging, newsprint consumption, newsprint distribution and manning requirements. These physical flows are then monetized, using price and wage forecasts, to produce estimates of revenue, fixed and variable costs, and operating profit. This paper summarizes the development of the model, with emphasis on the advertising and circulation model. It should be noted that the structure of the model is constantly evolving. Consequently, emphasis is placed on the conceptual underpinnings of the model, not on a detailed presentation of the current structure.

20.
Chou, Tsung-Yu; Liang, Gin-Shuh; Han, Tzeu-Chen. Quality and Quantity (2011), 45(6), 1539–1550.
This paper presents a Fuzzy Regression Forecasting Model (FRFM) to forecast demand in the present international air cargo market. Accuracy is one of the most important concerns when dealing with forecasts. However, one problem is often overlooked: an accurate forecast model for one party does not necessarily suit another. This is mainly due to individuals' different perceptions of their socioeconomic environment, as well as of their own competitiveness when evaluating risk, so people make divergent judgments about various scenarios. Even when faced with the same challenge, distinctive responses are generated by individual evaluations of strengths and weaknesses. How to resolve these uncertainties and indefiniteness while accommodating individuality is the main purpose of constructing this FRFM. When forecasting air cargo volumes, uncertainty factors often cause deviation in estimations derived from traditional linear regression analysis. Aiming to enhance forecast accuracy by minimizing deviations, fuzzy regression analysis and linear regression analysis are integrated to reduce the residuals resulting from these uncertain factors. The authors apply α-cuts and an Index of Optimism λ to achieve a more flexible and persuasive future volume forecast.
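The α-cut and Index of Optimism machinery can be sketched for a triangular fuzzy forecast: the α-cut narrows the fuzzy number to an interval at membership level α, and λ then blends the interval's bounds into a single crisp value (λ = 1 fully optimistic). The cargo-volume numbers are hypothetical.

```python
def alpha_cut(lo, mode, hi, alpha):
    """Interval of a triangular fuzzy number (lo, mode, hi) at level alpha."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def defuzzify(interval, lam):
    """Index of Optimism: lam weights the upper bound, 1-lam the lower."""
    low, high = interval
    return lam * high + (1.0 - lam) * low

# Hypothetical fuzzy cargo-volume forecast (lo, mode, hi) = (120, 150, 190)
cut = alpha_cut(120.0, 150.0, 190.0, alpha=0.5)
crisp = defuzzify(cut, lam=0.6)
```
Raising α tightens the interval toward the mode; raising λ moves the crisp forecast toward the optimistic upper bound.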
