Similar Documents
20 similar documents found (search time: 328 ms)
1.
Humanitarian aid organizations are best known for their short-term emergency relief. While getting aid items to those in need can be challenging, long-term projects provide an opportunity for demand planning supported by forecasting methods. Based on standardized consumption data from the Operational Center Amsterdam of Médecins Sans Frontières (MSF-OCA) covering nineteen longer-term aid projects and over 2000 medical items consumed in 2013, we describe and analyze the forecasting and order planning process. We find that several internal and external factors influence forecast and order planning performance, albeit indirectly, through demand volatility and safety markup. Moreover, we identify opportunities for further improvement for MSF-OCA, and for humanitarian logistics organizations in general.

2.
This paper uses real-time data to mimic real-time GDP forecasting activity. Through automatic searches for the best indicators for predicting GDP one and four steps ahead, we compare the out-of-sample forecasting performance of adaptive models using different data vintages, and produce three main findings. First, despite data revisions, the forecasting performance of models with indicators is better, but this advantage tends to vanish over longer forecasting horizons. Second, the practice of using fully updated datasets at the time the forecast is made (i.e., taking the best available measures of today's economic situation) does not appear to bring any effective improvement in forecasting ability: the first GDP release is predicted equally well by models using real-time data as by models using the latest available data. Third, although the first release is a rational forecast of GDP data after all statistical revisions have taken place, the forecast based on the latest available GDP data (i.e. the “temporarily best” measures) may be improved by combining preliminary official releases with one-step-ahead forecasts.

3.
《Socio》1987,21(4):239-243
This study is an empirical comparison of three rules for aggregating forecasts. The three combined forecasts evaluated are: a simple average forecast, a median forecast and a focus forecast. These combined forecasts are compared over four economic variables (housing starts, the index of industrial production, the unemployment rate and gross national product) using a set of previously published forecasts. The results indicate that an average forecast will not perform as well as previous studies indicate if all or most of the individual forecasts tend to over- or under-predict simultaneously. The median forecast also seems to be suspect in this case. There is little evidence to suggest that the median forecast is a viable alternative to the mean forecast. Focus forecasting, however, is found to perform well for all four variables. The evidence indicates that focus forecasting is a reasonable alternative to simple averaging.
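The mean and median aggregation rules compared in this study can be sketched in a few lines; the series and the individual forecasts below are invented purely for illustration, and are chosen so that most forecasters over-predict simultaneously, the case the abstract highlights:

```python
import numpy as np

# Hypothetical data: three forecasters predicting four periods of a series.
# Two of the three systematically over-predict, so both combined forecasts
# inherit an upward bias -- the situation the study warns about.
actual = np.array([102.0, 105.0, 101.0, 108.0])
forecasts = np.array([
    [104.0, 107.0, 103.0, 110.0],   # forecaster A (over-predicts)
    [103.0, 106.5, 102.5, 109.5],   # forecaster B (over-predicts)
    [101.5, 104.0, 100.0, 107.0],   # forecaster C (roughly unbiased)
])

mean_combo = forecasts.mean(axis=0)          # simple average rule
median_combo = np.median(forecasts, axis=0)  # median rule

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

print(rmse(mean_combo, actual), rmse(median_combo, actual))
```

With shared bias neither rule recovers the truth; the median here simply tracks the middle (biased) forecaster, which is why the study finds it no safer than the mean in this situation.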

4.
We use a macro‐finance model, incorporating macroeconomic and financial factors, to study the term premium in the US bond market. Estimating the model using Bayesian techniques, we find that a single factor explains most of the variation in bond risk premiums. Furthermore, the model‐implied risk premiums account for up to 40% of the variability of one‐ and two‐year excess returns. Using the model to decompose yield spreads into an expectations and a term premium component, we find that, although this decomposition does not seem important to forecast economic activity, it is crucial to forecast inflation for most forecasting horizons. Copyright © 2012 John Wiley & Sons, Ltd.

5.
Factor models have been applied extensively for forecasting when high‐dimensional datasets are available. In this case, the number of variables can be very large. For instance, usual dynamic factor models in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques that take into account the variable to be forecast when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different approaches to dynamic sparse partial least squares as a means of improving forecast efficiency by simultaneously taking into account the variable to be forecast while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well‐known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models show good performance in improving efficiency compared to widely used factor methods in macroeconomic forecasting. Copyright © 2014 John Wiley & Sons, Ltd.
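The core idea, extracting a factor that is supervised by the target and then sparsifying the predictor weights, can be illustrated with a toy one-component sketch. This is not the paper's estimator: the data, the 0.1 threshold, and the single-factor setup are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 50 candidate predictors, only the first 5 drive the target.
T, N = 200, 50
X = rng.normal(size=(T, N))
beta = np.zeros(N)
beta[:5] = 1.0
y = X @ beta + rng.normal(size=T)

# PLS weight vector: covariance of each predictor with the target.
# (PCA would ignore y here and weight predictors by their variance.)
w = X.T @ y
w /= np.linalg.norm(w)

# Sparsify: zero out weights below a hypothetical threshold, forming the
# "informative subset of predictors" instead of using all 50.
w_sparse = np.where(np.abs(w) > 0.1, w, 0.0)

factor = X @ w_sparse                      # single supervised factor
b = float(factor @ y / (factor @ factor))  # regress target on the factor
fitted = b * factor
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(int(np.count_nonzero(w_sparse)), round(float(r2), 2))
```

Because the weights are built from covariances with y, the relevant predictors dominate and most irrelevant ones are thresholded away, so one sparse factor already explains most of the target's variance.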

6.
Decision making and planning under low levels of predictability
This special section aims to demonstrate the limited predictability and high level of uncertainty in practically all important areas of our lives, and the implications of this. It summarizes the huge body of solid empirical evidence accumulated over the past several decades that documents the disastrous consequences of inaccurate forecasts in areas ranging from the economy and business to floods and medicine. The big problem, however, is that the great majority of people, decision and policy makers alike, still believe not only that accurate forecasting is possible, but also that uncertainty can be reliably assessed. Reality, however, shows otherwise, as this special section demonstrates. This paper discusses forecasting accuracy and uncertainty, and distinguishes three types of predictions: those relying on patterns for forecasting, those utilizing relationships as their basis, and those for which human judgment is the major determinant of the forecast. In addition, the major problems and challenges facing forecasters and the reasons why uncertainty cannot be assessed reliably are discussed using four large data sets. There is also a summary of the eleven papers included in this special section, as well as some concluding remarks emphasizing the need to be rational and realistic about our expectations and to avoid the common delusions related to forecasting.

7.
A statistically optimal inference about agents' ex ante price expectations within the US broiler market is derived using futures prices of related commodities along with a quasi‐rational forecasting regression equation. The modelling approach, which builds on a Hamilton‐type framework, includes endogenous production and allows expected cash price to be decomposed into anticipated and unanticipated components. We therefore infer the relative importance of various informational sources in expectation formation. Results show that, in addition to the quasi‐rational forecast, the true supply shock, futures prices, and ex post commodity price forecast errors have, at times, been influential in broiler producers' price expectations. Copyright © 2003 John Wiley & Sons, Ltd.

8.
How did DSGE model forecasts perform before, during and after the financial crisis, and what type of off-model information can improve the forecast accuracy? We tackle these questions by assessing the real-time forecast performance of a large DSGE model relative to statistical and judgmental benchmarks over the period from 2000 to 2013. The forecasting performances of all methods deteriorate substantially following the financial crisis. That is particularly evident for the DSGE model’s GDP forecasts, but augmenting the model with a measure of survey expectations made its GDP forecasts more accurate, which supports the idea that timely off-model information is particularly useful in times of financial distress.

9.
The heterogeneous expectations hypothesis: Some evidence from the lab
This paper surveys learning-to-forecast experiments (LtFEs) with human subjects to test theories of expectations and learning. Subjects must repeatedly forecast a market price, whose realization is an aggregation of individual expectations. Emphasis is given to how individual forecasting rules interact at the micro-level and to the structure they co-create at the aggregate, macro-level. In particular, we focus on the question of whether the evidence from laboratory experiments is consistent with heterogeneous expectations.

10.
How well can people use autocorrelation information when making judgmental forecasts? In Experiment 1, participants forecast from 12 series in which the autocorrelation varied within subjects. The participants showed a sensitivity to the degree of autocorrelation. However, their forecasts indicated that they implicitly assumed positive autocorrelation in uncorrelated time series. Experiments 2 and 2a used a one-shot single-trial between-subjects design and obtained similar results. Experiment 3 investigated the way in which the between-trials context influenced forecasting. The results showed that forecasts are affected by the characteristics of previous series, as well as those of the series from which forecasts are to be made. Our findings can be accommodated within an adaptive approach. Forecasters base their initial expectations of series characteristics on their past experience and modify these expectations in a pseudo-Bayesian manner on the basis of their analysis of those characteristics in the series to be forecast.

11.
We present new Bayesian methodology for consumer sales forecasting. Focusing on the multi-step-ahead forecasting of daily sales of many supermarket items, we adapt dynamic count mixture models for forecasting individual customer transactions, and introduce novel dynamic binary cascade models for predicting counts of items per transaction. These transaction–sales models can incorporate time-varying trends, seasonality, price, promotion, random effects and other outlet-specific predictors for individual items. Sequential Bayesian analysis involves fast, parallel filtering on sets of decoupled items, and is adaptable across items that may exhibit widely-varying characteristics. A multi-scale approach enables information to be shared across items with related patterns over time in order to improve prediction, while maintaining the scalability to many items. A motivating case study in many-item, multi-period, multi-step-ahead supermarket sales forecasting provides examples that demonstrate an improved forecast accuracy on multiple metrics, and illustrates the benefits of full probabilistic models for forecast accuracy evaluation and comparison.

12.
To forecast at several, say h, periods into the future, a modeller faces a choice between iterating one-step-ahead forecasts (the IMS technique), or directly modeling the relationship between observations separated by an h-period interval and using it for forecasting (DMS forecasting). It is known that structural breaks, unit-root non-stationarity and residual autocorrelation may improve DMS accuracy in finite samples, all of which occur when modelling the South African GDP over the period 1965–2000. This paper analyzes the forecasting properties of 779 multivariate and univariate models that combine different techniques of robust forecasting. We find strong evidence supporting the use of DMS and intercept correction, and attribute their superior forecasting performance to their robustness in the presence of breaks.
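The IMS/DMS distinction is easy to see in a minimal univariate sketch. The AR(1) process, its coefficient and the seed below are invented for illustration; in a correctly specified model without breaks the two approaches nearly coincide, and the paper's point is that breaks and misspecification can tip the balance toward DMS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series y_t = 0.8 * y_{t-1} + e_t (illustrative only).
n, h, phi = 500, 4, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

def ols_slope(x, z):
    # Slope of z on x without intercept (the simulated series is mean zero).
    return float(x @ z / (x @ x))

# IMS: estimate the one-step model, then iterate it h times.
phi1 = ols_slope(y[:-1], y[1:])
ims_forecast = (phi1 ** h) * y[-1]

# DMS: regress y_t directly on y_{t-h} and forecast h steps in one go.
phi_h = ols_slope(y[:-h], y[h:])
dms_forecast = phi_h * y[-1]

print(ims_forecast, dms_forecast)
```

Here the iterated coefficient `phi1 ** h` and the direct coefficient `phi_h` both estimate the same h-step autoregressive link; DMS pays in efficiency (its residuals are autocorrelated) but gains robustness when the one-step model is wrong.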

13.
When some of the regressors in a panel data model are correlated with the random individual effects, the random effect (RE) estimator becomes inconsistent while the fixed effect (FE) estimator remains consistent. Depending on the degree of such correlation, we can combine the RE estimator and FE estimator to form a combined estimator which can be better than either the FE or the RE estimator. In this paper, we are interested in whether the combined estimator may be used to form a combined forecast that improves upon the RE forecast (the forecast made using the RE estimator) and the FE forecast (the forecast made using the FE estimator) in out-of-sample forecasting. Our simulation experiment shows that the combined forecast does dominate the FE forecast for all degrees of endogeneity in terms of mean squared forecast errors (MSFE), demonstrating that the theoretical results of the risk dominance for the in-sample estimation carry over to the out-of-sample forecasting. It also shows that the combined forecast can reduce MSFE relative to the RE forecast for moderate to large degrees of endogeneity and for large degrees of heterogeneity in individual effects.
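The bias-variance trade-off behind such a combination can be sketched without the panel machinery: a convex combination of a consistent-but-noisy forecast and a biased-but-stable one, with the weight picked to minimize MSFE. The data, noise scales, and grid search below are assumptions made for the illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the two forecasts: "fe_fc" is unbiased but noisy (like FE),
# "re_fc" is biased under endogeneity but less noisy (like RE).
actual = rng.normal(size=100)
fe_fc = actual + rng.normal(scale=1.0, size=100)
re_fc = 0.7 * actual + rng.normal(scale=0.5, size=100)

def msfe(fc):
    return float(np.mean((fc - actual) ** 2))

# Grid-search the combination weight on a validation window.
lambdas = np.linspace(0.0, 1.0, 101)
errors = [msfe(lam * fe_fc + (1 - lam) * re_fc) for lam in lambdas]
best = float(lambdas[int(np.argmin(errors))])
combined = best * fe_fc + (1 - best) * re_fc

print(best, msfe(combined))
```

Since the grid includes the pure FE (weight 1) and pure RE (weight 0) forecasts, the combined forecast can never do worse than either on the window used to choose the weight, mirroring the in-sample risk-dominance result the abstract refers to.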

14.
The performance of six classes of models in forecasting different types of economic series is evaluated in an extensive pseudo out‐of‐sample exercise. One of these forecasting models, regularized data‐rich model averaging (RDRMA), is new in the literature. The findings can be summarized in four points. First, RDRMA is difficult to beat in general and generates the best forecasts for real variables. This performance is attributed to the combination of regularization and model averaging, and it confirms that a smart handling of large data sets can lead to substantial improvements over univariate approaches. Second, the ARMA(1,1) model emerges as the best to forecast inflation changes in the short run, while RDRMA dominates at longer horizons. Third, the returns on the S&P 500 index are predictable by RDRMA at short horizons. Finally, the forecast accuracy and the optimal structure of the forecasting equations are quite unstable over time.

15.
The dynamic behavior of the term structure of interest rates is difficult to replicate with models, and even models with a proven track record of empirical performance have underperformed since the early 2000s. On the other hand, survey expectations can accurately predict yields, but they are typically not available for all maturities and/or forecast horizons. We show how survey expectations can be exploited to improve the accuracy of yield curve forecasts given by a base model. We do so by employing a flexible exponential tilting method that anchors the model forecasts to the survey expectations, and we develop a test to guide the choice of the anchoring points. The method implicitly incorporates into yield curve forecasts any information that survey participants have access to, such as information about the current state of the economy or forward‐looking information contained in monetary policy announcements, without the need to explicitly model it. We document that anchoring delivers large and significant gains in forecast accuracy relative to the class of models that are widely adopted by financial and policy institutions for forecasting the term structure of interest rates.

16.
This Briefing Paper is the last of a series of three about forecasting. In this one we examine our forecasting record; it complements the February paper, in which we analysed the properties of our forecasting model in terms of the error bands attached to the central forecast.
There are many ways of measuring forecasting errors, and in the first part of this Briefing Paper we describe briefly how we have tackled the problem. (A more detailed analysis can be found in the Appendix.) In Part II we report and comment upon the errors in our forecasts of annual growth rates and show how our forecasting performance has improved over the years. In Part III we focus on quarterly forecasts up to 8 quarters ahead, and compare our forecasting errors with measurement errors in the official statistics; with the estimation errors built into our forecast equations; and with the stochastic model errors we reported last February. A brief summary of the main conclusions is given below.

17.
DIRECT MULTI-STEP ESTIMATION AND FORECASTING
Abstract.  This paper surveys the literature on multi-step forecasting when the model or the estimation method focuses directly on the link between the forecast origin and the horizon of interest. Among diverse contributions, we show how the current consensual concepts have emerged. We present an exhaustive overview of the existing results, including a conclusive review of the circumstances favourable to direct multi-step forecasting, namely different forms of non-stationarity and appropriate model design. We also provide a unifying framework which allows us to analyse the sources of forecast errors and hence of accuracy improvements from direct over iterated multi-step forecasting.

18.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.

19.
We find that it does, but choosing the right specification is not trivial. Based on an extensive forecast evaluation we document notable forecast instabilities for most simple Phillips curves. Euro area inflation was particularly hard to forecast in the run-up to the Economic and Monetary Union and after the sovereign debt crisis, when the trends, and, for the latter period, also the amount of slack, were harder to pin down. Yet, some specifications outperform a univariate benchmark and point to the following lessons: (i) the key type of time variation to consider is an inflation trend; (ii) a simple filter-based output gap works well, but after the Great Recession it is outperformed by endogenously estimated slack or by “institutional” estimates; (iii) external variables do not bring forecast gains; (iv) newer-generation Phillips curve models with several time-varying features are a promising avenue for forecasting; and (v) averaging over a wide range of modelling choices helps.

20.
This paper utilizes the conventional statistical tests associated with the rational expectations hypothesis to compare the relative accuracy of individual versus group forecasting within the organization. In order to maintain comparability between the forecasting regimens, the study employs identical information sets for the two prediction methods. Using the rational expectations tests as criteria, the statistical results show group forecasts to be inferior to individually produced predictions. These findings imply that group-produced forecasting accuracy may be hampered by the psychological interaction associated with consensus behavior. Conversely, we find that forecasting accuracy improves when predictions are elicited from individuals in an isolated, laboratory-like setting.

