Similar Articles
20 similar articles found
1.
Demand forecasting is critical to sales and operations planning (S&OP), but the effects of sales promotions can be difficult to forecast. Typically, a baseline statistical forecast is judgmentally adjusted on receipt of information from different departments. However, much of this information either has no predictive value or its value is unknown. Research into base rate discounting has suggested that such information may distract forecasters from the average uplift and reduce accuracy. This has been investigated in situations in which forecasters were able to adjust the statistical forecasts for promotions via a forecasting support system (FSS). In two ecologically valid experiments, forecasters were provided with the mean level of promotion uplift, a baseline statistical forecast, and quantitative and qualitative information. However, the forecasters were distracted from the base rate and misinterpreted the information available to them. These findings have important implications for the design of organizational S&OP processes, and for the implementation of FSSs.

2.
This paper explores the issues associated with adapting forecasting techniques used by manufacturers to produce accurate forecasts for retail sales. A case study set in a retail situation is presented, because retailers often view their sales forecasting problems as being very different from a manufacturer's. Sales volumes are dramatically affected by competitor promotional actions, discounts, store promotions, and weather. Consumption holidays such as Christmas, Easter, and Mother's Day also have a large impact on sales, as does back-to-school shopping. The findings in this paper indicate that forecasting retail sales can be accomplished with a high degree of accuracy.

3.
Forecasting the outcome of outbreaks as early and as accurately as possible is crucial for decision-making and policy implementation. A significant challenge faced by forecasters is that not all outbreaks and epidemics turn into pandemics, making the prediction of their severity difficult. At the same time, the decisions made to enforce lockdowns and other mitigating interventions versus their socioeconomic consequences are not only hard to make, but also highly uncertain. The majority of modeling approaches to outbreaks, epidemics, and pandemics take an epidemiological approach that considers biological and disease processes. In this paper, we accept the limitations of forecasting in predicting the long-term trajectory of an outbreak and instead propose a statistical, time series approach to modeling and predicting the short-term behavior of COVID-19. Our model assumes a multiplicative trend, aiming to capture the continuation of the two variables we predict (global confirmed cases and deaths) as well as their uncertainty. We present the timeline of producing and evaluating 10-day-ahead forecasts over a period of four months. Our simple model offers competitive forecast accuracy and estimates of uncertainty that are useful and practically relevant.
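As a rough illustration of the kind of multiplicative-trend model this abstract describes, the sketch below fits an exponential smoothing model with a multiplicative trend to a short, hypothetical cumulative-case series and produces 10-day-ahead point forecasts. The series values and the use of statsmodels are assumptions for illustration, not the paper's exact specification.

```python
# A minimal sketch of a multiplicative-trend exponential smoothing forecast,
# in the spirit of the short-term COVID-19 model described above.
# The series below is illustrative; the paper's exact model may differ.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative cumulative-case series (hypothetical values).
cases = np.array([100, 140, 210, 320, 470, 700, 1010, 1480, 2100, 3000], dtype=float)

# A multiplicative trend captures growth that compounds rather than adds.
model = ExponentialSmoothing(cases, trend="mul").fit()
forecast = model.forecast(10)   # 10-day-ahead point forecasts
print(forecast)
```

A multiplicative trend lets the forecast grow by a roughly constant percentage per period, which is why it suits the early, compounding phase of an outbreak better than an additive trend.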

4.
Exchange rate forecasting is hard, and the seminal result of Meese and Rogoff [Meese, R., Rogoff, K., 1983. Empirical exchange rate models of the seventies: Do they fit out of sample? Journal of International Economics 14, 3–24] that the exchange rate is well approximated by a driftless random walk, at least for prediction purposes, still stands despite much effort at constructing other forecasting models. However, in several other macro and financial forecasting applications, researchers in recent years have considered methods for forecasting that effectively combine the information in a large number of time series. In this paper, I apply one such method for pooling forecasts from several different models, Bayesian Model Averaging, to the problem of pseudo out-of-sample exchange rate prediction. For most currency–horizon pairs, the Bayesian Model Averaging forecasts, using a sufficiently high degree of shrinkage, give slightly smaller out-of-sample mean square prediction errors than the random walk benchmark. The forecasts generated by this model averaging methodology are, however, very close to, but not identical to, those from the random walk.
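The pooling step of Bayesian Model Averaging can be shown with a minimal sketch: each model's point forecast is weighted by an approximate posterior model probability. The forecasts, log marginal likelihoods, and equal prior odds below are hypothetical, and the paper's shrinkage priors are not reproduced here.

```python
# A hedged sketch of the forecast-pooling arithmetic behind Bayesian Model
# Averaging: each candidate model's forecast is weighted by its (approximate)
# posterior model probability. Equal prior odds and the input numbers are
# illustrative simplifications, not the paper's exact shrinkage setup.
import numpy as np

def bma_combine(forecasts, log_marginal_likelihoods):
    """Combine model forecasts using posterior model probabilities."""
    lml = np.asarray(log_marginal_likelihoods, dtype=float)
    # Subtract the max before exponentiating, for numerical stability.
    weights = np.exp(lml - lml.max())
    weights /= weights.sum()
    return weights, np.dot(weights, np.asarray(forecasts, dtype=float))

# Three hypothetical exchange-rate models: point forecasts and log marginal likelihoods.
forecasts = [1.082, 1.095, 1.078]      # h-step-ahead point forecasts
log_ml = [-120.4, -121.0, -123.7]      # e.g., from a BIC-style approximation
w, pooled = bma_combine(forecasts, log_ml)
print(w, pooled)   # heavier shrinkage would pull the pooled forecast toward the random walk
```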

5.
The objective of this paper is to compare the informational efficiency of five macroeconometric and one statistical quarterly forecasting models. The results suggest that the forecasters inefficiently utilize readily available economic information. The qualitative effect of a particular information variable is the same across all forecasters exhibiting inefficiency. Further, the magnitudes of the coefficients on significant information variables are quite close. In particular, real GNP forecasts appear not to fully incorporate information about lagged M1 growth and lagged changes in housing starts. Deflator forecasts can be improved by more fully specifying the degree of slackness in the economy, as captured by capacity utilization and changes in the labor market.

6.
This paper reviews the research literature on forecasting retail demand. We begin by introducing the forecasting problems that retailers face, from the strategic to the operational, as sales are aggregated over products to stores and to the company overall. Aggregated forecasting supports strategic decisions on location. Product-level forecasts usually relate to operational decisions at the store level. The factors that influence demand, and in particular promotional information, add considerable complexity, so that forecasters potentially face the dimensionality problem of too many variables and too little data. The paper goes on to evaluate evidence on comparative forecasting accuracy. Although causal models outperform simple benchmarks, adequate evidence on machine learning methods has not yet accumulated. Methods for forecasting new products are examined separately, with little evidence being found on the effectiveness of the various approaches. The paper concludes by describing company forecasting practices, offering conclusions as to both research gaps and barriers to improved practice.

7.
Deterministic forecasts (as opposed to ensemble or probabilistic forecasts) issued by numerical weather prediction (NWP) models require post-processing. Such a corrective procedure can be viewed as a form of calibration. It is well known that, based on different objective functions, e.g., minimizing the mean square error or the mean absolute error, the calibrated forecasts have different impacts on verification. In this regard, this paper investigates how a calibration directive can affect various aspects of forecast quality outlined in the Murphy–Winkler distribution-oriented verification framework. It is argued that the correlation coefficient is the best measure of the potential performance of NWP forecast verification when linear calibration is involved, because (1) it is not affected by the directive of linear calibration, (2) it can be used to compute the skill score of the linearly calibrated forecasts, and (3) it can avoid the potential deficiency of using squared error to rank forecasts. Since no single error metric can fully represent all aspects of forecast quality, forecasters need to understand the trade-offs between different calibration strategies. To echo the increasing need to bridge atmospheric sciences, renewable energy engineering, and power system engineering, so as to move toward the grand goal of carbon neutrality, this paper first provides a brief introduction to solar forecasting, and then revolves its discussion around a solar forecasting case study, so that readers of this journal can gain further understanding of the subject and thus potentially contribute to it.
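The claim that correlation is unaffected by linear calibration can be checked numerically. The sketch below uses synthetic observations and deliberately biased "raw" forecasts, applies an MSE-minimizing linear calibration, and confirms that the Pearson correlation is unchanged while the squared error improves; all numbers are illustrative and unrelated to the paper's case study.

```python
# A small numerical check of the point made above: Pearson correlation between
# forecasts and observations is invariant to any linear (affine) calibration
# with positive slope, while squared-error-based scores are not.
import numpy as np

rng = np.random.default_rng(42)
obs = rng.normal(500, 120, size=1000)                 # synthetic observations
raw = 0.8 * obs + 30 + rng.normal(0, 60, size=1000)   # biased, noisy "NWP" forecasts

# Linear (MSE-minimizing) calibration: regress observations on raw forecasts.
slope, intercept = np.polyfit(raw, obs, 1)
cal = slope * raw + intercept

print(np.corrcoef(raw, obs)[0, 1], np.corrcoef(cal, obs)[0, 1])  # effectively identical
print(np.mean((raw - obs) ** 2), np.mean((cal - obs) ** 2))      # MSE improves after calibration
```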

8.
This paper analyses the performance of GDP growth and inflation forecasts for 25 transition countries between 1994 and 2007, as provided by 13 international institutions, including multilateral, private and academic forecasters. The empirical results show that there is a positive correlation between the number of forecasters covering a given country and the forecast accuracy. Simple combined forecasts are shown to be unbiased and more accurate than most of the individual forecasters, although also inefficient. However, only a few institutions provide efficient and unbiased forecasts, with just one out of 13 forecasters providing both unbiased and efficient forecasts of both GDP growth and inflation in the observed period. The directional analysis shows a correct forecast of the change in the forecast indicator in over two thirds of cases. However, the eventual outcome is within the range of available forecasts in less than half of the cases, with more than 40% of outcomes for GDP growth above the highest forecast. Encouragingly, forecasts are shown to be improving over time and becoming more accurate with the increase in the number of forecasting institutions – forecast accuracy measured by mean absolute error improves by 0.3 percentage points for growth and by 0.2 percentage points for inflation for each additional institution providing forecasts.

9.
This paper presents empirical evidence on how judgmental adjustments affect the accuracy of macroeconomic density forecasts. Judgment is defined as the difference between professional forecasters’ densities and the forecast densities from statistical models. Using entropic tilting, we evaluate whether judgments about the mean, variance and skew improve the accuracy of density forecasts for UK output growth and inflation. We find that not all judgmental adjustments help. Judgments about point forecasts tend to improve density forecast accuracy at short horizons and at times of heightened macroeconomic uncertainty. Judgments about the variance hinder at short horizons, but can improve tail risk forecasts at longer horizons. Judgments about skew in general take value away, with gains seen only for longer horizon output growth forecasts when statistical models took longer to learn that downside risks had reduced with the end of the Great Recession. Overall, density forecasts from statistical models prove hard to beat.

10.
This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets and Wouters (2012), estimated on euro area data. It investigates the extent to which the inclusion of forecasts of inflation, GDP growth and unemployment by professional forecasters improves the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated in the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model, an AR(1) model, a sample mean and a random walk.

11.
We use numerous high-frequency transaction data sets to evaluate the forecasting performances of several dynamic ordinal-response time series models with generalized autoregressive conditional heteroscedasticity (GARCH). The specifications account for three components: leverage effects, in-mean effects and moving average error terms. We estimate the model parameters by developing Markov chain Monte Carlo algorithms. Our empirical analysis shows that the proposed ordinal-response GARCH models achieve better point and density forecasts than standard benchmarks.

12.
Consumer credit and consumption forecasts
Recent advances in the theory of consumer behavior indicate that consumption may exhibit non-linear dynamics characterized by occasional surges. Building upon these advances, and explicitly taking into account the forward-looking nature of consumption, this paper argues that rising consumer debt can signal such surges, as well as the consumption underprediction that will occur if they are not taken sufficiently into account in forecasting. This insight is tested with, and strongly confirmed by, the Organisation for Economic Co-operation and Development's (OECD) forecasts for the USA. The results should be of interest not only to professional forecasters and policy-makers, but also to theoretical economists and econometricians who study non-linear dynamic models.

13.
As the internet’s footprint continues to expand, cybersecurity is becoming a major concern for both governments and the private sector. One such cybersecurity issue relates to data integrity attacks. This paper focuses on the power industry, where the forecasting processes rely heavily on the quality of the data. Data integrity attacks are expected to harm the performance of forecasting systems, which will have a major impact on both the financial bottom line of power companies and the resilience of power grids. This paper reveals the effect of data integrity attacks on the accuracy of four representative load forecasting models (multiple linear regression, support vector regression, artificial neural networks, and fuzzy interaction regression). We begin by simulating data integrity attacks through the random injection of multipliers that follow a normal or uniform distribution into the load series. Then, the four aforementioned load forecasting models are used to generate one-year-ahead ex post point forecasts in order to compare their forecast errors. The results show that the support vector regression model is the most robust, followed closely by the multiple linear regression model, while the fuzzy interaction regression model is the least robust of the four. Nevertheless, all four models fail to provide satisfying forecasts when the scale of the data integrity attacks becomes large. This presents a serious challenge to both load forecasters and the broader forecasting community: the generation of accurate forecasts under data integrity attacks. We construct our case study using the publicly available data from the Global Energy Forecasting Competition 2012. We conclude with an overview of potential research topics for future studies.
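A minimal sketch of this kind of attack simulation is given below, assuming a synthetic hourly load series and a plain linear regression as a stand-in for the four models studied in the paper: multiplicative attack factors drawn from a normal or uniform distribution are injected into a fraction of the observations, the model is refit, and its errors are compared against the clean series. The data, attack scale, and model choice are all illustrative, not the GEFCom2012 setup.

```python
# A hedged sketch of a data integrity attack simulation: multiplicative
# "attack" factors are injected into a random subset of a synthetic load
# series, a simple regression is fit to the corrupted data, and forecast
# errors are measured against the clean load.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = 1000 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, hours.size)

def attack(series, frac=0.1, dist="normal", scale=0.3):
    """Multiply a random fraction of observations by attack factors."""
    out = series.copy()
    idx = rng.choice(series.size, int(frac * series.size), replace=False)
    if dist == "normal":
        factors = rng.normal(1.0, scale, idx.size)
    else:
        factors = rng.uniform(1.0 - scale, 1.0 + scale, idx.size)
    out[idx] *= factors
    return out

# Simple daily-cycle regressors; the paper's models are far richer.
X = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
for label, y in [("clean", load), ("attacked", attack(load))]:
    pred = LinearRegression().fit(X, y).predict(X)
    print(label, "MAPE %:", 100 * np.mean(np.abs((load - pred) / load)))
```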

14.
This Briefing Paper is the first of a series of three. The topic discussed here is the process of making 'constant adjustments' in forecasts. This process involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made, and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular, it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure, the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

15.
Mixed frequency Bayesian vector autoregressions (MF-BVARs) allow forecasters to incorporate large numbers of time series that are observed at different intervals into forecasts of economic activity. This paper benchmarks the performances of MF-BVARs for forecasting U.S. real gross domestic product growth against surveys of professional forecasters and documents the influences of certain specification choices. We find that a medium–large MF-BVAR provides an attractive alternative to surveys at the medium-term forecast horizons that are of interest to central bankers and private sector analysts. Furthermore, we demonstrate that certain specification choices influence its performance strongly, such as model size, prior selection mechanisms, and modeling in levels versus growth rates.

16.
Macroeconomic forecasters typically report a single estimate per time period for each macroeconomic variable. But they rarely provide consumers of forecasts with information about the degree of confidence in the forecast or the likely range of dispersion of the actual outcome relative to the conditional forecast. Partly this is due to the non-linearity of the model and thus the cost of producing standard error bands. But there is also the problem that a mechanical stochastic simulation may misrepresent the degree of forecast uncertainty, because forecasters use non-model information to produce a forecast which is more precise, at least for the immediate future, than the model alone. In this paper we propose a method for generating standard error bands which gives a truer reflection of the forecaster's uncertainty.

17.
This paper considers two problems of interpreting forecasting competition error statistics. The first problem is concerned with the importance of linking the error measure (loss function) used in evaluating a forecasting model with the loss function used in estimating the model. It is argued that, because of the variety of uses of any single forecast, such matching is impractical. Secondly, there is little evidence that matching would have any impact on comparative forecast performance, however measured. As a consequence, the results of forecasting competitions are not affected by this problem. The second problem concerns interpreting performance when it is evaluated through the mean square error (MSE). The authors show that, in the Makridakis Competition, good MSE performance is solely due to performance on a small number of the 1001 series, and arises because of the effects of scale. They conclude that comparisons of forecasting accuracy based on MSE are subject to major problems of interpretation.
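The scale effect the authors describe is easy to reproduce with made-up numbers: when a few series are orders of magnitude larger than the rest, overall MSE is driven almost entirely by those series, whereas a relative error measure is not. The sketch below uses 1000 synthetic series, not the competition data.

```python
# A toy illustration of the scale effect: a handful of large-scale series
# dominate an MSE-based comparison even though every series has a similar
# percentage error. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
scales = np.array([10.0] * 990 + [10000.0] * 10)    # 1000 series, 10 of them huge
errors = scales * rng.normal(0, 0.05, size=1000)    # roughly 5% error on every series

print(np.mean(errors ** 2), np.mean(errors[:990] ** 2))  # overall MSE vs. MSE excluding the 10 large series
print(np.mean(np.abs(errors / scales)))                  # a relative measure treats all series comparably
```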

18.
Forecasters typically evaluate the performances of new forecasting methods by exploiting data from past forecasting competitions. Over the years, numerous studies have based their conclusions on such datasets, with mis-performing methods being unlikely to receive any further attention. However, it has been reported that these datasets might not be indicative, as they display many limitations. Since forecasting research is driven somewhat by data from forecasting competitions, it becomes vital to determine whether they are indeed representative of reality or whether forecasters tend to over-fit their methods on a random sample of series. This paper treats the M4 dataset as representative of the real world and compares its properties with those of past datasets commonly used in the literature as benchmarks, in order to provide evidence on that question. The results show that many popular benchmarks of the past may indeed deviate from reality, and ways forward are discussed in response.

19.
We examined automatic feature identification and graphical support in rule-based expert systems for forecasting. The rule-based expert forecasting system (RBEFS) includes predefined rules to automatically identify features of a time series and selects the extrapolation method to be used. The system can also integrate managerial judgment using a graphical interface that allows a user to view alternate extrapolation methods two at a time. The use of the RBEFS led to a significant improvement in accuracy compared to equal-weight combinations of forecasts. Further improvements were achieved with the user interface. For 6-year-ahead ex ante forecasts, the rule-based expert forecasting system has a median absolute percentage error (MdAPE) 15% less than that of equally weighted combined forecasts and a 33% improvement over the random walk. The user-adjusted forecasts had an MdAPE 20% less than that of the expert system. The results of the system are also compared to those of an earlier rule-based expert system which required human judgments about some features of the time series data. The comparison of the two rule-based expert systems showed no significant differences between them.
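For reference, the accuracy measure quoted above can be computed as follows; the sketch uses hypothetical actuals, expert-system forecasts, and random walk forecasts rather than the study's data.

```python
# A minimal sketch of the median absolute percentage error (MdAPE) used in the
# comparison above, with a naive random-walk forecast as benchmark.
# All values are illustrative.
import numpy as np

def mdape(actual, forecast):
    """Median absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.median(np.abs((actual - forecast) / actual))

actual = [120, 135, 150, 160, 155, 170]
system = [118, 130, 148, 166, 150, 162]   # hypothetical expert-system forecasts
rw     = [110, 120, 135, 150, 160, 155]   # random walk: last observed value carried forward
print(mdape(actual, system), mdape(actual, rw))
```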

20.
The years following the Great Recession were challenging for forecasters. Unlike other deep downturns, this recession was not followed by a swift recovery, but instead generated a sizable and persistent output gap that was not accompanied by deflation as a traditional Phillips curve relationship would have predicted. Moreover, the zero lower bound and unconventional monetary policy generated an unprecedented policy environment. We document the actual real-time forecasting performance of the New York Fed dynamic stochastic general equilibrium (DSGE) model during this period and explain the results using the pseudo real-time forecasting performance results from a battery of DSGE models. We find the New York Fed DSGE model’s forecasting accuracy to be comparable to that of private forecasters, and notably better for output growth than the median forecasts from the FOMC’s Summary of Economic Projections. The model’s financial frictions were key in obtaining these results, as they implied a slow recovery following the financial crisis.

