Similar Documents
20 similar documents found.
1.
This study compares forecasts of US international message telephone service (IMTS) traffic using several relative mean squared error statistics. The forecasts are obtained from time-series extrapolation, univariate autoregressive integrated moving average (ARIMA), error correction and vector autoregressive models. The models are estimated on annual US IMTS outgoing traffic data for six US–Asia bilateral markets for the period 1964 to 1993. No single approach provides the best forecasts. However, forecast evaluation statistics indicate that econometric models generally outperform the alternatives.
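The kind of comparison described above can be sketched with standard tools. The snippet below is a minimal illustration only, assuming synthetic annual data, arbitrary model orders and a naive no-change benchmark; it is not the study's specification.

```python
# Sketch only: compares ARIMA and VAR forecasts by relative MSE against a naive
# benchmark. Series names, model orders and the holdout split are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
years = pd.period_range("1964", "1993", freq="Y")
df = pd.DataFrame({
    "outgoing_traffic": np.cumsum(rng.normal(5, 2, len(years))) + 100,
    "bilateral_price": 50 - np.linspace(0, 20, len(years)) + rng.normal(0, 1, len(years)),
}, index=years)

h = 4                                   # holdout horizon
train, test = df.iloc[:-h], df.iloc[-h:]

# Univariate ARIMA on the traffic series
arima_fc = ARIMA(train["outgoing_traffic"], order=(1, 1, 1)).fit().forecast(steps=h)

# Bivariate VAR in first differences (a stand-in for the multivariate models)
diffed = train.diff().dropna()
var_res = VAR(diffed).fit(maxlags=2)
dvar_fc = var_res.forecast(diffed.values[-var_res.k_ar:], steps=h)
var_fc = train["outgoing_traffic"].iloc[-1] + np.cumsum(dvar_fc[:, 0])

naive_fc = np.repeat(train["outgoing_traffic"].iloc[-1], h)   # no-change benchmark

def mse(fc):
    return np.mean((test["outgoing_traffic"].values - np.asarray(fc)) ** 2)

print("relative MSE, ARIMA:", mse(arima_fc) / mse(naive_fc))
print("relative MSE, VAR:  ", mse(var_fc) / mse(naive_fc))
```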

2.
This study evaluates the forecasting accuracy of six alternative econometric models in the context of the demand for international tourism in Denmark. These econometric models are special cases of a general autoregressive distributed lag specification. In addition, the forecasting accuracy of two univariate time series models is evaluated for benchmark comparison purposes. The forecasting competition is based on annual data on inbound tourism to Denmark. Individual models are estimated for each of the six major origin countries over the period 1969–93 and forecasting performance is assessed using data for the period 1994–97. Rankings of these forecasting models over different time horizons are established based on mean absolute percentage error and root mean square percentage error.
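A minimal sketch of the two accuracy measures used for the rankings; the actual and forecast values are placeholders, not the Danish tourism data.

```python
# Sketch: the two accuracy measures used to rank the models. Inputs are placeholders.
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def rmspe(actual, forecast):
    """Root mean square percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.sqrt(np.mean(((actual - forecast) / actual) ** 2))

actual   = [2.10, 2.25, 2.31, 2.40]          # e.g. tourist arrivals (millions), 1994-97
forecast = [2.05, 2.30, 2.20, 2.55]
print(f"MAPE  = {mape(actual, forecast):.2f}%")
print(f"RMSPE = {rmspe(actual, forecast):.2f}%")
```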

3.
Empirical evidence has shown that seasonal patterns of tourism demand and the effects of various influencing factors on this demand tend to change over time. To forecast future tourism demand accurately requires appropriate modelling of these changes. Based on the structural time series model (STSM) and the time-varying parameter (TVP) regression approach, this study develops the causal STSM further by introducing TVP estimation of the explanatory variable coefficients, and therefore combines the merits of the STSM and TVP models. This new model, the TVP-STSM, is employed for modelling and forecasting quarterly tourist arrivals to Hong Kong from four key source markets: China, South Korea, the UK and the USA. The empirical results show that the TVP-STSM outperforms all seven competitors, including the basic and causal STSMs and the TVP model for one- to four-quarter-ahead ex post forecasts and one-quarter-ahead ex ante forecasts.
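The TVP-STSM itself needs a custom state-space specification, but the basic causal STSM it extends can be sketched with statsmodels' UnobservedComponents. The snippet below is an illustration with simulated quarterly data and fixed explanatory-variable coefficients; letting those coefficients evolve as random walks is the paper's extension and is not shown here.

```python
# Sketch: a basic causal structural time series model (local linear trend +
# quarterly seasonal + explanatory variables with *fixed* coefficients).
# The paper's TVP-STSM additionally lets the regression coefficients follow
# random walks, which requires a custom state-space model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(1)
n = 80                                            # 20 years of quarterly data
idx = pd.period_range("2000Q1", periods=n, freq="Q")
income = pd.Series(np.cumsum(rng.normal(0.5, 1, n)), index=idx, name="income")
price = pd.Series(rng.normal(0, 1, n), index=idx, name="rel_price")
seasonal = np.tile([0.3, -0.1, 0.5, -0.7], n // 4)
arrivals = 10 + 0.8 * income - 0.4 * price + seasonal + rng.normal(0, 0.3, n)

exog = pd.concat([income, price], axis=1)
model = UnobservedComponents(arrivals, level="local linear trend",
                             seasonal=4, exog=exog)
res = model.fit(disp=False)
print(res.summary())
print(res.forecast(steps=4, exog=exog.iloc[-4:]))  # naive exog path, for illustration
```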

4.
Recurrent Neural Networks (RNNs) have become competitive forecasting methods, as most notably shown in the winning method of the recent M4 competition. However, established statistical models such as exponential smoothing (ETS) and the autoregressive integrated moving average (ARIMA) gain their popularity not only from their high accuracy, but also because they are suitable for non-expert users in that they are robust, efficient, and automatic. In these areas, RNNs still have a long way to go. We present an extensive empirical study and an open-source software framework of existing RNN architectures for forecasting, and we develop guidelines and best practices for their use. For example, we conclude that RNNs are capable of modelling seasonality directly if the series in the dataset possess homogeneous seasonal patterns; otherwise, we recommend a deseasonalisation step. Comparisons against ETS and ARIMA demonstrate that (semi-) automatic RNN models are not silver bullets, but they are nevertheless competitive alternatives in many situations.
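A minimal sketch of the recommended pipeline for series without homogeneous seasonal patterns: deseasonalise first, then fit a small recurrent network on lagged windows. The data, window length, architecture and training settings are illustrative assumptions, not the paper's framework.

```python
# Sketch: deseasonalise, then fit a small LSTM on lagged windows.
import numpy as np
import tensorflow as tf
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(2)
t = np.arange(240)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 240)

# 1. Remove the seasonal component (additive decomposition, monthly period).
decomp = seasonal_decompose(series, model="additive", period=12)
deseason = series - decomp.seasonal

# 2. Build (window -> next value) training pairs.
window = 24
X = np.array([deseason[i:i + window] for i in range(len(deseason) - window)])
y = deseason[window:]
X = X[..., None]                                  # shape (samples, window, 1)

# 3. A single-layer LSTM forecaster.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)

# 4. One-step forecast: predict the deseasonalised value, then add the seasonal term back.
next_deseason = model.predict(deseason[-window:].reshape(1, window, 1), verbose=0)[0, 0]
next_forecast = next_deseason + decomp.seasonal[len(series) % 12]  # reuse seasonal pattern
print("one-step-ahead forecast:", next_forecast)
```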

5.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
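The working Laplace likelihood idea can be sketched with a plain random-walk Metropolis sampler for an AR(2) with Laplace errors. This is a simplified illustration: the order is fixed and the priors are flat, whereas the paper also averages over the autoregressive order.

```python
# Sketch: random-walk Metropolis for an AR(2) with Laplace (double-exponential)
# errors -- the working likelihood behind BayesMAR. The order is fixed here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 300, 2
y = np.zeros(n)
for t in range(p, n):                              # simulate a median-AR(2)
    y[t] = 0.3 + 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.laplace(0, 0.5)

Y = y[p:]
X = np.column_stack([np.ones(n - p)] + [y[p - k:-k] for k in range(1, p + 1)])

def log_post(theta):
    """Laplace log-likelihood with flat priors on the coefficients and log-scale."""
    beta, log_b = theta[:-1], theta[-1]
    resid = Y - X @ beta
    return stats.laplace.logpdf(resid, scale=np.exp(log_b)).sum()

theta = np.zeros(p + 2)                            # [intercept, phi1, phi2, log-scale]
current, step, draws = log_post(theta), 0.05, []
for it in range(20000):
    prop = theta + rng.normal(0, step, size=theta.size)
    cand = log_post(prop)
    if np.log(rng.uniform()) < cand - current:     # Metropolis accept/reject
        theta, current = prop, cand
    if it >= 5000:                                  # discard burn-in
        draws.append(theta.copy())

draws = np.array(draws)
print("posterior means [c, phi1, phi2, log b]:", draws.mean(axis=0).round(3))
```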

6.
This article studies a simple, coherent approach for identifying and estimating error-correcting vector autoregressive moving average (EC-VARMA) models. Canonical correlation analysis is implemented for both determining the cointegrating rank, using a strongly consistent method, and identifying the short-run VARMA dynamics, using the scalar component methodology. Finite-sample performance is evaluated via Monte Carlo simulations and the approach is applied to modelling and forecasting US interest rates. The results reveal that EC-VARMA models generate significantly more accurate out-of-sample forecasts than vector error correction models (VECMs), especially for short horizons.
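The EC-VARMA identification via scalar components is not available in standard libraries, but the VECM benchmark it is compared against can be sketched with statsmodels: choose the cointegrating rank with a Johansen-type trace test, then fit and forecast. The data and lag settings below are illustrative, not the interest-rate application.

```python
# Sketch: the VECM benchmark -- select the cointegrating rank with a trace test,
# then fit and forecast. Simulated data stand in for the US interest-rate series.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(4)
n = 300
common = np.cumsum(rng.normal(0, 1, n))            # shared stochastic trend
rates = pd.DataFrame({
    "short_rate": common + rng.normal(0, 0.3, n),
    "long_rate": 1.0 + common + rng.normal(0, 0.3, n),
})

rank = select_coint_rank(rates, det_order=0, k_ar_diff=2, signif=0.05)
print("selected cointegrating rank:", rank.rank)

vecm = VECM(rates, k_ar_diff=2, coint_rank=rank.rank, deterministic="co").fit()
print(vecm.predict(steps=4))                        # 1- to 4-step-ahead forecasts
```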

7.
Higher-dimensional multivariate time series models suffer from the problem of over-parametrisation, which impairs their forecasting performance. Starting from such unrestricted vector autoregressive models, the paper discusses two ways to cope with this difficulty. The first approach reduces the number of free parameters by applying a subset modelling strategy. The second approach takes a Bayesian point of view by formulating ‘priors’ which are then combined with sample information, but leaving the original specification unaltered. Using Austrian quarterly macroeconomic time series, a comparative study is undertaken by running alternative forecasting exercises. Both methods improve out-of-sample forecasting performance substantially at the cost of some bias in ex-post simulations. Comparing the ex-ante predictions of the two approaches, the former does better at short horizons whereas the latter gains as the forecast horizon lengthens.

8.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts because of their parsimonious specification.

9.
We examine the econometric implications of the decision problem faced by a profit/utility-maximizing lender operating in a simple “double-binary” environment, where the two actions available are “approve” or “reject”, and the two states of the world are “pay back” or “default”. In practice, such decisions are often made by applying a fixed cutoff to the maximum likelihood estimate of a parametric model of the default probability. Following Elliott and Lieli (2007), we argue that this practice might contradict the lender’s economic objective and, using German loan data, we illustrate the use of “context-specific” cutoffs and an estimation method derived directly from the lender’s problem. We also provide a brief discussion of how to incorporate legal constraints, such as the prohibition of disparate treatment of potential borrowers, into the lender’s problem.
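A hedged sketch of the context-specific cutoff idea: estimate default probabilities, then choose the cutoff that maximizes the lender's expected profit for assumed payoffs, rather than applying a conventional 0.5 rule. The data and payoffs are synthetic stand-ins, and the estimator is ordinary logistic regression rather than the paper's utility-based estimation method.

```python
# Sketch of a context-specific cutoff: estimate default probabilities, then pick
# the cutoff that maximises expected profit given the payoff of a repaid loan
# and the loss from a default. Data and payoffs are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 4))                               # borrower characteristics
default = (rng.uniform(size=n) < 1 / (1 + np.exp(-(X[:, 0] - 0.8)))).astype(int)

X_tr, X_te, d_tr, d_te = train_test_split(X, default, test_size=0.5, random_state=0)
p_default = LogisticRegression().fit(X_tr, d_tr).predict_proba(X_te)[:, 1]

gain_repaid, loss_default = 1.0, 5.0                      # lender's payoffs (assumed)

def expected_profit(cutoff):
    approve = p_default < cutoff
    return np.sum(approve * ((1 - d_te) * gain_repaid - d_te * loss_default))

cutoffs = np.linspace(0.01, 0.99, 99)
best = cutoffs[np.argmax([expected_profit(c) for c in cutoffs])]
print(f"profit-maximising cutoff: {best:.2f} (vs. the conventional 0.50)")
```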

10.
Accurate solar forecasts are necessary to improve the integration of solar renewables into the energy grid. In recent years, numerous methods have been developed for predicting the solar irradiance or the output of solar renewables. By definition, a forecast is uncertain. Thus, the models developed predict the mean and the associated uncertainty. Comparisons are therefore necessary and useful for assessing the skill and accuracy of these new methods in the field of solar energy. The aim of this paper is to present a comparison of various models that provide probabilistic forecasts of the solar irradiance within a very strict framework. We focus on intraday forecasts, with lead times ranging from 1 to 6 hours. The models selected use only endogenous inputs for generating the forecasts; in other words, the only inputs of the models are the past solar irradiance data. In this context, the most common way of generating the forecasts is to combine point forecasting methods with probabilistic approaches in order to provide prediction intervals for the solar irradiance forecasts. For this task, we selected from the literature three point forecasting models (recursive autoregressive and moving average (ARMA), coupled autoregressive and dynamical system (CARDS), and neural network (NN)) and seven methods for assessing the distribution of their errors (linear model in quantile regression (LMQR), weighted quantile regression (WQR), quantile regression neural network (QRNN), recursive generalized autoregressive conditional heteroskedasticity (GARCHrls), sieve bootstrap (SB), quantile regression forest (QRF), and gradient boosting decision trees (GBDT)), leading to a comparison of 20 combinations of models. None of the model combinations clearly outperforms the others; nevertheless, some trends emerge from the comparison. First, the use of the clear sky index ensures the accuracy of the forecasts: this derived parameter permits time series to be deseasonalized even with missing data, and is also a good explanatory variable for the distribution of the forecasting errors. Second, regardless of the point forecasting method used, linear models in quantile regression, weighted quantile regression and gradient boosting decision trees are able to forecast the prediction intervals accurately.
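One element of such a comparison can be sketched directly: gradient boosting with the quantile loss applied to lagged values of a (simulated) clear-sky index, yielding an 80% prediction interval whose empirical coverage can be checked. Lags and hyperparameters are illustrative.

```python
# Sketch: an 80% prediction interval for the one-hour-ahead clear-sky index,
# from gradient boosting with the quantile loss on lagged values.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n = 2000
kt = np.clip(0.7 + 0.2 * np.sin(np.arange(n) / 24) + rng.normal(0, 0.1, n), 0, 1.1)

lags = 6
X = np.column_stack([kt[i:n - lags + i] for i in range(lags)])   # kt_{t-6..t-1}
y = kt[lags:]                                                    # kt_t

# One booster per quantile (10%, 50%, 90%), trained on all but the last 200 points.
fits = {q: GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
           .fit(X[:-200], y[:-200])
        for q in (0.1, 0.5, 0.9)}

lower = fits[0.1].predict(X[-200:])
upper = fits[0.9].predict(X[-200:])
coverage = np.mean((y[-200:] >= lower) & (y[-200:] <= upper))
print(f"empirical coverage of the nominal 80% interval: {coverage:.2f}")
```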

11.
We evaluate the performances of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate and multivariate time series approaches, and econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate the forecast interval coverage as well as the point forecast accuracy; (iv) we observe the effect of temporal aggregation on the forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities which are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
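A minimal sketch of the mean absolute scaled error mentioned above, which scales forecast errors by the in-sample error of a (seasonal) naive method; the numbers are placeholders.

```python
# Sketch: mean absolute scaled error. Errors are scaled by the in-sample error
# of a (seasonal) naive forecast, so values below 1 beat that naive method.
import numpy as np

def mase(train, actual, forecast, m=1):
    """m = 1 for non-seasonal data; m = 12 for monthly, 4 for quarterly."""
    train, actual, forecast = map(np.asarray, (train, actual, forecast))
    scale = np.mean(np.abs(train[m:] - train[:-m]))       # in-sample naive MAE
    return np.mean(np.abs(actual - forecast)) / scale

train = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
actual, forecast = [115, 126], [118, 118]                  # Naive forecast = last value
print("MASE:", round(mase(train, actual, forecast), 3))
```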

12.
In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end, we employ three automatic modelling devices. One of them is White’s QuickNet, but we also consider Autometrics, which is well known to time series econometricians, and the Marginal Bridge Estimator, which is better known to statisticians. The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four Scandinavian ones, and focus on forecasting during the 2007–2009 economic crisis. The forecast accuracy is measured using the root mean square forecast error. Hypothesis testing is also used to compare the performances of the different techniques.
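A single hidden-layer feed-forward autoregressive neural network can be sketched on lagged inputs with scikit-learn, as below. The model-selection step that is the paper's focus (QuickNet, Autometrics, Marginal Bridge) is not performed; the lags, hidden units and holdout period are assumptions.

```python
# Sketch: a single hidden-layer feed-forward autoregressive neural network on
# lagged inputs, evaluated by the root mean square forecast error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n, lags = 400, 12
y = np.zeros(n)
for t in range(2, n):                               # a nonlinear AR process
    y[t] = 0.6 * y[t - 1] - 0.3 * np.tanh(y[t - 2]) + rng.normal(0, 0.5)

X = np.column_stack([y[i:n - lags + i] for i in range(lags)])
target = y[lags:]

split = len(target) - 24                            # hold out the "crisis" period
nn = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                  max_iter=5000, random_state=0).fit(X[:split], target[:split])

rmsfe = np.sqrt(np.mean((target[split:] - nn.predict(X[split:])) ** 2))
print("root mean square forecast error:", round(rmsfe, 3))
```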

13.
This paper exploits cross-sectional variation at the level of U.S. counties to generate real-time forecasts for the 2020 U.S. presidential election. The forecasting models are trained on data covering the period 2000–2016, using high-dimensional variable selection techniques. Our county-based approach contrasts with the literature, which focuses on national and state-level data but uses longer time periods to train its models. The paper reports forecasts of popular and electoral college vote outcomes and provides a detailed ex-post evaluation of the forecasts released in real time before the election. It is shown that all of these forecasts outperform autoregressive benchmarks. A pooled national model using One-Covariate-at-a-time-Multiple-Testing (OCMT) variable selection significantly outperformed all models in forecasting the U.S. mainland national vote share and electoral college outcomes (forecasting 236 electoral votes for the Republican party compared to 232 realized). This paper also shows that key determinants of voting outcomes at the county level include incumbency effects, unemployment, poverty, educational attainment, house price changes, and international competitiveness. The results are also supportive of myopic voting: economic fluctuations realized a few months before the election tend to be more powerful predictors of voting outcomes than their long-horizon analogs.
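A much-simplified sketch of the one-covariate-at-a-time multiple testing idea: screen each covariate in a separate univariate regression, keep those whose t-statistics clear a Bonferroni-type threshold, and fit the final regression on the survivors. The actual OCMT procedure iterates and uses its own critical values; the data and variable roles below are synthetic.

```python
# A much-simplified sketch of one-covariate-at-a-time multiple testing (OCMT):
# univariate screening with a multiple-testing cutoff, then OLS on the survivors.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(8)
n_counties, n_covariates = 3000, 50
X = rng.normal(size=(n_counties, n_covariates))            # county characteristics
beta = np.zeros(n_covariates)
beta[[0, 1, 2]] = [0.5, -0.3, 0.2]                         # e.g. incumbency, unemployment, poverty
vote_share = 0.5 + X @ beta * 0.05 + rng.normal(0, 0.1, n_counties)

alpha, selected = 0.05, []
crit = stats.norm.ppf(1 - alpha / (2 * n_covariates))      # Bonferroni-style cutoff
for j in range(n_covariates):
    res = sm.OLS(vote_share, sm.add_constant(X[:, j])).fit()
    if abs(res.tvalues[1]) > crit:
        selected.append(j)

final = sm.OLS(vote_share, sm.add_constant(X[:, selected])).fit()
print("selected covariates:", selected)
print(final.params.round(3))
```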

14.
Forecasting residential burglary
Following the Home Office work of Dhiri et al. [Modelling and predicting property crime trends. Home Office Research Study 198 (1999). London: HMSO], which predicted recorded burglary and theft for England and Wales to the year 2001, econometric and time series models were constructed for predicting recorded residential burglary to the same date. A comparison between the Home Office econometric predictions and the less alarming econometric predictions made in this paper identified the differences as stemming from the particular set of variables used in the models. However, the Home Office model and one of our econometric models adopted an error correction form, which appeared to be the main reason why these models predicted increases in burglary. To identify the role of error correction in these models, time series models were built for the purpose of comparison, all of which predicted substantially lower numbers of residential burglaries. The years 1998–2001 appeared to offer an opportunity to test the utility of error correction models in the analysis of criminal behaviour. Subsequent to the forecasting exercise carried out in 1999, recorded outcomes have materialised, and they point to the superiority of time series models over error correction models for the short-run forecasting of property crime. This result calls into question the concept of a long-run equilibrium relationship for crime.

15.
Structural vs. atheoretic approaches to econometrics
In this paper I attempt to lay out the sources of conflict between the so-called “structural” and “experimentalist” camps in econometrics. Critics of the structural approach often assert that it produces results that rely on too many assumptions to be credible, and that the experimentalist approach provides an alternative that relies on fewer assumptions. Here, I argue that this is a false dichotomy. All econometric work relies heavily on a priori assumptions. The main difference between structural and experimental (or “atheoretic”) approaches is not in the number of assumptions but in the extent to which they are made explicit.

16.
This paper develops methods for stochastic search variable selection (currently popular with regression and vector autoregressive models) for vector error correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
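Stochastic search variable selection is easiest to see in its original linear-regression (spike-and-slab) form, sketched below with a short Gibbs sampler; extending it to restrictions on a VECM's cointegration space, as the paper does, requires additional machinery. Priors and tuning constants are illustrative.

```python
# Sketch: stochastic search variable selection (spike-and-slab) for a linear
# regression -- the building block the paper extends to VECM restrictions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, k = 200, 8
X = rng.normal(size=(n, k))
true_beta = np.array([1.0, -0.8, 0.5, 0, 0, 0, 0, 0])
y = X @ true_beta + rng.normal(0, 1, n)

tau0, tau1 = 0.01, 2.0            # spike and slab prior standard deviations
a0, b0 = 2.0, 1.0                 # inverse-gamma prior for the error variance

beta, gamma, sigma2 = np.zeros(k), np.ones(k, int), 1.0
keep = []
for it in range(6000):
    # 1. beta | gamma, sigma2 (Gaussian)
    d_inv = np.diag(1.0 / np.where(gamma == 1, tau1, tau0) ** 2)
    V = np.linalg.inv(X.T @ X / sigma2 + d_inv)
    m = V @ (X.T @ y) / sigma2
    beta = rng.multivariate_normal(m, V)
    # 2. gamma_j | beta_j (Bernoulli, comparing spike and slab densities)
    p1 = stats.norm.pdf(beta, 0, tau1)
    p0 = stats.norm.pdf(beta, 0, tau0)
    gamma = (rng.uniform(size=k) < p1 / (p0 + p1)).astype(int)
    # 3. sigma2 | beta (inverse gamma)
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
    if it >= 1000:                 # discard burn-in
        keep.append(gamma)

print("posterior inclusion probabilities:", np.mean(keep, axis=0).round(2))
```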

17.
A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations “just above” and “just below” the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function (the specification errors) as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.
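The paper's warning about conventional standard errors can be illustrated with a simple stand-in: fit a polynomial RD regression on a discrete running variable and cluster the standard errors on the running variable's support points. This is not the paper's procedure; the data and functional form below are assumed.

```python
# Sketch: an RD regression with a discrete running variable, with standard errors
# clustered on the running variable's support points. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 5000
running = rng.integers(-10, 11, size=n)            # discrete covariate, threshold at 0
treated = (running >= 0).astype(int)
outcome = 1.0 + 0.05 * running + 0.002 * running**2 + 0.4 * treated + rng.normal(0, 1, n)
df = pd.DataFrame({"y": outcome, "x": running, "d": treated})

model = smf.ols("y ~ d + x + I(x**2)", data=df)
conventional = model.fit()
clustered = model.fit(cov_type="cluster", cov_kwds={"groups": df["x"]})
print("treatment effect:", round(clustered.params["d"], 3))
print("conventional SE:", round(conventional.bse["d"], 3),
      "| clustered-on-x SE:", round(clustered.bse["d"], 3))
```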

18.
We test for the presence of time-varying parameters (TVP) in the long-run dynamics of energy prices for oil, natural gas and coal, within a standard class of mean-reverting models. We also propose residual-based diagnostic tests and examine out-of-sample forecasts. In-sample LR tests support the TVP model for coal and gas but not for oil, though companion diagnostics suggest that the model is too restrictive to conclusively fit the data. Out-of-sample analysis suggests a random-walk specification for the oil price, and TVP models for both real-time forecasting in the case of gas and long-run forecasting in the case of coal.

19.
The task of airline network management is to develop new flight schedule variants and evaluate them in terms of expected passenger demand and revenue. Given the industry’s trend towards global cooperation, this is especially important when evaluating the potential synergies with alliance partners. From the econometric point of view, this task represents a discrete choice modelling problem in which one has to account for a large number of dependent alternatives. In this paper we discuss the applicability of recently proposed approaches and introduce a new multinomial probit specification designed for the airline network management task. The superior performance of the new model is demonstrated in a real-world application using airline bookings data.

20.
Forecast combination through dimension reduction techniques
This paper considers several methods of producing a single forecast from several individual ones. We compare “standard” but hard-to-beat combination schemes (such as the average of forecasts at each period, or consensus forecast, and OLS-based combination schemes) with more sophisticated alternatives that involve dimension reduction techniques. Specifically, we consider principal components, dynamic factor models, partial least squares and sliced inverse regression. Our source of forecasts is the Survey of Professional Forecasters, which provides forecasts for the main US macroeconomic aggregates. The forecasting results show that partial least squares, principal component regression and factor analysis have similar performances (better than the usual benchmark models), but sliced inverse regression shows extreme behavior (it performs either very well or very poorly).
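A minimal sketch of combination through dimension reduction: stack the individual forecasts as columns, extract a few principal components (and, alternatively, PLS factors), regress the realized outcome on them, and compare with the simple average. The synthetic panel below stands in for the Survey of Professional Forecasters.

```python
# Sketch: forecast combination through dimension reduction, compared with the
# simple average of the individual forecasts. The forecast panel is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
T, n_forecasters = 120, 30
actual = rng.normal(2.0, 1.0, T)                           # e.g. quarterly GDP growth
forecasts = actual[:, None] + rng.normal(0, 0.8, (T, n_forecasters)) \
            + rng.normal(0, 0.3, (1, n_forecasters))       # idiosyncratic noise + bias

split = 90
def mse(pred):
    return np.mean((actual[split:] - pred) ** 2)

# Simple average (the hard-to-beat benchmark)
avg = forecasts[split:].mean(axis=1)

# Principal component regression on the panel of forecasts
pca = PCA(n_components=3).fit(forecasts[:split])
pcr = LinearRegression().fit(pca.transform(forecasts[:split]), actual[:split])
pcr_pred = pcr.predict(pca.transform(forecasts[split:]))

# Partial least squares with the same number of factors
pls = PLSRegression(n_components=3).fit(forecasts[:split], actual[:split])
pls_pred = pls.predict(forecasts[split:]).ravel()

print({"average": round(mse(avg), 3), "PCR": round(mse(pcr_pred), 3),
       "PLS": round(mse(pls_pred), 3)})
```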
