Similar Articles
20 similar articles found.
1.
Standard selection criteria for forecasting models focus on information that is calculated for each series independently, disregarding the general tendencies and performance of the candidate models. In this paper, we propose a new way to perform statistical model selection and model combination that incorporates the base rates of the candidate forecasting models, which are then revised so that the per-series information is taken into account. We examine two schemes that are based on the precision and sensitivity information from the contingency table of the base rates. We apply our approach to pools of either exponential smoothing or ARMA models, considering both simulated and real time series, and show that our schemes work better than standard statistical benchmarks. We test the significance and sensitivity of our results, discuss the connection of our approach to other cross-learning approaches, and offer insights regarding implications for theory and practice.
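
A minimal numpy sketch of the base-rate idea (an interpretation of the scheme, not the paper's exact algorithm): per-series Akaike weights are revised by each candidate model's precision, computed from a hypothetical contingency table of in-sample choices versus out-of-sample winners.

```python
import numpy as np

# Hypothetical contingency table of base rates: rows = model chosen
# in-sample (e.g. by AIC), columns = model that was actually best
# out-of-sample, counted over a pool of reference series.
contingency = np.array([[60, 15, 10],
                        [20, 50, 15],
                        [10, 20, 55]], dtype=float)

# Precision of each candidate: of the series where model j was chosen,
# how often was it truly the best? (diagonal / row sums)
precision = np.diag(contingency) / contingency.sum(axis=1)

def revised_weights(aic):
    """Blend per-series AIC weights with the models' base-rate precision."""
    aic = np.asarray(aic, dtype=float)
    # Standard per-series Akaike weights.
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()
    # Revise by the precision base rates and renormalise (Bayes-like update).
    w_rev = w * precision
    return w_rev / w_rev.sum()

# Example: per-series AIC slightly favours model 0; base rates temper the choice.
print(revised_weights([101.2, 102.0, 104.5]))
```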

2.
This paper presents a careful investigation of three popular calibration weighting methods: (i) generalised regression; (ii) generalised exponential tilting; and (iii) generalised pseudo empirical likelihood, with a major focus on the computational aspects of the methods and some empirical evidence on calibrated weights. We also propose a simple weight trimming method for range-restricted calibration. The finite sample behaviour of the weights obtained by the three calibration weighting methods and the effectiveness of the proposed weight trimming method are examined through limited simulation studies.
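
A minimal sketch of generalised regression (chi-square distance) calibration in numpy, with an illustrative range-restricted trim; the data, population totals and trimming bounds are all made up for illustration, and the trim shown is a simplification of the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
x = np.column_stack([np.ones(n), rng.gamma(2.0, 1.0, n)])  # auxiliary variables
d = rng.uniform(1.0, 3.0, n)                               # design weights
totals = np.array([500.0, 1000.0])                         # known population totals

# Generalised regression (chi-square distance) calibration:
# w_i = d_i * (1 + x_i' lam), with lam chosen so that sum_i w_i x_i = totals.
T = (d[:, None] * x).T @ x
lam = np.linalg.solve(T, totals - d @ x)
w = d * (1.0 + x @ lam)

print(np.allclose(w @ x, totals))  # calibration constraints hold exactly

# A simple range-restricted trim (illustrative, not the paper's exact rule):
# clip the weight ratios w/d into [L, U]; a full implementation would then
# re-solve the calibration on the untrimmed units.
L, U = 0.5, 2.5
w_trim = d * np.clip(w / d, L, U)
```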

3.
We investigate the added value of combining density forecasts focused on a specific region of support. We develop forecast combination schemes that assign weights to individual predictive densities based on the censored likelihood scoring rule and the continuous ranked probability scoring rule (CRPS), and compare these to weighting schemes based on the log score and to the equally weighted scheme. We apply this approach in the context of measuring downside risk in equity markets, using recently developed volatility models, including HEAVY, realized GARCH and GAS models, applied to daily returns on the S&P 500, DJIA, FTSE and Nikkei indexes from 2000 to 2013. The results show that combined density forecasts based on optimizing the censored likelihood scoring rule significantly outperform pooling based on equal weights, on optimizing the CRPS, or on optimizing the log score. In addition, 99% Value-at-Risk estimates improve when weights are based on the censored likelihood scoring rule.
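
A small sketch of tail-focused weight optimisation under the censored likelihood scoring rule, assuming two hypothetical Gaussian predictive densities and using scipy for the one-dimensional optimisation; the data are simulated stand-ins for returns.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.standard_t(df=5, size=1000)      # stand-in for daily returns

# Two hypothetical predictive densities, applied to every observation.
f1 = norm(0, 1.0)
f2 = norm(0, 1.5)

r = np.quantile(y, 0.10)                 # focus on the left tail (downside risk)

def neg_censored_likelihood(w):
    """Average censored likelihood score of the pool w*f1 + (1-w)*f2.
    Observations outside the region of interest enter only through the
    pooled probability mass above the threshold r."""
    pdf = w * f1.pdf(y) + (1 - w) * f2.pdf(y)
    tail_mass = w * f1.cdf(r) + (1 - w) * f2.cdf(r)
    score = np.where(y <= r, np.log(pdf), np.log1p(-tail_mass))
    return -score.mean()

res = minimize_scalar(neg_censored_likelihood, bounds=(0.0, 1.0), method="bounded")
print("optimal tail-focused weight on model 1:", res.x)
```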

4.
This paper considers the problem of forecasting under continuous and discrete structural breaks, and proposes weighting observations to obtain forecasts that are optimal in the mean squared forecast error (MSFE) sense. We derive optimal weights for one-step-ahead forecasts. Under continuous breaks, our approach largely recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for optimal weights in models with a single regressor, and asymptotically valid weights for models with more than one regressor. It is shown that in these cases the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust optimal weights is proposed. The relative performance of our proposed approach is investigated using Monte Carlo experiments and an empirical application to forecasting real GDP using the yield curve across nine industrial economies.
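
An illustrative sketch of the two weighting regimes described above: exponentially declining weights (the continuous-break case) and weights that are constant within a regime and differ only across regimes (the discrete-break case). The regime tilt used here is a stand-in, not the paper's derived optimum.

```python
import numpy as np

def weighted_forecast(y, alpha=0.2):
    """One-step-ahead forecast as a weighted average of past observations,
    with exponentially declining weights -- the scheme the optimal weights
    largely recover under continuous structural change."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    w = alpha * (1 - alpha) ** np.arange(T - 1, -1, -1)  # heaviest on recent obs
    w /= w.sum()                                          # normalise in finite samples
    return w @ y

def regime_weighted_forecast(y, break_point, tilt=0.5):
    """Illustrative discrete-break version: the weight is constant within each
    regime and differs only across regimes; pre-break observations get a
    fraction `tilt` of the post-break weight (a placeholder choice)."""
    y = np.asarray(y, dtype=float)
    w = np.where(np.arange(len(y)) >= break_point, 1.0, tilt)
    return (w / w.sum()) @ y

rng = np.random.default_rng(2)
y = np.r_[rng.normal(0, 1, 80), rng.normal(1, 1, 20)]   # mean shift at t = 80
print(weighted_forecast(y), regime_weighted_forecast(y, break_point=80))
```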

5.
Inference about conditional independence in relation to log-linear models is discussed for contingency tables. The parameters and likelihood ratios for a log-linear model with a dependent variable are shown to be identical to those for a multivariate model. An approximate method of calculating log likelihood ratios is derived, even when all dimensions of the table have more than two levels (i.e., no binary variables). The implications for sociological "causal" models are discussed.
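
A sketch of the log likelihood ratio (G²) calculation for conditional independence in a three-way contingency table, which works when all dimensions have more than two levels; the Poisson toy table is made up.

```python
import numpy as np
from scipy.stats import chi2

def g2_conditional_independence(table):
    """Log likelihood ratio (G^2) test of A independent of B given C, for a
    three-way contingency table with axes (A, B, C)."""
    n = np.asarray(table, dtype=float)
    I, J, K = n.shape
    n_ik = n.sum(axis=1)          # A x C margins
    n_jk = n.sum(axis=0)          # B x C margins
    n_k = n.sum(axis=(0, 1))      # C margin
    # Expected counts under conditional independence: n_{i+k} n_{+jk} / n_{++k}.
    expected = n_ik[:, None, :] * n_jk[None, :, :] / n_k[None, None, :]
    mask = n > 0                  # 0 * log(0) contributes zero
    g2 = 2.0 * np.sum(n[mask] * np.log(n[mask] / expected[mask]))
    df = K * (I - 1) * (J - 1)
    return g2, chi2.sf(g2, df)

table = np.random.default_rng(4).poisson(10, size=(3, 3, 3))  # toy 3x3x3 table
print(g2_conditional_independence(table))
```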

6.
In this paper we investigate a spatial Durbin error model with finite distributed lags and consider Bayesian MCMC estimation of the model with a smoothness prior. We also study the corresponding Bayesian model selection procedure for the spatial Durbin error model, the spatial autoregressive (SAR) model and the matrix exponential spatial specification (MESS) model. We derive expressions for the marginal likelihood of the three models, which greatly simplify the model selection procedure. Simulation results suggest that the Bayesian estimates of high-order spatial distributed lag coefficients are more precise than the maximum likelihood estimates. When the data are generated with a general declining pattern or a unimodal pattern for the lag coefficients, the spatial Durbin error model captures the pattern better than the SAR and MESS models in most cases. We apply the procedure to study the effect of right-to-work (RTW) laws on manufacturing employment.

7.
Multinomial and ordered logit models are quantitative techniques used in a wide range of disciplines. When applying these techniques, practitioners usually select a single model using either information-based criteria or pretesting. In this paper, we consider the alternative strategy of combining models rather than selecting a single one. Our strategy for choosing the weights of the candidate models is based on minimizing a plug-in estimator of the asymptotic squared error risk of the model average estimator. Theoretical justifications of this model averaging strategy are provided, and a Monte Carlo study shows that the forecasts produced by the proposed strategy are often more accurate than those produced by other common model selection and model averaging strategies, especially when the regressors are only mildly to moderately correlated and the true model contains few zero coefficients. An empirical example based on credit rating data illustrates the proposed method. To reduce the computational burden, we also consider a model screening step that eliminates some of the very poor models before averaging.

8.
The recent housing market boom and bust in the United States illustrates that real estate returns are characterized by short-term positive serial correlation and long-term mean reversion to fundamental values. We develop an econometric model that includes these two components, but with weights that vary dynamically through time depending on recent forecasting performance. The smooth transition weighting mechanism can assign more weight to positive serial correlation in boom times, and more weight to reversion to fundamental values during downturns. We estimate the model on US national house price index data. In-sample, the switching mechanism significantly improves the fit of the model. In an out-of-sample forecasting assessment, the model performs better than competing benchmark models.
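
A toy sketch of a smooth transition weighting mechanism of this kind, with a logistic weight driven by the components' recent forecast errors; the component forecasts, transition speed `gamma` and simulated data are placeholders rather than the paper's estimated model.

```python
import numpy as np

def smooth_transition_forecast(returns, gap, gamma=2.0, window=8):
    """One-step forecasts blending a momentum term (serial correlation in
    returns) and a mean-reversion term (the gap to fundamental value), with
    a logistic weight driven by recent forecast errors. `gap` is a stand-in
    for log(price) - log(fundamental)."""
    r = np.asarray(returns, dtype=float)
    momentum = np.r_[0.0, r[:-1]]          # forecast: last period's return
    reversion = -0.1 * gap                 # forecast: partial gap correction
    e_m = (r - momentum) ** 2
    e_r = (r - reversion) ** 2
    fc = np.empty_like(r)
    for t in range(len(r)):
        if t < window:
            w = 0.5                        # no track record yet: equal weight
        else:
            # Relative recent performance; positive -> reversion did worse.
            d = np.mean(e_r[t - window:t] - e_m[t - window:t])
            w = 1.0 / (1.0 + np.exp(-gamma * d))   # weight on momentum
        fc[t] = w * momentum[t] + (1 - w) * reversion[t]
    return fc

rng = np.random.default_rng(5)
gap = np.cumsum(rng.normal(0, 0.01, 200))                  # toy valuation gap
ret = 0.5 * np.r_[0.0, np.diff(gap)] + rng.normal(0, 0.01, 200)
print(smooth_transition_forecast(ret, gap)[:5])
```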

9.
We investigate the finite sample properties of a large number of estimators for the average treatment effect on the treated that are suitable when adjustment for observed covariates is required, such as inverse probability weighting, kernel matching and other variants of matching, as well as different parametric models. The simulation design used is based on real data typically employed for the evaluation of labour market programmes in Germany. We vary several dimensions of the design that are of practical importance, such as the sample size, the type of outcome variable, and aspects of the selection process. We find that trimming individual observations with too much weight, as well as the choice of tuning parameters, is important for all estimators. A conclusion from our simulations is that a particular radius matching estimator combined with regression performs best overall, in particular when robustness to misspecification of the propensity score and to different types of outcome variables is considered an important property.
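
A compact sketch of one estimator from this menu: inverse probability weighting for the ATT, with propensity scores from scikit-learn and a simple cap on extreme weights; the data-generating process is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 3))                             # observed covariates
p_true = 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3])))      # true selection process
D = rng.binomial(1, p_true)                             # treatment indicator
Y = X @ [1.0, 1.0, 0.0] + 0.5 * D + rng.normal(size=n)  # outcome; true ATT = 0.5

# Estimate the propensity score.
ps = LogisticRegression(max_iter=1000).fit(X, D).predict_proba(X)[:, 1]

# IPW estimator of the ATT: controls are re-weighted by p(x) / (1 - p(x)).
w = ps / (1 - ps)

# Trim individual observations with too much weight (here: cap control
# weights at their 99th percentile), which the simulations flag as important.
cap = np.quantile(w[D == 0], 0.99)
w = np.minimum(w, cap)

att = Y[D == 1].mean() - np.average(Y[D == 0], weights=w[D == 0])
print("ATT estimate:", att)
```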

10.
This paper reviews a spreadsheet-based forecasting approach that a process industry manufacturer developed and implemented to link annual corporate forecasts with its manufacturing/distribution operations. First, we consider how this forecasting system supports overall production planning and why it must be compatible with corporate forecasts. We then review the results of substantial testing of variations on the Winters three-parameter exponential smoothing model on 28 actual product family time series. In particular, we evaluate whether the use of damping parameters improves forecast accuracy. The paper concludes that a Winters four-parameter model (i.e. the standard Winters three-parameter model augmented by a fourth parameter to damp the trend) provides the most accurate forecasts of the models evaluated. Our application confirms that there are situations in which damped trend parameters are beneficial in short-run exponential smoothing based forecasting models.
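
A sketch of the four-parameter (damped-trend) additive Winters recursions; the smoothing constants, crude initialisation and simulated monthly series are illustrative, not the paper's fitted values.

```python
import numpy as np

def winters_damped(y, m, alpha=0.3, beta=0.1, gamma=0.1, phi=0.95, h=12):
    """Additive Holt-Winters with a damping parameter phi on the trend --
    the 'four-parameter' variant. Initialisation is deliberately crude; a
    production version would optimise it along with the parameters."""
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(m, len(y)):
        s = season[t - m]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * s)
    # h-step forecasts: the damped trend contributes phi + phi^2 + ... + phi^h.
    damp = np.cumsum(phi ** np.arange(1, h + 1))
    seasonal = np.array([season[len(y) - m + (k % m)] for k in range(h)])
    return level + damp * trend + seasonal

t = np.arange(120)                                        # 10 years of monthly data
y = (10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12)
     + np.random.default_rng(7).normal(0, 0.3, 120))
print(winters_damped(y, m=12)[:4])
```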

11.
A time-varying probability density function, or the corresponding cumulative distribution function, may be estimated nonparametrically by using a kernel and weighting the observations according to schemes derived from time series modelling. The parameters, including the bandwidth, may be estimated by maximum likelihood or cross-validation. Diagnostic checks may be carried out directly on residuals given by the predictive cumulative distribution function. Since tracking the distribution is only viable if it changes relatively slowly, the technique may need to be combined with a filter for scale and/or location. The methods are applied to data on the NASDAQ index and the Hong Kong and Korean stock market indices.
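
A minimal sketch of the idea: a Gaussian kernel in the observation space combined with exponentially declining (EWMA-style) time weights, so the density estimate adapts to slow change. The bandwidth and discount factor are placeholders that would normally be estimated by maximum likelihood or cross-validation.

```python
import numpy as np

def ewma_kernel_density(y, grid, bandwidth=0.5, lam=0.98):
    """Time-varying density estimate: Gaussian kernel across observations,
    exponentially declining weights across time."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    w = lam ** np.arange(T - 1, -1, -1)          # heaviest weight on recent obs
    w /= w.sum()
    u = (grid[:, None] - y[None, :]) / bandwidth
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (kern * w[None, :]).sum(axis=1) / bandwidth

rng = np.random.default_rng(8)
# Volatility doubles halfway through the sample: the recent regime dominates.
y = np.r_[rng.normal(0, 1, 500), rng.normal(0, 2, 500)]
grid = np.linspace(-8, 8, 201)
f_hat = ewma_kernel_density(y, grid)
print(f_hat.sum() * (grid[1] - grid[0]))         # integrates to ~1
```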

12.
I examine the effects of insurance status and managed care on hospitalization spells, and develop a new approach for sample selection problems in parametric duration models. Maximum likelihood estimation of the Flexible Parametric Selection (FPS) model does not require numerical integration or simulation techniques. I discuss its application to the exponential, Weibull, log-logistic and gamma duration models. Applying the model to the hospitalization data indicates that the FPS model may be preferred even in cases in which other parametric approaches are available. Copyright © 2002 John Wiley & Sons, Ltd.

13.
The well-developed ETS (ExponenTial Smoothing, or Error, Trend, Seasonality) method incorporates a family of exponential smoothing models in state space representation and is widely used for automatic forecasting. The existing ETS method uses information criteria for model selection, choosing the model with the smallest information criterion among all models fitted to a given time series. Under such a model selection scheme, the ETS method suffers from high computational cost when applied to large-scale time series data. To tackle this issue, we propose an efficient approach to ETS model selection that trains classifiers on simulated data to predict appropriate model component forms for a given time series. A simulation study demonstrates the model selection ability of the proposed approach. We evaluate our approach on the widely used M4 forecasting competition dataset in terms of both point forecasts and prediction intervals. To demonstrate the practical value of our method, we showcase the performance improvements from our approach on a monthly hospital dataset.

14.
15.
This paper is concerned with the search for locally optimal designs when the observations of the response variable arise from a weighted distribution in the exponential family. Locally optimal designs are derived for regression models in which the response follows a weighted version of the normal, gamma, inverse Gaussian, Poisson or binomial distribution. Some conditions are given under which the optimal designs for the weighted and original (non-weighted) distributions coincide. An efficiency study is performed to assess the behavior of the D-optimal designs for the original distribution when they are used to estimate models with weighted distributions.

16.
Accelerated life testing of products is used to obtain information quickly on their lifetime distributions. This paper discusses a k-stage step-stress accelerated life test under progressive type I censoring with grouped data. An exponential lifetime distribution with mean life that is a log-linear function of stress is considered, together with a cumulative exposure model. We use the maximum likelihood method to obtain the estimators of the model parameters. The methods for obtaining the optimum test plan are investigated using the variance-optimality and D-optimality criteria. Some numerical studies are discussed to illustrate the proposed criteria.

17.
Exponential smooth transition autoregressive (ESTAR) models are widely used in the international finance literature, particularly for the modelling of real exchange rates. We show that the exponential function is ill-suited as a regime weighting function because of two undesirable properties. Firstly, it can be well approximated by a quadratic function in the threshold variable whenever the transition function parameter γ, which governs the shape of the function, is 'small'. This leads to an identification problem with respect to the transition function parameter and the slope vector, as both enter as a product into the conditional mean of the model. Secondly, the exponential regime weighting function can behave like an indicator function (or dummy variable) for very large values of γ. This has the effect of 'spuriously overfitting' a small number of observations around the location parameter μ. We show that both of these effects lead to estimation problems in ESTAR models. We illustrate this by means of an empirical replication of a widely cited study, as well as a simulation exercise.
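
Both pathologies are easy to reproduce numerically; a short sketch with made-up values of γ:

```python
import numpy as np

def estar_weight(y, gamma, mu=0.0):
    """Exponential regime weighting function of the ESTAR model."""
    return 1.0 - np.exp(-gamma * (y - mu) ** 2)

y = np.linspace(-3, 3, 7)

# Property 1: for 'small' gamma, G is close to its quadratic approximation
# gamma * (y - mu)^2, so gamma and the slope vector are poorly identified
# (they enter the conditional mean as a product).
g = 0.01
print(np.max(np.abs(estar_weight(y, g) - g * y ** 2)))   # tiny approximation error

# Property 2: for very large gamma, G behaves like an indicator function,
# 'spuriously overfitting' the few observations near mu.
print(estar_weight(y, gamma=500.0))                       # ~0 at mu, ~1 elsewhere
```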

18.
We consider the properties of weighted linear combinations of prediction models, or linear pools, evaluated using the log predictive scoring rule. Although exactly one model receives limiting posterior probability of one, an optimal linear combination typically includes several models with positive weights. We derive several interesting results: for example, a model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with six prediction models. In this example, models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools.
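
A sketch of an optimal linear pool under the log predictive score, using three hypothetical Gaussian candidate densities and a brute-force grid over the weight simplex (adequate for three models); the data are simulated stand-ins for returns.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
y = rng.standard_t(df=4, size=2000)           # stand-in for S&P 500 returns

# Predictive densities of the candidate models, evaluated at the realisations.
dens = np.column_stack([norm(0, s).pdf(y) for s in (0.8, 1.2, 2.0)])

# Optimal linear pool: maximise the average log predictive score over the
# weight simplex. Note that several models typically get positive weight.
best, best_w = -np.inf, None
for w1 in np.linspace(0, 1, 101):
    for w2 in np.linspace(0, 1 - w1, 101):
        w = np.array([w1, w2, 1 - w1 - w2])
        score = np.mean(np.log(dens @ w))
        if score > best:
            best, best_w = score, w
print("optimal pool weights:", best_w, "log score:", best)
```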

19.
This paper discusses pooling versus model selection for nowcasting with large datasets in the presence of model uncertainty. In practice, nowcasting a low-frequency variable with a large number of high-frequency indicators should account for at least two data irregularities: (i) unbalanced data with missing observations at the end of the sample due to publication delays; and (ii) different sampling frequencies of the data. Two model classes suited to this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst other things, the factor estimation method and the number of factors, the lag length and the indicator selection. Thus there are many sources of misspecification when selecting a particular model, and an alternative would be pooling over a large set of different model specifications. We evaluate the relative performance of pooling and model selection for nowcasting quarterly GDP in six large industrialized countries. We find that the nowcast performance of single models varies considerably over time, in line with the forecasting literature. Model selection based on sequential application of information criteria can outperform benchmarks, but the results depend strongly on the selection method chosen. In contrast, pooling of nowcast models provides a very stable nowcast performance over time. Copyright © 2012 John Wiley & Sons, Ltd.

20.
This paper proposes two new weighting schemes that average forecasts based on different estimation windows in order to account for possible structural change. The first scheme weights the forecasts according to the values of reversed ordered CUSUM (ROC) test statistics, while the second simply assigns heavier weights to forecasts that use more recent information. Simulation results show that, when structural breaks are present, forecasts based on the first weighting scheme outperform those based on a procedure that simply uses ROC tests to choose and forecast from a single post-break estimation window. Combination forecasts based on our second weighting scheme outperform equally weighted combination forecasts. An empirical application based on a NAIRU Phillips curve model for the G7 countries illustrates these findings, and also shows that combination forecasts can outperform the random walk forecasting model.
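
A toy version of the second weighting scheme: one-step forecasts from estimation windows of different lengths are combined with geometrically heavier weights on windows that use more recent information. The window set, discount factor and simple window-mean "models" are placeholders for illustration.

```python
import numpy as np

def window_combination_forecast(y, windows, delta=0.9):
    """Combine one-step forecasts from different estimation windows, with
    heavier weights on forecasts that rely on more recent information.
    Here each 'model' is a trailing window mean; weights decline
    geometrically in the length of the window."""
    y = np.asarray(y, dtype=float)
    forecasts = np.array([y[-m:].mean() for m in windows])  # shorter = more recent
    w = np.array([delta ** (m - min(windows)) for m in windows])
    w /= w.sum()
    return w @ forecasts

rng = np.random.default_rng(10)
# A mean shift 30 observations before the end of the sample.
y = np.r_[rng.normal(0, 1, 170), rng.normal(2, 1, 30)]
print(window_combination_forecast(y, windows=[20, 50, 100, 200]))
```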
