Similar Documents
A total of 20 similar documents were found (search time: 750 ms).
1.
宋凡  杨磊 《价值工程》2012,(27):72-74
Based on historical data for Dafeng Port, this paper studies the prediction of the port's cargo throughput using time-series regression analysis. Within the regression framework, an exponential model, a higher-order polynomial model, and a support vector machine (SVM) method are each used, after comparison, to simulate and predict the port's cargo throughput. The predictions of the three approaches are compared and, in light of the actual conditions at Dafeng Port, the SVM method is ultimately selected to produce throughput forecasts for 2012–2016.
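For a concrete sense of the three candidate models, the sketch below fits an exponential trend, a cubic polynomial, and an SVR to a small illustrative throughput series and extrapolates five years ahead. The figures, the sample period, and all SVR hyperparameters are assumptions for illustration, not the paper's actual data or settings.

```python
# A minimal sketch of the model comparison described above, on invented data.
import numpy as np
from sklearn.svm import SVR

years = np.arange(2003, 2012)                      # hypothetical sample period
tput = np.array([310, 380, 470, 590, 720, 900, 1110, 1400, 1750], float)

t = (years - years[0]).reshape(-1, 1)

# Exponential model: log-linear least squares, y = a * exp(b t)
b, log_a = np.polyfit(t.ravel(), np.log(tput), 1)
exp_pred = lambda tt: np.exp(log_a) * np.exp(b * tt)

# Higher-order polynomial model (cubic here)
poly = np.poly1d(np.polyfit(t.ravel(), tput, 3))

# Support vector regression on the time index (hyperparameters illustrative)
svr = SVR(kernel="rbf", C=1e4, gamma=0.1, epsilon=10.0).fit(t, tput)

t_new = np.arange(len(years), len(years) + 5).reshape(-1, 1)   # 2012-2016
print("exponential :", exp_pred(t_new.ravel()).round(0))
print("polynomial  :", poly(t_new.ravel()).round(0))
print("SVR         :", svr.predict(t_new).round(0))
```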

2.
This paper aims to demonstrate a possible aggregation gain in predicting future aggregates under a practical assumption of model misspecification. Empirical analysis of a number of economic time series suggests that the use of the disaggregate model is not always preferred over the aggregate model in predicting future aggregates, in terms of an out-of-sample prediction root-mean-square error criterion. One possible justification of this interesting phenomenon is model misspecification. In particular, if the model fitted to the disaggregate series is misspecified (i.e., not the true data generating mechanism), then the forecast made by a misspecified model is not always the most efficient. This opens up an opportunity for the aggregate model to perform better. It will be of interest to find out when the aggregate model helps. In this paper, we study a framework where the underlying disaggregate series has a periodic structure. We derive and compare the efficiency loss in linear prediction of future aggregates using the adapted disaggregate model and aggregate model. Some scenarios for aggregation gain to occur are identified. Numerical results show that the aggregate model helps over a fairly large region in the parameter space of the periodic model that we studied.
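The following toy Monte Carlo illustrates the mechanism under stated assumptions: monthly data follow a periodic AR(1), the disaggregate forecaster fits a misspecified ordinary AR(1), and the aggregate forecaster fits an AR(1) to annual sums. All parameter values are illustrative, not the paper's design.

```python
# Toy comparison of disaggregate vs. aggregate forecasts of an annual sum.
import numpy as np

rng = np.random.default_rng(0)
phi = np.tile([0.9, 0.1], 6)          # periodic AR(1) coefficients, period 12
n_years, reps = 40, 500
err_dis, err_agg = [], []

def fit_ar1(x):
    y, z = x[1:], x[:-1]
    return (z @ y) / (z @ z)

for _ in range(reps):
    x = np.zeros(12 * (n_years + 1))
    for t in range(1, len(x)):
        x[t] = phi[t % 12] * x[t - 1] + rng.standard_normal()
    train, future = x[:12 * n_years], x[12 * n_years:]
    target = future.sum()

    # Misspecified disaggregate AR(1): iterate 12 steps ahead, sum the path
    a = fit_ar1(train)
    path = [train[-1]]
    for _ in range(12):
        path.append(a * path[-1])
    err_dis.append(sum(path[1:]) - target)

    # Aggregate AR(1) fitted to annual sums, one-step forecast
    annual = train.reshape(n_years, 12).sum(axis=1)
    b = fit_ar1(annual)
    err_agg.append(b * annual[-1] - target)

print("RMSE disaggregate:", np.sqrt(np.mean(np.square(err_dis))))
print("RMSE aggregate   :", np.sqrt(np.mean(np.square(err_agg))))
```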

3.
Covariate Measurement Error in Quadratic Regression
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
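A minimal simulation of the attenuation effect and the expanded regression calibration fix, assuming the measurement error variance is known and the true predictor is normal (the case in which the expanded calibration approximation is exact); all coefficients are illustrative.

```python
# Naive fit on the surrogate flattens curvature; calibration recovers it.
import numpy as np

rng = np.random.default_rng(1)
n, s2u = 5000, 0.5
x = rng.normal(0.0, 1.0, n)                  # true predictor
w = x + rng.normal(0.0, np.sqrt(s2u), n)     # error-prone surrogate
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.5, n)

def ols(design, target):
    return np.linalg.lstsq(design, target, rcond=None)[0]

# Naive fit: regress y on the mismeasured w and w^2
naive = ols(np.column_stack([np.ones(n), w, w**2]), y)

# Expanded regression calibration: replace w, w^2 by E[x|w], E[x^2|w]
mu, s2w = w.mean(), w.var()
lam = (s2w - s2u) / s2w                      # reliability ratio
ex = mu + lam * (w - mu)                     # E[x | w] under normality
ex2 = ex**2 + lam * s2u                      # E[x^2 | w] = E[x|w]^2 + Var(x|w)
calib = ols(np.column_stack([np.ones(n), ex, ex2]), y)

print("true       : [ 1.0  2.0 -1.5]")
print("naive      :", naive.round(2))        # curvature attenuated
print("calibrated :", calib.round(2))
```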

4.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting “horse-race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that several of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
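The sketch below illustrates one leg of such a horse race under simulated data: principal components extracted from a large predictor panel combined with an autoregressive lag in a ridge regression, evaluated by out-of-sample MSFE. The dimensions, factor count, and ridge penalty are assumptions for illustration.

```python
# Factor-augmented ridge forecast on a simulated "big data" panel.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
T, N, h = 200, 100, 1
f = rng.standard_normal((T, 3))                         # latent factors
X = f @ rng.standard_normal((3, N)) + rng.standard_normal((T, N))
y = 0.5 * f[:, 0] + 0.3 * f[:, 1] + 0.2 * rng.standard_normal(T)

factors = PCA(n_components=3).fit_transform((X - X.mean(0)) / X.std(0))
Z = np.column_stack([factors[:-h], y[:-h]])             # factors + AR(1) lag
model = Ridge(alpha=1.0).fit(Z[:150], y[h:h + 150])
pred = model.predict(Z[150:])
mse = np.mean((pred - y[h + 150:]) ** 2)
print("out-of-sample MSFE of the factor+ridge model:", round(mse, 3))
```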

5.
An important statistical application is the problem of determining an appropriate set of input variables for modelling a response variable. In such an application, candidate models are characterized by which input variables are included in the mean structure. A reasonable approach to gauging the propriety of a candidate model is to define a discrepancy function through the prediction error associated with this model. An optimal set of input variables is then determined by searching for the candidate model that minimizes the prediction error. In this paper, we focus on a Bayesian approach to estimating a discrepancy function based on prediction error in linear regression. It is shown how this approach provides an informative method for quantifying model selection uncertainty.

6.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems to which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We can establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

7.
杨晓东 《价值工程》2014,(24):34-35
Because it is difficult to machine an ultra-smooth surface on soft, brittle single-crystal materials such as KDP using traditional grinding and polishing, single-point diamond turning is now widely adopted both in China and abroad. To enable optimal selection of cutting parameters and to predict and control the machined surface quality before machining, this paper establishes a regression-based prediction model for the surface roughness of ultra-precision-cut KDP crystals and further optimizes the model's cutting parameters using optimization-design software. Experimental results show that the regression-based surface roughness prediction model for ultra-precision cutting of KDP crystals is reliable and effective.
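A hedged sketch of the regression-modelling step: a commonly used empirical power-law form Ra = C · v^a · f^b · ap^c is fitted by log-linear least squares and then minimized over a parameter grid (standing in for the paper's optimization software). The functional form and every measurement below are illustrative assumptions, not the paper's model or data.

```python
# Power-law roughness model fitted in logs, then grid-searched for a minimum.
import numpy as np
from itertools import product

# (spindle speed rpm, feed um/r, depth of cut um) -> measured Ra (nm), invented
data = np.array([
    [1000, 2, 2, 4.1], [1000, 4, 3, 6.8], [1500, 2, 3, 3.9],
    [1500, 4, 2, 5.9], [2000, 2, 2, 3.2], [2000, 4, 3, 5.5],
])
X = np.column_stack([np.ones(len(data)), np.log(data[:, 0]),
                     np.log(data[:, 1]), np.log(data[:, 2])])
coef, *_ = np.linalg.lstsq(X, np.log(data[:, 3]), rcond=None)

def ra(v, f, ap):
    return np.exp(coef[0]) * v**coef[1] * f**coef[2] * ap**coef[3]

# Grid search standing in for the optimization-software step in the paper
grid = product([1000, 1500, 2000], [1, 2, 3, 4], [1, 2, 3])
best = min(grid, key=lambda p: ra(*p))
print("predicted-optimal (v, f, ap):", best, "-> Ra =", round(ra(*best), 2), "nm")
```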

8.
In the empirical analysis of panel data the Breusch–Pagan (BP) statistic has become a standard tool to infer on unobserved heterogeneity over the cross-section. Put differently, the test statistic is central to discriminate between the pooled regression and the random effects model. Conditional versions of the test statistic have been provided to immunize inference on unobserved heterogeneity against random time effects or patterns of spatial error correlation. Panel data models with spatially correlated error terms are typically set out under the presumption of some known adjacency matrix parameterizing the correlation structure up to a scaling factor. This paper delivers a bootstrap scheme to generate critical values for the BP statistic allowing robust inference under misspecification of the adjacency matrix. Moreover, asymptotic results are derived for the case of a finite cross-section and infinite time dimension. Finite sample simulations show that misspecification of spatial covariance features could lead to large size distortions, while the robust bootstrap procedure retains asymptotic validity.
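A minimal sketch of the general idea, not the paper's exact scheme: the Breusch–Pagan LM statistic computed from pooled OLS residuals, with bootstrap critical values obtained by resampling whole cross-sections over time so that the (possibly misspecified) spatial correlation is preserved. All dimensions and the error covariance below are illustrative.

```python
# BP LM statistic for random effects with a spatial-robust bootstrap sketch.
import numpy as np

rng = np.random.default_rng(3)
N, T = 30, 20
Sig = 0.5 * np.ones((N, N)) + 0.5 * np.eye(N)    # equicorrelated errors
x = rng.standard_normal((T, N))
e = rng.standard_normal((T, N)) @ np.linalg.cholesky(Sig).T
y = 1.0 + 2.0 * x + e                            # no individual effects (H0)

# Pooled OLS residuals, reshaped to T x N
X = np.column_stack([np.ones(N * T), x.ravel()])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
res = (y.ravel() - X @ beta).reshape(T, N)

def bp(r):
    num = (r.sum(axis=0) ** 2).sum()             # sum over i of (sum over t)^2
    return N * T / (2 * (T - 1)) * (num / (r ** 2).sum() - 1) ** 2

stat = bp(res)
# Resample entire cross-sections (rows) to preserve spatial correlation
boot = [bp(res[rng.integers(0, T, T), :]) for _ in range(999)]
print("BP statistic:", round(stat, 2),
      " bootstrap 95% critical value:", round(np.quantile(boot, 0.95), 2))
```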

9.
Interpolation and backdating with a large information set
Existing methods for data interpolation or backdating are either univariate or based on a very limited number of series, due to data and computing constraints that were binding until the recent past. Nowadays large datasets are readily available, and models with hundreds of parameters are easily estimated. We model these large datasets with a factor model, and develop an interpolation method that exploits the estimated factors as an efficient summary of all available information. The method is compared with existing standard approaches from a theoretical point of view, by means of Monte Carlo simulations, and also when applied to actual macroeconomic series. The results indicate that our method is rather robust to model misspecification, although traditional multivariate methods also work well while univariate approaches are systematically outperformed. When interpolated series are subsequently used in econometric analyses, biases can emerge, but they are smaller with multivariate approaches, including factor-based ones.
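A simplified sketch of the factor-based interpolation idea: estimate factors by PCA from a fully observed panel and fill gaps in a target series with fitted values from a regression on those factors. This compresses the paper's method considerably; the data are simulated.

```python
# PCA-factor interpolation of a partially missing series.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
T, N = 120, 40
f = rng.standard_normal((T, 2))
panel = f @ rng.standard_normal((2, N)) + 0.3 * rng.standard_normal((T, N))
target = f @ np.array([1.0, -0.5]) + 0.3 * rng.standard_normal(T)

miss = rng.random(T) < 0.25                     # 25% of the target is missing
factors = PCA(n_components=2).fit_transform(panel)

reg = LinearRegression().fit(factors[~miss], target[~miss])
filled = target.copy()
filled[miss] = reg.predict(factors[miss])
print("RMSE on interpolated points:",
      round(np.sqrt(np.mean((filled[miss] - target[miss]) ** 2)), 3))
```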

10.
Time series data are often subject to statistical adjustments needed to increase accuracy, replace missing values and/or facilitate data analysis. The most common adjustments made to original observations are signal extraction (e.g. smoothing), benchmarking, interpolation and extrapolation. In this article, we present a general dynamic stochastic regression model, from which most of these adjustments can be performed, and prove that the resulting generalized least squares estimator is minimum-variance linear unbiased. We extend current methods to include those cases where the signal follows a mixed model (deterministic and stochastic components) and the errors are autocorrelated and heteroscedastic.

11.
On the selection of forecasting models
It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean squared error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We show that under suitable conditions the IC method will be consistent for the best approximating model among the candidate models. In contrast, under standard assumptions the SOOS method, whether based on recursive or rolling regressions, will select overparameterized models with positive probability, resulting in excessive finite-sample PMSEs.
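The small simulation below contrasts the two selection rules for choosing an AR order when the true process is AR(1): BIC as the information criterion, and recursive one-step SOOS forecasts ranked by PMSE. Sample sizes and the candidate order range are illustrative.

```python
# BIC vs. simulated out-of-sample PMSE for AR order selection.
import numpy as np

rng = np.random.default_rng(5)

def ar_design(x, p):
    Z = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    return Z, x[p:]

def fit_sse(x, p):
    Z, y = ar_design(x, p)
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b, ((y - Z @ b) ** 2).sum()

picks_ic, picks_oos = [], []
for _ in range(200):
    x = np.zeros(220)
    for t in range(1, 220):
        x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    # Information criterion: BIC over orders 1..6
    bics = []
    for p in range(1, 7):
        _, sse = fit_sse(x, p)
        n = len(x) - p
        bics.append(n * np.log(sse / n) + p * np.log(n))
    picks_ic.append(np.argmin(bics) + 1)
    # SOOS: recursive one-step forecasts over the last 60 observations
    pmse = []
    for p in range(1, 7):
        errs = []
        for s in range(160, 220):
            b, _ = fit_sse(x[:s], p)
            errs.append(x[s] - b @ x[s - 1: s - 1 - p: -1])
        pmse.append(np.mean(np.square(errs)))
    picks_oos.append(np.argmin(pmse) + 1)

print("share picking the true p=1 | BIC:", np.mean(np.array(picks_ic) == 1),
      " SOOS:", np.mean(np.array(picks_oos) == 1))
```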

12.
Compatibility testing determines whether two series, say a sub-annual and an annual series, both of which are subject to sampling errors, can be considered suitable for benchmarking. We derive statistical tests and discuss the issues with their implementation. The results are illustrated using the artificial series from Denton (1971) and two empirical examples. A practical way of implementing the tests is also presented.

13.
The performance of a portfolio model can be improved by introducing stock prediction based on machine learning methods. However, the prediction error is inevitable, which may bring losses to investors. To limit the losses, a common strategy is diversification, which involves buying low-correlation stocks and spreading the funds across different assets. In this paper, a diversified portfolio selection method based on stock prediction is proposed, which includes two stages. To be specific, the purpose of the first stage is to select diversified stocks with high predicted returns, where the returns are predicted by machine learning methods, i.e. random forest (RF), support vector regression (SVR), long short-term memory networks (LSTM), extreme learning machine (ELM) and back propagation neural network (BPNN), and the diversification level is measured by the Pearson correlation coefficient. In the second stage, the predictive results are incorporated into a modified mean–variance (MMV) model to determine the proportion of each asset. Using China Securities 100 Index component stocks as the study sample, the empirical results demonstrate that the RF+MMV model achieves better results than similar counterparts and the market index in terms of return and return–risk metrics.
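A minimal sketch of the two-stage procedure on simulated returns: random-forest return predictions per stock, a greedy low-correlation screen, and then mean–variance weights. The simple w ∝ Σ⁻¹μ rule with negative weights clipped stands in for the paper's modified mean–variance model; all thresholds are assumptions.

```python
# Two-stage portfolio sketch: RF predictions -> correlation screen -> weights.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
T, N, lags = 300, 20, 5
rets = np.zeros((T, N))
for t in range(1, T):                      # mildly autocorrelated toy returns
    rets[t] = 0.2 * rets[t - 1] + 0.02 * rng.standard_normal(N)

# Stage 1: one RF per stock, features = its own last `lags` returns
preds = np.empty(N)
for i in range(N):
    X = np.column_stack([rets[k:T - lags + k, i] for k in range(lags)])
    y = rets[lags:, i]
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    preds[i] = rf.predict(rets[-lags:, i][None, :])[0]

# Greedy low-correlation screen over the top-predicted stocks
corr = np.corrcoef(rets.T)
chosen = []
for i in np.argsort(-preds):
    if all(abs(corr[i, j]) < 0.5 for j in chosen):
        chosen.append(i)
    if len(chosen) == 8:
        break

# Stage 2: mean-variance weights on the screened set (negatives clipped)
mu, Sigma = preds[chosen], np.cov(rets[:, chosen].T)
w = np.linalg.solve(Sigma, mu).clip(min=0)
w = w / w.sum() if w.sum() > 0 else np.full(len(mu), 1 / len(mu))
print("chosen stocks:", chosen)
print("weights      :", w.round(3))
```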

14.
Journal of Econometrics, 2003, 117(1): 123-150
This paper derives several Lagrange multiplier (LM) tests for the panel data regression model with spatial error correlation. These tests draw upon two strands of earlier work. The first is the LM tests for the spatial error correlation model discussed in Anselin (Spatial Econometrics: Methods and Models, Kluwer Academic Publishers, Dordrecht; Rao's score test in spatial econometrics, J. Statist. Plann. Inference 97 (2001) 113) and Anselin et al. (Regional Sci. Urban Econom. 26 (1996) 77), and the second is the LM tests for the error component panel data model discussed in Breusch and Pagan (Rev. Econom. Stud. 47 (1980) 239) and Baltagi et al. (J. Econometrics 54 (1992) 95). The idea is to allow for both spatial error correlation as well as random region effects in the panel data regression model and to test for their joint significance. Additionally, this paper derives conditional LM tests, which test for random regional effects given the presence of spatial error correlation, as well as for spatial error correlation given the presence of random regional effects. These conditional LM tests are an alternative to the one-directional LM tests that test for random regional effects ignoring the presence of spatial error correlation, or for spatial error correlation ignoring the presence of random regional effects. We argue that these joint and conditional LM tests guard against possible misspecification. Extensive Monte Carlo experiments are conducted to study the performance of these LM tests as well as the corresponding likelihood ratio tests.

15.
We use a broad-range set of inflation models and pseudo out-of-sample forecasts to assess their predictive ability among 14 emerging market economies (EMEs) at different horizons (1–12 quarters ahead) with quarterly data over the period 1980Q1-2016Q4. We find, in general, that a simple arithmetic average of the current and three previous observations (the RW-AO model) consistently outperforms its standard competitors—based on the root mean squared prediction error (RMSPE) and on the accuracy in predicting the direction of change. These include conventional models based on domestic factors, existing open-economy Phillips curve-based specifications, factor-augmented models, and time-varying parameter models. Often, the RMSPE and directional accuracy gains of the RW-AO model are shown to be statistically significant. Our results are robust to forecast combinations, intercept corrections, alternative transformations of the target variable, different lag structures, and additional tests of (conditional) predictability. We argue that the RW-AO model is successful among EMEs because it is a straightforward method to downweight later data, which is a useful strategy when there are unknown structural breaks and model misspecification.
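The RW-AO benchmark itself is a one-liner; the sketch below states it in code on an illustrative series.

```python
# RW-AO: the forecast is the mean of the last four observations.
import numpy as np

def rw_ao_forecast(series):
    """Forecast (for any horizon) = average of the current and three previous observations."""
    return float(np.mean(series[-4:]))

infl = [3.1, 2.8, 3.4, 3.0, 2.7, 2.9]            # illustrative quarterly inflation
print("RW-AO forecast:", rw_ao_forecast(infl))   # (3.4+3.0+2.7+2.9)/4 = 3.0
```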

16.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
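A toy version of the core BayesMAR idea, under stated simplifications: an AR(2) with a Laplace working likelihood (median regression) whose coefficients are sampled by random-walk Metropolis, without the paper's treatment of order uncertainty or model averaging.

```python
# AR(2) with Laplace errors estimated by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(8)
T = 300
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.3 + 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.laplace(0, 0.5)

Z = np.column_stack([np.ones(T - 2), y[1:-1], y[:-2]])
target = y[2:]

def log_post(theta, b_scale=0.5):
    # Laplace working likelihood at the median; flat priors on coefficients
    resid = target - Z @ theta
    return -np.sum(np.abs(resid)) / b_scale

theta, lp = np.zeros(3), log_post(np.zeros(3))
draws = []
for it in range(5000):
    prop = theta + 0.05 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 2000:                               # discard burn-in
        draws.append(theta)

post_mean = np.mean(draws, axis=0)
print("posterior mean (intercept, phi1, phi2):", post_mean.round(2))
print("one-step median forecast:", round(post_mean @ [1.0, y[-1], y[-2]], 2))
```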

17.
18.
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns by taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and can lead to some well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues, we first consider mortality modeling in a Bayesian negative-binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecification. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.

19.
Forecasting the real estate market is an important part of studying the Chinese economy. Most existing methods have strict requirements on input variables and are complex in parameter estimation. To obtain better prediction results, a modified Holt's exponential smoothing (MHES) method is proposed to predict housing prices using historical data. Unlike traditional exponential smoothing models, MHES sets different weights on historical data, and the smoothing parameters depend on the sample size. Meanwhile, the proposed MHES incorporates the whale optimization algorithm (WOA) to obtain the optimal parameters. Housing price data from Kunming, Changchun, Xuzhou and Handan were used to test the performance of the model. The results for the four cities indicate that the proposed method has a smaller prediction error and shorter computation time than other traditional models. Therefore, WOA-MHES can be applied efficiently to housing price forecasting and can be a reliable tool for market investors and policy makers.
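A hedged sketch of the smoothing-plus-optimizer idea: standard Holt's linear exponential smoothing with its two parameters tuned by a global optimizer. SciPy's differential evolution stands in here for the whale optimization algorithm, the paper's sample-size-dependent weighting is not reproduced, and the price series is invented.

```python
# Holt's linear smoothing with parameters chosen by a global optimizer.
import numpy as np
from scipy.optimize import differential_evolution

prices = np.array([8200, 8350, 8410, 8600, 8750, 8900, 9120, 9300], float)

def holt_sse(params, y):
    alpha, beta = params
    level, trend = y[0], y[1] - y[0]
    sse = 0.0
    for t in range(1, len(y)):
        pred = level + trend                       # one-step-ahead forecast
        sse += (y[t] - pred) ** 2
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return sse

res = differential_evolution(holt_sse, bounds=[(0, 1), (0, 1)],
                             args=(prices,), seed=0)
alpha, beta = res.x
print("optimal (alpha, beta):", res.x.round(3))

# Re-run the recursion with the tuned parameters for the next-period forecast
level, trend = prices[0], prices[1] - prices[0]
for t in range(1, len(prices)):
    new_level = alpha * prices[t] + (1 - alpha) * (level + trend)
    trend = beta * (new_level - level) + (1 - beta) * trend
    level = new_level
print("next-period forecast:", round(level + trend, 1))
```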

20.
This paper deals with models for the duration of an event that are misspecified by the neglect of random multiplicative heterogeneity in the hazard function. This type of misspecification has been widely discussed in the literature [e.g., Heckman and Singer (1982), Lancaster and Nickell (1980)], but no study of its effect on maximum likelihood estimators has been given. This paper aims to provide such a study with particular reference to the Weibull regression model which is by far the most frequently used parametric model [e.g., Heckman and Borjas (1980), Lancaster (1979)]. In this paper we define generalised errors and residuals in the sense of Cox and Snell (1968, 1971) and show how their use materially simplifies the analysis of both true and misspecified duration models. We show that multiplicative heterogeneity in the hazard of the Weibull model has two errors-in-variables interpretations. We give the exact asymptotic inconsistency of M.L. estimation in the Weibull model and give a general expression for the inconsistency of M.L. estimators due to neglected heterogeneity for any duration model to O(σ²), where σ² is the variance of the error term. We also discuss the information matrix test for neglected heterogeneity in duration models and consider its behaviour when σ² > 0.
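The bias from neglected heterogeneity is easy to reproduce numerically. In the sketch below, durations come from a Weibull hazard with multiplicative gamma frailty, and a Weibull model ignoring the frailty is fitted by maximum likelihood; the estimated shape parameter is pulled below its true value, mimicking spurious negative duration dependence. All settings are illustrative.

```python
# Weibull ML under neglected gamma heterogeneity: attenuated shape estimate.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
n, alpha_true = 20000, 1.5
v = rng.gamma(shape=2.0, scale=0.5, size=n)      # frailty, mean 1, var 0.5
u = rng.random(n)
t = (-np.log(u) / v) ** (1.0 / alpha_true)       # inverse survival, lambda = 1

shape_hat, _, _ = weibull_min.fit(t, floc=0)     # ML ignoring heterogeneity
print("true shape :", alpha_true)
print("ML estimate:", round(shape_hat, 3))       # noticeably below 1.5
```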

