Similar Articles
1.
It is shown empirically that mixed autoregressive moving average regression models with generalized autoregressive conditional heteroskedasticity (Reg-ARMA-GARCH models) can have multimodality in the likelihood that is caused by a dummy variable in the conditional mean. Maximum likelihood estimates at the local and global modes are investigated and turn out to be qualitatively different, leading to different model-based forecast intervals. In the simpler GARCH(p,q) regression model, we derive analytical conditions for bimodality of the corresponding likelihood. In that case, the likelihood is symmetrical around a local minimum. We propose a solution to avoid this bimodality.
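A minimal numpy sketch of the kind of diagnostic this abstract suggests: profile a Gaussian GARCH(1,1) regression log-likelihood over the dummy-variable coefficient and report any local maxima. The data, the fixed GARCH parameters, and the function name are all hypothetical; whether a second mode actually appears depends on the sample.

```python
import numpy as np

def garch_reg_loglik(y, X, beta, omega, alpha, phi):
    """Gaussian log-likelihood of a linear regression with GARCH(1,1) errors."""
    eps = y - X @ beta                        # regression residuals
    h = np.empty_like(eps)
    h[0] = np.var(eps)                        # initial conditional variance
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + phi * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + eps ** 2 / h)

# hypothetical sample: constant plus a one-period dummy in the conditional mean
rng = np.random.default_rng(0)
n = 300
d = np.zeros(n); d[150] = 1.0
X = np.column_stack([np.ones(n), d])
y = rng.standard_normal(n) + 5.0 * d

# profile the likelihood over the dummy coefficient with the other parameters fixed
grid = np.linspace(-2.0, 8.0, 201)
prof = np.array([garch_reg_loglik(y, X, np.array([0.0, b]), 0.1, 0.1, 0.8) for b in grid])
peaks = (prof[1:-1] > prof[:-2]) & (prof[1:-1] > prof[2:])
print("local maxima of the profile at dummy coefficients:", grid[1:-1][peaks])
```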

2.
When a dependent variable y is related to present and past values of an exogenous variable x in a dynamic regression (distributed lag) model, and when x must be forecast in order to forecast y, necessary and sufficient conditions are derived in order for optimal forecasts of y to possess lower mean square error as a result of including x in the model, relative to forecasting y solely from its own past. The contribution to this forecast MSE reduction of non-invertibility in the lag distribution is assessed. Examples from econometrics and engineering are provided to illustrate the results.
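A toy simulation, with a hypothetical AR(1) driver and lag weights rather than the paper's analytical conditions, contrasting the one-step MSE of forecasting y from its own past with a model that also includes lagged x:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
for t in range(1, n):                              # AR(1) exogenous driver
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
y = np.zeros(n)
y[1:] = 1.0 * x[1:] + 0.5 * x[:-1] + rng.standard_normal(n - 1)  # distributed-lag DGP

# compare predictors dated t for the target y_{t+1}
target = y[2:]
Za = np.column_stack([np.ones(n - 2), y[1:-1], y[:-2]])          # y's own past only
Zb = np.column_stack([Za, x[1:-1], x[:-2]])                      # plus lags of x
for name, Z in [("own past only", Za), ("including x", Zb)]:
    b, *_ = np.linalg.lstsq(Z, target, rcond=None)
    print(name, "MSE:", round(float(np.mean((target - Z @ b) ** 2)), 3))
```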

3.
We introduce a new forecasting methodology, referred to as adaptive learning forecasting, that allows for both forecast averaging and forecast error learning. We analyze its theoretical properties and demonstrate that it provides a priori MSE improvements under certain conditions. The learning rate based on past forecast errors is shown to be non-linear. This methodology is of wide applicability and can provide MSE improvements even for the simplest benchmark models. We illustrate the method’s application using data on agricultural prices for several agricultural products, as well as on real GDP growth for several of the corresponding countries. The time series of agricultural prices are short and show an irregular cyclicality that can be linked to economic performance and productivity, and we consider a variety of forecasting models, both univariate and bivariate, that are linked to output and productivity. Our results support both the efficacy of the new method and the forecastability of agricultural prices.
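A loose illustration of the two ingredients named in the abstract, forecast averaging plus learning from past forecast errors. The recursion, the learning rate gamma, and the two benchmark forecasts are placeholders, not the authors' exact scheme.

```python
import numpy as np

def adaptive_learning_forecast(y, f1, f2, gamma=0.2):
    """Average two model forecasts, then nudge the combination by a fixed
    fraction (gamma) of the most recent combined forecast error."""
    fc = np.zeros_like(y)
    err = 0.0
    for t in range(len(y)):
        fc[t] = 0.5 * (f1[t] + f2[t]) + gamma * err   # average + learned correction
        err = y[t] - fc[t]                            # update the forecast error
    return fc

# hypothetical use with two naive forecasts of a random-walk series y
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(100))
f1 = np.concatenate([[0.0], y[:-1]])                                  # random-walk forecast
f2 = np.concatenate([[0.0], np.cumsum(y)[:-1] / np.arange(1, len(y))])  # expanding mean
print("MSE combined:", round(float(np.mean((y - adaptive_learning_forecast(y, f1, f2)) ** 2)), 3))
```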

4.
王继顺  王传斌 《物流科技》2010,33(10):37-41
An adaptive filtering forecasting method is applied to cargo throughput data for the port of Lianyungang over the first several months of 2008 to build a port throughput forecasting model. Excel macro buttons implemented with VBA run the iterations interactively under the criterion of minimizing the forecast error variance, and the weights corresponding to the iteration with the smallest forecast error variance are taken as the optimal weights for prediction, yielding forecasts of relatively high accuracy.
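The abstract implements the method in Excel/VBA; the sketch below shows the same textbook adaptive-filtering recursion in Python. The throughput figures and the learning constant k are hypothetical, and the sketch simply runs a fixed number of passes rather than the iteration-selection rule described above.

```python
import numpy as np

def adaptive_filter_forecast(y, n_weights=3, k=0.05, n_iter=200):
    """Adaptive-filtering forecast: weights on the last n_weights observations
    are repeatedly updated by the rule w <- w + 2*k*e*x."""
    y = np.asarray(y, dtype=float)
    scale = y.max()                        # rescale so the update step stays stable
    z = y / scale
    w = np.full(n_weights, 1.0 / n_weights)
    for _ in range(n_iter):
        for t in range(n_weights, len(z)):
            x = z[t - n_weights:t][::-1]   # most recent observation first
            e = z[t] - w @ x               # one-step forecast error
            w = w + 2 * k * e * x          # weight update
    next_forecast = (w @ z[-n_weights:][::-1]) * scale
    return w, next_forecast

# hypothetical monthly throughput figures (10,000-ton units)
throughput = [720, 735, 748, 760, 771, 785, 790]
weights, forecast = adaptive_filter_forecast(throughput)
print(weights.round(3), round(forecast, 1))
```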

5.
We consider the implications for forecast accuracy of imposing unit roots and cointegrating restrictions in linear systems of I(1) variables in levels, differences, and cointegrated combinations. Asymptotic formulae are obtained for multi-step forecast error variances for each representation. Alternative measures of forecast accuracy are discussed. Finite sample behaviour in a bivariate model is studied by Monte Carlo using control variables. We also analyse the interaction between unit roots and cointegrating restrictions and intercepts in the DGP. Some of the issues are illustrated with an empirical example of forecasting the demand for M1 in the UK.

6.
This article discusses the composition and characteristics of power system load and, after comparing the strengths and weaknesses of commonly used forecasting methods, builds a medium- and long-term load forecasting model that combines grey prediction with regression. The forecasting task is divided into two parts: grey prediction is used to forecast the related influencing factors, and regression is then used to forecast the load. The model exploits the advantages of grey prediction, which requires little load data, makes no assumptions about the distribution or trend of the data, is easy to compute and easy to check, together with the ability of regression to account for the many factors affecting the load; the model's parameter estimation techniques are well established and the forecasting procedure is simple.
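A compact sketch of the GM(1,1) grey model that such a hybrid scheme would use to forecast the related influencing factors; the forecast factors would then feed a regression of load on those factors. The sample factor series is hypothetical.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """GM(1,1) grey forecasting model: fit dx1/dt + a*x1 = b to the accumulated
    series x1 = cumsum(x0), then map fitted x1 back to the original series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background (mean-generated) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_hat = np.concatenate([[x0[0]], x1_hat])
    return np.diff(x1_hat)[-n_ahead:]               # forecasts of the original series

# hypothetical annual values of a load-related factor (e.g., a GDP index)
factor = [100, 108, 118, 127, 139, 151]
print(gm11_forecast(factor, n_ahead=2).round(1))
```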

7.
The ambiguous return pattern for the PEGR strategy (the ratio of the stock's price/earnings to its estimated earnings growth rate) has been documented in the literature on US stock markets. While stock prices and earnings per share (EPS) are objective data, the earnings growth rate is estimated by analysts, and their estimation methods partly explain the vague PEGR return pattern. The purpose of this study is not to deny or substitute analysts' estimates but rather to provide a simple and popular method, a log-linear regression model, to forecast the earnings growth rate (G), and to examine whether a typical PEGR effect, analogous to the PER (price/earnings ratio) or PBR (price/book ratio) effect, exists under our alternative estimation method. Our evidence indeed shows that returns on the lowest-PEGR portfolio not only dominate those of all higher-PEGR portfolios but also beat the market under stochastic dominance (SD) analysis, which is consistent with our prediction. Our results imply, at least, that using the log-linear regression model to construct PEGR-sorted portfolios can benefit investors and that the model is also a good choice for analysts in their forecasting.
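A minimal sketch of the kind of log-linear growth estimate described: fit ln(EPS) on a time trend, convert the slope into a percentage growth rate G, and form the PEGR. The EPS figures are hypothetical and the paper's exact regression specification may differ.

```python
import numpy as np

def pegr_from_log_linear(eps_history, price, eps_current):
    """Estimate the earnings growth rate G from a log-linear trend
    ln(EPS_t) = a + g*t, then form PEGR = (P/E) / (G in percent)."""
    eps_history = np.asarray(eps_history, dtype=float)
    t = np.arange(len(eps_history))
    g = np.polyfit(t, np.log(eps_history), 1)[0]    # slope of the log trend
    growth_pct = (np.exp(g) - 1.0) * 100.0
    per = price / eps_current
    return per / growth_pct

# hypothetical: five years of EPS, current price 60, current EPS 3.1
print(round(pegr_from_log_linear([2.0, 2.3, 2.5, 2.8, 3.1], 60.0, 3.1), 2))
```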

8.
This article discusses the composition and characteristics of power system load and, after comparing the strengths and weaknesses of commonly used forecasting methods, builds a medium- and long-term load forecasting model that combines grey prediction with regression. The forecasting task is divided into two parts: grey prediction is used to forecast the related influencing factors, and regression is then used to forecast the load. The model exploits the advantages of grey prediction, which requires little load data, makes no assumptions about the distribution or trend of the data, is easy to compute and easy to check, together with the ability of regression to account for the many factors affecting the load; the model's parameter estimation techniques are well established and the forecasting procedure is simple.

9.
Recently, Patton and Timmermann (2012) proposed a more powerful kind of forecast efficiency regression at multiple horizons, and showed that it provides evidence against the efficiency of the Fed’s Greenbook forecasts. I use their forecast efficiency evaluation to propose a method for adjusting the Greenbook forecasts. Using this method in a real-time out-of-sample forecasting exercise, I find that it provides modest improvements in the accuracies of the forecasts for the GDP deflator and CPI, but not for other variables. The improvements are statistically significant in some cases, with magnitudes of up to 18% in root mean square prediction error.

10.
There are three approaches for the estimation of the distribution function D(r) of distance to the nearest neighbour of a stationary point process: the border method, the Hanisch method and the Kaplan-Meier approach. The corresponding estimators and some modifications are compared with respect to bias and mean squared error (mse). Simulations for Poisson, cluster and hard-core processes show that the classical border estimator has good properties; still better is the Hanisch estimator. Typically, mse depends on r, having small values for small and large r and a maximum in between. The mse is not reduced if the exact intensity λ (if known) or intensity estimators from larger windows are built in the estimators of D(r); in contrast, the intensity estimator should have the same precision as that of λ D(r). In the case of replicated estimation from more than one window the best way of pooling the subwindow estimates is averaging by weights which are proportional to squared point numbers.
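A sketch of the classical border (minus-sampling) estimator of D(r) for a rectangular observation window; the Hanisch and Kaplan-Meier estimators weight the points differently. The simulated binomial point pattern and the function name are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def border_estimator_D(points, r, window=(0.0, 1.0, 0.0, 1.0)):
    """Border estimator of the nearest-neighbour distance distribution D(r):
    only points at least r away from the window boundary enter the estimate."""
    xmin, xmax, ymin, ymax = window
    pts = np.asarray(points, dtype=float)
    nn = cKDTree(pts).query(pts, k=2)[0][:, 1]        # nearest-neighbour distances
    dist_to_border = np.minimum.reduce([pts[:, 0] - xmin, xmax - pts[:, 0],
                                        pts[:, 1] - ymin, ymax - pts[:, 1]])
    eligible = dist_to_border >= r
    return np.mean(nn[eligible] <= r) if eligible.any() else np.nan

# hypothetical Poisson-like pattern: 200 uniform points in the unit square
rng = np.random.default_rng(3)
pts = rng.uniform(size=(200, 2))
print([round(border_estimator_D(pts, r), 3) for r in (0.02, 0.05, 0.1)])
```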

11.
A number of recent studies in the economics literature have focused on the usefulness of factor models in the context of prediction using “big data” (see Bai and Ng, 2008; Dufour and Stevanovic, 2010; Forni, Hallin, Lippi, & Reichlin, 2000; Forni et al., 2005; Kim and Swanson, 2014a; Stock and Watson, 2002b, 2006, 2012, and the references cited therein). We add to this literature by analyzing whether “big data” are useful for modelling low frequency macroeconomic variables, such as unemployment, inflation and GDP. In particular, we analyze the predictive benefits associated with the use of principal component analysis (PCA), independent component analysis (ICA), and sparse principal component analysis (SPCA). We also evaluate machine learning, variable selection and shrinkage methods, including bagging, boosting, ridge regression, least angle regression, the elastic net, and the non-negative garotte. Our approach is to carry out a forecasting “horse-race” using prediction models that are constructed based on a variety of model specification approaches, factor estimation methods, and data windowing methods, in the context of predicting 11 macroeconomic variables that are relevant to monetary policy assessment. In many instances, we find that various of our benchmark models, including autoregressive (AR) models, AR models with exogenous variables, and (Bayesian) model averaging, do not dominate specifications based on factor-type dimension reduction combined with various machine learning, variable selection, and shrinkage methods (called “combination” models). We find that forecast combination methods are mean square forecast error (MSFE) “best” for only three variables out of 11 for a forecast horizon of h=1, and for four variables when h=3 or 12. In addition, non-PCA type factor estimation methods yield MSFE-best predictions for nine variables out of 11 for h=1, although PCA dominates at longer horizons. Interestingly, we also find evidence of the usefulness of combination models for approximately half of our variables when h>1. Most importantly, we present strong new evidence of the usefulness of factor-based dimension reduction when utilizing “big data” for macroeconometric forecasting.
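A minimal numpy sketch of the basic PCA diffusion-index forecast that the factor methods above build on: extract principal-component factors from a standardized panel and regress the target one step ahead on the factors and its own lag. The synthetic panel, the number of factors, and the function name are placeholders, not the authors' dataset or horse-race design.

```python
import numpy as np

def pca_factor_forecast(X, y, n_factors=3):
    """Diffusion-index forecast: PCA factors from a standardized panel X,
    then a predictive regression of y_{t+1} on the factors and y_t."""
    Xs = (X - X.mean(0)) / X.std(0)
    U, S, _ = np.linalg.svd(Xs, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]                      # estimated factors
    Z = np.column_stack([np.ones(len(y) - 1), F[:-1], y[:-1]])  # predictors dated t
    b, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)             # target dated t+1
    z_T = np.concatenate([[1.0], F[-1], [y[-1]]])
    return z_T @ b                                            # one-step-ahead forecast

# hypothetical "big data" panel: 150 periods, 60 predictors, one target series
rng = np.random.default_rng(4)
X = rng.standard_normal((150, 60))
y = 0.5 * X[:, :5].mean(1) + 0.1 * rng.standard_normal(150)
print(round(float(pca_factor_forecast(X, y)), 3))
```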

12.
We propose a simple estimator for nonlinear method of moment models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances and the measurement errors are independent of the latent variables and are independent of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case in many economic data sets. A simulation study demonstrates good finite sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.

13.
The main objective of this paper is to model the dynamic relationship between global averaged measures of Total Radiative Forcing (RTF) and surface temperature, measured by the Global Temperature Anomaly (GTA), and then use this model to forecast the GTA. The analysis utilizes the Data-Based Mechanistic (DBM) approach to the modelling and forecasting where, in this application, the unobserved component model includes a novel hybrid Box-Jenkins stochastic model in which the relationship between RTF and GTA is based on a continuous time transfer function (differential equation) model. This model then provides the basis for short term, inter-annual to decadal, forecasting of the GTA, using a transfer function form of the Kalman Filter, which produces a good prediction of the ‘pause’ or ‘levelling’ in the temperature rise over the period 2000 to 2011. This derives in part from the effects of a quasi-periodic component that is modelled and forecast by a Dynamic Harmonic Regression (DHR) relationship and is shown to be correlated with the Atlantic Multidecadal Oscillation (AMO) index.

14.
Feedback orientation reflects an individual difference in one's receptivity to feedback. We present the results of a meta-analysis of the feedback orientation literature. Based on k = 46 independent samples, representing n = 12,478 workers, meta-analytic results suggest that feedback orientation is positively related to learning goal orientation (rc = 0.39), job satisfaction (rc = 0.33), work performance (rc = 0.35), and feedback seeking (rc = 0.43). Meta-analytic regression and dominance analysis were used to tease apart how related informal feedback constructs (i.e., feedback seeking, feedback environment, and feedback orientation) aid in the prediction of outcomes, above and beyond two established predictors of job attitudes and work performance: role clarity and leader-member exchange. We also present an interactive exploratory data analysis tool to aid in developing future research questions regarding the connection between informal feedback constructs and work outcomes.

15.
When heteroscedasticity of the variances of disturbances in a regression model is suspected, we perform a preliminary test for homoscedasticity prior to estimation of regression coefficients. According to the result of the pre-test, we use either the ordinary least squares estimator or the two-stage Aitken estimator (2SAE). In this paper, using orthonormal regressors, we derive the mean square error (MSE) of the pre-test estimator and show that the 2SAE is inadmissible when the MSE is used as a criterion. Further, we seek the optimal critical value of the pre-test in the sense of minimizing the average relative risk which is based on the MSE.
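A loose sketch of the pre-test idea, with a Goldfeld-Quandt style split-sample test standing in for whatever homoscedasticity test the paper analyses; the grouping of observations, the significance level, and the data are all hypothetical.

```python
import numpy as np
from scipy import stats

def pretest_estimator(X, y, alpha=0.05):
    """Pre-test estimator sketch: if a split-sample F test does not reject
    homoscedasticity, return OLS; otherwise return a two-stage Aitken
    (feasible GLS) estimator using group variances."""
    n, k = X.shape
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b_ols
    half = n // 2
    s1, s2 = resid[:half].var(ddof=k), resid[half:].var(ddof=k)
    F = max(s1, s2) / min(s1, s2)
    p = 2 * stats.f.sf(F, half - k, n - half - k)            # two-sided F test
    if p > alpha:                                            # homoscedasticity not rejected
        return b_ols
    w = np.where(np.arange(n) < half, 1.0 / s1, 1.0 / s2)    # inverse-variance weights
    Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
    b_2sae, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return b_2sae

# hypothetical data whose error standard deviation doubles halfway through the sample
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(120), rng.standard_normal(120)])
y = X @ np.array([1.0, 2.0]) + np.r_[rng.normal(0, 1, 60), rng.normal(0, 2, 60)]
print(pretest_estimator(X, y).round(3))
```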

16.
Journal of Econometrics, 1999, 88(1): 151-191
A parametric test for r versus r−1 cointegrating vectors is developed. The test exploits the fact that in a system of n I(1) variates the rth principal component is I(0) under the null but I(1) under the alternative. The statistic is parametric, is constructed using simple regression methods applied to principal components, follows a standard χ2 distribution and does not require normalisation restrictions on the cointegrating vectors. A Monte Carlo investigation indicates that providing the lag length in the pre-whitening procedure is chosen by means of nested significance tests, the test has good size and power properties in small samples.
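A loose illustration of the intuition only, not the paper's parametric chi-squared statistic: in an I(1) system with r cointegrating vectors, the principal components associated with the smallest eigenvalues behave like stationary (cointegrating) combinations, which an off-the-shelf ADF test can pick up. The bivariate example and the ordering convention are assumptions for this sketch.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def smallest_pc_unit_root_check(data, r):
    """Extract the r-th smallest-variance principal component of the system
    and run an ADF test on it; with r cointegrating vectors it should look I(0)."""
    Z = data - data.mean(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)   # rows ordered by decreasing variance
    pc = Z @ Vt[-r]                                    # r-th smallest-variance component
    return adfuller(pc)[:2]                            # (ADF statistic, p-value)

# hypothetical bivariate cointegrated system: x is a random walk, y = x + noise
rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(400))
y = x + rng.standard_normal(400)
print(smallest_pc_unit_root_check(np.column_stack([x, y]), r=1))
```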

17.
Results from cointegration tests clearly suggest that TFP and the relative price of investment (RPI) are not cointegrated. Evidence on the alternative possibility that they may nonetheless contain a common I(1) component generating long-horizon co-variation between them crucially depends on the fact that (i) structural breaks are, or are not allowed for, and (ii) the precise nature and timing of such breaks. Not allowing for breaks, evidence points towards the presence of a common component inducing positive long-horizon covariation, which is compatible with the notion that the technology transforming consumption goods into investment goods is non-linear, and the RPI is also impacted upon by neutral shocks. Allowing for breaks, evidence suggests that long-horizon covariation is either nil or negative. Assuming, for illustrative purposes, that the two series contain a common component inducing negative long-horizon covariation, evidence based on structural VARs shows that this common shock (i) plays an important role in macroeconomic fluctuations, explaining sizeable fractions of the forecast error variance of main macro series, and (ii) generates ‘disinflationary booms’, characterized by transitory increases in hours, and decreases in inflation.

18.
Volatility forecasts aim to measure future risk and they are key inputs for financial analysis. In this study, we forecast the realized variance as an observable measure of volatility for several major international stock market indices and accounted for the different predictive information present in jump, continuous, and option-implied variance components. We allowed for volatility spillovers in different stock markets by using a multivariate modeling approach. We used heterogeneous autoregressive (HAR)-type models to obtain the forecasts. Based on an out-of-sample forecast study, we show that: (i) including option-implied variances in the HAR model substantially improves the forecast accuracy, (ii) lasso-based lag selection methods do not outperform the parsimonious day-week-month lag structure of the HAR model, and (iii) cross-market spillover effects embedded in the multivariate HAR model have long-term forecasting power.
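A univariate sketch of the day-week-month HAR regression augmented with an option-implied variance regressor, which is the single-market building block of the multivariate models described above. The RV and implied-variance series are simulated placeholders.

```python
import numpy as np

def har_iv_forecast(rv, iv):
    """HAR-type regression: next-day realized variance on daily, weekly (5-day)
    and monthly (22-day) averages of past RV plus the option-implied variance."""
    rv, iv = np.asarray(rv, float), np.asarray(iv, float)
    rv_w = np.array([rv[max(0, t - 4):t + 1].mean() for t in range(len(rv))])
    rv_m = np.array([rv[max(0, t - 21):t + 1].mean() for t in range(len(rv))])
    X = np.column_stack([np.ones(len(rv)), rv, rv_w, rv_m, iv])[22:-1]  # regressors dated t
    y = rv[23:]                                                         # target dated t+1
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_T = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean(), iv[-1]])
    return x_T @ b                                                      # one-day-ahead forecast

# hypothetical realized-variance and implied-variance series
rng = np.random.default_rng(6)
rv = np.abs(rng.standard_normal(500)) * 0.01
iv = rv * 1.1 + 0.002 * rng.random(500)
print(round(float(har_iv_forecast(rv, iv)), 5))
```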

19.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be done without some error, i.e. misclassification can play a role. This situation is analogous to regression with errors in variables. It is well known that in these situations identification of the parameters is a prominent problem. We will first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. Then we analyze two estimators. The first is a moment estimator involving moments up to the third order, and the second is a maximum likelihood estimator calculated with the help of the EM algorithm. Both estimators are evaluated on the basis of a small Monte Carlo experiment.

20.
The purpose of this study is to investigate the efficacy of combining forecasting models in order to improve earnings per share forecasts. The utility industry is used because regulation causes the accounting procedures of the firms to be more homogeneous than in other industries. Three types of forecasting models which use historical data are compared to the forecasts of the Value Line Investment Survey. It is found that the predictions of the Value Line analysts are more accurate than the predictions of the models which use only historical data. However, the study also shows that forecasts of earnings per share can be improved by combining the predictions of Value Line with the predictions of other models. Specifically, the forecast error is smallest when the Value Line forecast is combined with the forecast of the Brown-Rozeff ARIMA model.
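A small sketch of one standard way to combine an analyst forecast with a time-series model forecast: choose the convex-combination weight that minimizes past squared combination error. The quarterly EPS figures and the two forecast series are hypothetical, not Value Line or Brown-Rozeff data.

```python
import numpy as np

def combine_forecasts(actual, f_analyst, f_model):
    """Combine two forecasts with the MSE-minimizing convex weight on the
    analyst forecast, and report the MSE of each alternative."""
    actual, fa, fm = map(np.asarray, (actual, f_analyst, f_model))
    d = fa - fm
    w = np.clip(np.sum((actual - fm) * d) / np.sum(d ** 2), 0.0, 1.0)  # weight on analyst
    combo = w * fa + (1 - w) * fm
    mse = lambda f: float(np.mean((actual - f) ** 2))
    return round(w, 3), mse(fa), mse(fm), mse(combo)

# hypothetical quarterly EPS, analyst forecasts, and ARIMA-type model forecasts
eps     = [1.10, 1.18, 1.25, 1.20, 1.32, 1.40]
analyst = [1.08, 1.20, 1.22, 1.24, 1.30, 1.38]
arima   = [1.05, 1.15, 1.28, 1.18, 1.35, 1.36]
print(combine_forecasts(eps, analyst, arima))
```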

