Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
This study empirically examines two issues related to forecasting annual accounting earnings. The first issue studied is the improvement in forecasts of annual earnings that can be obtained by including information about dividend payout along with the past earnings series in forecasting models. The second issue deals with the comparative ability of quarterly earnings time series models and annual earnings time series models to predict annual earnings. The results of this study indicate that considerable improvement in predictive ability can be obtained by expanding the information set to include the dividend payout ratio series. The empirical analysis also indicates that time series models developed using annual earnings generate more accurate predictions of annual earnings than do models developed using quarterly earnings.

2.
The wind tunnel test standard model is a general-purpose calibration model used to assess the accuracy of wind tunnel tests and to validate CFD algorithms. This paper reviews the development of standard models abroad, represented by the NATO AGARD series and the French ONERA M series, describes the current status of China's DBM, GBM and HSCM standard model series, and discusses several issues in establishing and improving a wind tunnel test standard model system, with the aim of providing a reference for the development of domestic wind tunnels and test techniques.

3.
The bond default risk premium, measured by the spread between higher and lower grade bond returns, is often estimated with univariate time series procedures and used as an input in financial models. In this paper, time series properties of the historical default risk premium are analyzed and forecasting results from univariate time series models are compared. An autoregressive model with an overreaction component provides the best statistical fit for the bond default risk premium series. A random walk model exhibits the worst fit. The findings are robust over a variety of model specifications and measurement choices. For all forms of the time series process the univariate time series models explain a small percentage of the variation in the default risk premium, raising questions about traditional approaches to estimating the expected default risk premium.
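The kind of model comparison the abstract describes can be illustrated with a toy example: on a simulated mean-reverting series, a fitted AR(1) forecast beats the random-walk forecast in out-of-sample mean squared error. Everything below (the series, the 0.5 autoregressive coefficient, the sample sizes) is an assumption for illustration; it is not the paper's data, nor its overreaction specification.

```python
import random
import statistics

random.seed(42)

# Simulate a mean-reverting "premium" series: y_t = 0.5 * y_{t-1} + e_t
n = 500
y = [0.0]
for _ in range(n - 1):
    y.append(0.5 * y[-1] + random.gauss(0, 1))

# Estimate the AR(1) coefficient by no-intercept OLS on the first half
half = n // 2
x_tr, y_tr = y[: half - 1], y[1:half]          # pairs (y_{t-1}, y_t)
phi = sum(a * b for a, b in zip(x_tr, y_tr)) / sum(a * a for a in x_tr)

# One-step-ahead squared errors over the second half
errs_ar = [(y[t] - phi * y[t - 1]) ** 2 for t in range(half, n)]
errs_rw = [(y[t] - y[t - 1]) ** 2 for t in range(half, n)]   # random walk: forecast = last value

mse_ar = statistics.mean(errs_ar)
mse_rw = statistics.mean(errs_rw)
print(f"AR(1) MSE: {mse_ar:.3f}  random-walk MSE: {mse_rw:.3f}")
```

On mean-reverting data the random walk systematically over-extrapolates the last observation, which is why it fits worst in the abstract's ranking.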

4.
We consider Bayesian inference about the dimensionality in the multivariate reduced rank regression framework, which encompasses several models such as MANOVA, factor analysis and cointegration models for multiple time series. The fractional Bayes approach is used to derive a closed form approximation to the posterior distribution of the dimensionality and some asymptotic properties of the approximation are proved. Finite sample properties are studied by simulation and the method is applied to growth curve data and cointegrated multivariate time series.

5.
This paper gives an overview of the sixteen papers included in this special issue, which cover a wide range of topics. These include a class of tests for correlation, estimation of realized volatility, modeling time series and continuous-time models with long-range dependence, estimation and specification testing of time series models, estimation in a factor model with high-dimensional problems, a finite-sample examination of quasi-maximum likelihood estimation in an autoregressive conditional duration model, and estimation in a dynamic additive quantile model.

6.
We compare alternative forecast pooling methods and 58 forecasts from linear, time‐varying and non‐linear models, using a very large dataset of about 500 macroeconomic variables for the countries in the European Monetary Union. On average, combination methods work well but single non‐linear models can outperform them for several series. The performance of pooled forecasts, and of non‐linear models, improves when focusing on a subset of unstable series, but the gains are minor. Finally, on average over the EMU countries, the pooled forecasts behave well for industrial production growth, unemployment and inflation, but they are often beaten by non‐linear models for each country and variable.

7.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. 
We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. 
The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.

8.
In this paper we introduce a class of tentatively plausible, fixed-coefficient models of money demand and evaluate their forecast performance. When these models are reestimated allowing all coefficients to vary over time, the forecasting performance improves dramatically. Aside from offering insights about improved methods of analyzing time series data, the most promising direct use for point estimates derived from time-varying coefficients is as an aid in calibrating proposed models of the kind discussed here.

9.
This paper uses three classes of univariate time series techniques (ARIMA type models, switching regression models, and state-space/structural time series models) to forecast, on an ex post basis, the downturn in U.S. housing prices starting around 2006. The performance of the techniques is compared within each class and across classes by out-of-sample forecasts for a number of different forecast points prior to and during the downturn. Most forecasting models are able to predict a downturn in future home prices by mid-2006. Some state-space models can predict an impending downturn as early as June 2005. State-space/structural time series models tend to produce the most accurate forecasts, although they are not necessarily the models with the best in-sample fit.

10.
Research on the fluctuation path of China's aggregate price level
李春林  李冬连 《价值工程》2011,30(16):142-144
Using time series analysis, this paper studies the structural break characteristics of the PPI and CPI and builds time series models to characterize the fluctuation path of China's price level (CPI). The results show that the CPI series exhibits structural breaks at three points (July 2003, May 2007 and July 2008), dividing its evolution into four stages. In addition, the structural break points of the PPI generally lag those of the CPI by one to two months, suggesting that CPI increases trigger PPI increases, while the PPI in turn supports further CPI increases. Finally, the volatility of the CPI series is modeled separately in each of the four subsamples: between January 1997 and June 2003 CPI volatility exhibits conditional heteroskedasticity and can be described by a GARCH(1,1) model, while in the other stages it can be described by an AR(1) model.
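The GARCH(1,1) specification named in the abstract can be illustrated with a short simulation of its recursion sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}. The parameter values below are generic assumptions for illustration, not the estimates fitted to the CPI data in the paper.

```python
import random

random.seed(7)
omega, alpha, beta = 0.1, 0.1, 0.8   # assumed values; persistence alpha + beta = 0.9
n = 5000

# Start the variance recursion at the unconditional variance omega / (1 - alpha - beta)
sigma2 = [omega / (1 - alpha - beta)]
e = [random.gauss(0, 1) * sigma2[0] ** 0.5]
for _ in range(n - 1):
    s2 = omega + alpha * e[-1] ** 2 + beta * sigma2[-1]   # GARCH(1,1) recursion
    sigma2.append(s2)
    e.append(random.gauss(0, 1) * s2 ** 0.5)              # shock drawn with time-varying variance

var = sum(x * x for x in e) / n
print(f"unconditional variance target: {omega / (1 - alpha - beta):.2f}, sample: {var:.2f}")
```

The conditional variance reacts to recent squared shocks, producing the volatility clustering that a homoscedastic AR(1) cannot capture; this is the qualitative difference between the two model classes the abstract assigns to different subsamples.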

11.
Predictions of aggregate transport mode split for inter-city trips are derived from a disaggregate model of travel demand. A series of tests are performed to assess the limitations of the prediction methodology for disaggregate models, and it is shown that disaggregate models are capable of predictions across diverse travel situations. The disaggregate model predictions are compared with predictions derived from aggregate models such as are currently used in urban transportation planning, and it is shown that disaggregate models based on much smaller data sets predict better than aggregate models while requiring no more information about the predicted population.

12.
Prudent statistical analysis of correlated data requires accounting for the correlation among the measurements. Specifying a form for the covariance matrix of the data could reduce the high number of parameters of the covariance and increase efficiency of the inferences about the regression parameters. Motivated by the success of ordinary, partial and inverse correlograms in identifying parsimonious models for stationary time series, we introduce generalizations of these plots for nonstationary data. Their roles in detecting heterogeneity and correlation of the data and identifying parsimonious models for the covariance matrix are illuminated using a longitudinal dataset. Decomposition of a covariance matrix into "variance" and "dependence" components provides the necessary ingredients for the proposed graphs. This amounts to replacing a 3-D correlation plot by a pair of 2-D plots, providing complementary information about dependence and heterogeneity. Models identified and fitted using the variance-correlation decomposition of a covariance matrix are not guaranteed to be positive definite, but those using the modified Cholesky decomposition are. Limitations of our graphical diagnostics for general multivariate data where the measurements are not (time-) ordered are discussed.

13.
This paper examines the relationship between dynamic structural econometric models (SEM) and time series (TS) models. It extends the work of others by suggesting a reconciliation of SEM and TS models based on classical linear parameter restrictions in regression models rather than on time series methods. The paper demonstrates that in a number of common economic contexts there exist sets of plausible restrictions on the stochastic properties of the disturbances and on the dynamic adjustment processes in a SEM such that familiar structural models take on the form of univariate TS models. Consequently, it is argued that TS models should not be arbitrarily dismissed as being devoid of economic content.

14.
This study uses an artificial neural network model to forecast quarterly accounting earnings for a sample of 296 corporations trading on the New York stock exchange. The resulting forecast errors are shown to be significantly larger (smaller) than those generated by the parsimonious Brown-Rozeff and Griffin-Watts (Foster) linear time series models, bringing into question the potential usefulness of neural network models in forecasting quarterly accounting earnings. This study confirms the conjecture by Chatfield and Hill et al. that neural network models are context sensitive. In particular, this study shows that neural network models are not necessarily superior to linear time series models even when the data are financial, seasonal and non-linear.
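The Foster benchmark mentioned in the abstract is a seasonal linear model for quarterly earnings, often written as Q_t = Q_{t-4} + phi * (Q_{t-1} - Q_{t-5}) + drift. A minimal sketch of a one-step forecast under that form, with assumed values of phi and drift and made-up earnings figures (none of these come from the study's sample):

```python
def foster_forecast(q, phi=0.4, drift=0.5):
    """One-step-ahead Foster-type forecast from at least 5 quarters of earnings.

    Forecast = same quarter last year, adjusted by a fraction of the most
    recent year-over-year change, plus a drift term. phi and drift are
    assumed illustrative values, not estimates.
    """
    if len(q) < 5:
        raise ValueError("need at least 5 quarters of history")
    return q[-4] + phi * (q[-1] - q[-5]) + drift

# Hypothetical quarterly earnings with a seasonal pattern
quarters = [10.0, 12.0, 9.0, 15.0, 11.0, 13.0, 10.0, 16.0]
print(f"next-quarter forecast: {foster_forecast(quarters):.2f}")
```

Its parsimony (two parameters plus the seasonal lag) is what makes it a demanding benchmark for a heavily parameterized neural network.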

15.
This paper reviews research issues in modeling panels of time series. Examples of this type of data are annually observed macroeconomic indicators for all countries in the world, daily returns on the individual stocks listed in the S&P500, and the sales records of all items in a retail store. A panel of time series concerns the case where the cross‐sectional dimension and the time dimension are large. Often, there is no a priori reason to select a few series or to aggregate the series over the cross‐sectional dimension. The use of, for example, a vector autoregression or other types of multivariate models then becomes cumbersome. Panel models and associated estimation techniques are more useful. Due to the large time dimension, one should however incorporate the time‐series features. And, the models should not have too many parameters to facilitate interpretation. This paper discusses representation, estimation and inference of relevant models and discusses recently proposed modeling approaches that explicitly aim to meet these requirements. The paper concludes with some reflections on the usefulness of large data sets. These concern sample selection issues and the notion that more detail also requires more complex models.

16.
The familiar concept of cointegration enables us to determine whether or not there is a long-run relationship between two integrated time series. However, this may not capture short-run effects such as seasonality. Two series which display different seasonal effects can still be cointegrated. Seasonality may arise independently of the long-run relationship between two time series or, indeed, the long-run relationship may itself be seasonal. The market for recycled ferrous scrap displays these features: the US and UK scrap prices are cointegrated, yet the local markets exhibit different forms of seasonality. The paper addresses the problem of using both cointegrating and seasonal relationships in forecasting time series through the use of periodic transfer function models. We consider the problems of testing for cointegration between series with differing seasonal patterns and develop a periodic transfer function model for the US and UK scrap markets. Forecast comparisons with other time series models suggest that forecasting efficiency may be improved by allowing for periodicity but that such improvement is by no means guaranteed. The correct specification of the periodic component of the model is critical for forecast accuracy.
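The cointegration test the abstract starts from can be sketched as a two-step Engle-Granger-style procedure: regress one price on the other, then run a Dickey-Fuller-type regression on the residuals. This sketch uses simulated series rather than the US/UK scrap prices, ignores the seasonal and periodic structure that is the paper's actual focus, and omits the nonstandard critical values a real test requires; a large negative t-statistic merely suggests cointegration.

```python
import random

random.seed(1)
n = 300
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))            # common stochastic trend (random walk)
y = [2.0 * xi + random.gauss(0, 1) for xi in x]      # cointegrated with x by construction

# Step 1: OLS of y on x (no intercept, for brevity); residuals are the candidate equilibrium error
b = sum(a * c for a, c in zip(x, y)) / sum(a * a for a in x)
u = [c - b * a for a, c in zip(x, y)]

# Step 2: regress delta_u on u_{t-1}; t-statistic on the slope (DF-type, no lags)
du = [u[t] - u[t - 1] for t in range(1, n)]
ul = u[:-1]
g = sum(a * c for a, c in zip(ul, du)) / sum(a * a for a in ul)
resid = [c - g * a for a, c in zip(ul, du)]
s2 = sum(r * r for r in resid) / (len(du) - 1)
t_stat = g / (s2 / sum(a * a for a in ul)) ** 0.5
print(f"slope of y on x: {b:.2f}, DF-type t-statistic: {t_stat:.2f}")
```

Testing between series with differing seasonal patterns, as the paper does, requires periodic extensions of this basic recipe rather than the plain residual-based test shown here.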

17.
Space–time autoregressive (STAR) models, introduced by Cliff and Ord [Spatial autocorrelation (1973) Pioneer, London] are successfully applied in many areas of science, particularly when there is prior information about spatial dependence. These models have significantly fewer parameters than vector autoregressive models, where all information about spatial and time dependence is deduced from the data. A more flexible class of models, generalized STAR models, has been introduced in Borovkova et al. [Proc. 17th Int. Workshop Stat. Model. (2002), Chania, Greece] where the model parameters are allowed to vary per location. This paper establishes strong consistency and asymptotic normality of the least squares estimator in generalized STAR models. These results are obtained under minimal conditions on the sequence of innovations, which are assumed to form a martingale difference array. We investigate the quality of the normal approximation for finite samples by means of a numerical simulation study, and apply a generalized STAR model to a multivariate time series of monthly tea production in west Java, Indonesia.

18.
On compensation for risk aversion and skewness affection in wages
This paper presents extensive empirical testing of the hypothesis that greater post-schooling earnings risk requires higher expected returns. Expanding on this notion, on the basis of utility theory, we predict that workers not only care about risk but also about the skewness in the distribution of the compensation paid: workers exhibit risk aversion and skewness affection. To test these hypotheses, this paper carefully develops various measures of risk and skewness by occupational/educational classification of the worker and finds supportive evidence: for men, wages rise with occupational earnings variance and decrease with skewness, for women only the negative effect of skewness is significant.

19.
Statistical process control (SPC) has evolved beyond its classical applications in manufacturing to monitoring economic and social phenomena. This extension has required the consideration of autocorrelated and possibly non-stationary time series. Less attention has been paid to the possibility that the variance of the process may also change over time. In this paper we use the innovations state space modeling framework to develop conditionally heteroscedastic models. We provide examples to show that the incorrect use of homoscedastic models may lead to erroneous decisions about the nature of the process.
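A minimal sketch of the failure mode the abstract warns about: control limits calibrated under a homoscedastic assumption misbehave when the process variance changes. The simulation below is a generic individuals chart with 3-sigma limits, not the innovations state space framework developed in the paper; all parameter values are assumptions for illustration.

```python
import random

random.seed(3)
n = 400
first = [random.gauss(0, 1) for _ in range(n // 2)]     # regime 1: unit variance
second = [random.gauss(0, 2) for _ in range(n // 2)]    # regime 2: std doubles, mean unchanged

# Calibrate fixed 3-sigma limits on regime 1 only (the homoscedastic assumption)
mean = sum(first) / len(first)
sd = (sum((x - mean) ** 2 for x in first) / (len(first) - 1)) ** 0.5
lo, hi = mean - 3 * sd, mean + 3 * sd

alarms_1 = sum(1 for x in first if not lo <= x <= hi)
alarms_2 = sum(1 for x in second if not lo <= x <= hi)
print(f"alarms in regime 1: {alarms_1}, regime 2: {alarms_2}")
```

The burst of alarms in the second half signals only an unmodeled variance change, not a mean shift, which is exactly the kind of erroneous conclusion a homoscedastic chart invites.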

20.
In an economic context, forecasting models are judged in terms not only of accuracy, but also of profitability. The present paper analyses the counterintuitive relationship between accuracy and profitability in probabilistic (sports) forecasts in relation to betting markets. By making use of theoretical considerations, a simulation model, and real-world datasets from three different sports, we demonstrate the possibility of systematically or randomly generating positive betting returns in the absence of a superior model accuracy. The results have methodological implications for sports forecasting and other domains related to betting markets. Betting returns should not be treated as a valid measure of model accuracy, even though they can be regarded as an adequate measure of profitability. Hence, an improved predictive performance might be achieved by carefully considering the roles of both accuracy and profitability when designing models, or, more specifically, when assessing the in-sample fit of data and evaluating out-of-sample forecasting performances.
