Similar Literature
A total of 20 similar documents were found.
1.
In recent years Statistics Netherlands has published several stochastic population forecasts. The degree of uncertainty of the future population is assessed on the basis of assumptions about the probability distribution of future fertility, mortality and migration. The assumptions on fertility are based on an analysis of historic forecasts of the total fertility rate (TFR), on time‐series models of observations of the TFR, and on expert knowledge. This latter argument‐based approach refers to the TFR distinguished by birth order. In the most recent Dutch forecast the 95% forecast interval of the total fertility rate in 2050 is assumed to range from 1.2 to 2.3 children per woman.
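As a rough illustration of how a 95% interval of this kind can be obtained from a time-series model of the TFR, the sketch below simulates an AR(1) process forward and reads off the 2.5th and 97.5th percentiles at the horizon. The AR(1) specification, parameter values, and jump-off year are illustrative assumptions, not the model used by Statistics Netherlands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(1) model for the TFR: assumed parameters, not the official ones.
mu, phi, sigma = 1.75, 0.95, 0.06   # long-run level, persistence, annual shock s.d.
tfr_now = 1.7                       # assumed jump-off level
horizon = 2050 - 2024               # assumed number of years ahead
n_sim = 10_000

paths = np.empty((n_sim, horizon))
tfr = np.full(n_sim, tfr_now)
for t in range(horizon):
    tfr = mu + phi * (tfr - mu) + rng.normal(0.0, sigma, n_sim)
    paths[:, t] = tfr

lo, hi = np.percentile(paths[:, -1], [2.5, 97.5])
print(f"Simulated 95% interval for the TFR in 2050: {lo:.2f} to {hi:.2f} children per woman")
```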

2.
The accuracy of population forecasts depends in part upon the method chosen for forecasting the vital rates of fertility, mortality, and migration. Stochastic propagation-of-error calculations in demographic forecasting are hard to carry out precisely. This paper discusses this obstacle in stochastic cohort-component population forecasts. The uncertainty of forecasts is due to uncertain estimates of the jump-off population and to errors in the forecasts of the vital rates. Empirically based estimates of each source are presented and propagated through a simplified analytical model of population growth that allows assessment of the role of each component in the total error. Numerical estimates are based on the errors of an actual vector ARIMA forecast of the US female population; these results broadly agree with those of the analytical model. This work shows uncertainty in the fertility forecasts to be so much higher than that in the other sources that the latter can be ignored in the propagation-of-error calculations for cohorts born after the jump-off year of the forecast. A methodology is therefore presented which greatly simplifies the propagation-of-error calculations. It is noted, however, that the uncertainty of the jump-off population, migration, and mortality must still be considered in the propagation of error for those alive at the jump-off time of the forecast.
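A toy version of such a propagation-of-error exercise can be run by switching the error sources on one at a time in a simplified one-group growth model and comparing the resulting variances. All rates and error magnitudes below are assumptions chosen only to illustrate the mechanics, not the paper's empirical estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, horizon = 20_000, 30

# Assumed baseline rates and error magnitudes (illustrative only).
P0, sd_jumpoff = 1_000_000, 5_000      # jump-off population and its s.e.
f_bar, sd_f = 0.013, 0.002             # crude birth rate and forecast error s.d.
m_bar, sd_m = 0.009, 0.0005            # crude death rate and forecast error s.d.

def simulate(use_jumpoff, use_f, use_m):
    """Project the population with the error sources switched on or off."""
    P = P0 + (rng.normal(0, sd_jumpoff, n_sim) if use_jumpoff else 0.0)
    for _ in range(horizon):
        f = f_bar + (rng.normal(0, sd_f, n_sim) if use_f else 0.0)
        m = m_bar + (rng.normal(0, sd_m, n_sim) if use_m else 0.0)
        P = P * (1.0 + f - m)
    return P

total_var = simulate(True, True, True).var()
for name, flags in [("jump-off", (True, False, False)),
                    ("fertility", (False, True, False)),
                    ("mortality", (False, False, True))]:
    share = simulate(*flags).var() / total_var
    print(f"{name:9s} share of terminal variance: {share:.2f}")
```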

3.
We use ARCH time series models to derive model based prediction intervals for the Total Fertility Rate (TFR) in Norway, Sweden, Finland, and Denmark up to 2050. For the short term (5–10 yrs), expected TFR‐errors are compared with empirical forecast errors observed in historical population forecasts prepared by the statistical agencies in these countries since 1969. Medium‐term and long‐term (up to 50 years) errors are compared with error patterns based on so‐called naïve forecasts, i.e. forecasts that assume that recently observed TFR‐levels also apply for the future.
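A minimal sketch of how model-based prediction intervals can be generated by simulating an ARCH-type process forward; the random-walk-plus-ARCH(1) specification and parameter values are my own illustrative assumptions rather than the fitted Nordic models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed random walk for the TFR with ARCH(1) innovations: var_t = a0 + a1 * e_{t-1}^2.
a0, a1 = 0.0015, 0.4
tfr0, horizon, n_sim = 1.6, 50, 10_000

tfr = np.full(n_sim, tfr0)
e_prev = np.zeros(n_sim)
lower, upper = [], []
for t in range(horizon):
    sigma_t = np.sqrt(a0 + a1 * e_prev ** 2)   # conditional s.d. this year
    e = rng.normal(0.0, 1.0, n_sim) * sigma_t
    tfr = np.clip(tfr + e, 0.5, 4.0)           # keep trajectories in a plausible range
    e_prev = e
    lo, hi = np.percentile(tfr, [2.5, 97.5])
    lower.append(lo)
    upper.append(hi)

print("95% interval after 10 years:", round(lower[9], 2), "-", round(upper[9], 2))
print("95% interval after 50 years:", round(lower[-1], 2), "-", round(upper[-1], 2))
```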

4.
Probabilistic population forecasts are useful because they describe uncertainty in a quantitatively useful way. One approach (that we call LT) uses historical data to estimate stochastic models (e.g., a time series model) of vital rates, and then makes forecasts. Another (we call it RS) began as a kind of randomized scenario: we consider its simplest variant, in which expert opinion is used to make probability distributions for terminal vital rates, and smooth trajectories are followed over time. We use analysis and examples to show several key differences between these methods: serial correlations in the forecast are much smaller in LT; the variance in LT models of vital rates (especially fertility) is much higher than in RS models that are based on official expert scenarios; trajectories in LT are much more irregular than in RS; probability intervals in LT tend to widen faster over forecast time. Newer versions of RS have been developed that reduce or eliminate some of these differences.
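The contrast between the two families can be illustrated with simulated TFR trajectories: an "LT"-style random walk produces irregular paths with wide, fast-widening intervals, while an "RS"-style construction draws one terminal value per trajectory and moves smoothly toward it, giving smooth paths with strong correlation across forecast years. All parameter values below are illustrative assumptions, not estimates from either literature.

```python
import numpy as np

rng = np.random.default_rng(3)
horizon, n_sim = 50, 5_000

# "LT"-like trajectories: random walk with i.i.d. annual shocks -> irregular paths.
lt = 1.8 + np.cumsum(rng.normal(0.0, 0.05, (n_sim, horizon)), axis=1)

# "RS"-like trajectories: draw a terminal level per path, interpolate smoothly to it.
terminal = rng.normal(1.8, 0.25, (n_sim, 1))
weights = np.linspace(0.0, 1.0, horizon)[None, :]
rs = 1.8 * (1.0 - weights) + terminal * weights

for name, paths in [("LT", lt), ("RS", rs)]:
    width10 = np.ptp(np.percentile(paths[:, 9], [2.5, 97.5]))   # interval width, year 10
    width50 = np.ptp(np.percentile(paths[:, -1], [2.5, 97.5]))  # interval width, year 50
    corr = np.corrcoef(paths[:, 9], paths[:, -1])[0, 1]         # correlation across years
    rough = np.abs(np.diff(paths, axis=1)).mean()               # mean absolute annual change
    print(f"{name}: width(yr10)={width10:.2f} width(yr50)={width50:.2f} "
          f"corr(yr10,yr50)={corr:.2f} roughness={rough:.3f}")
```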

5.
Multi-population mortality forecasting has become an increasingly important area in actuarial science and demography, as a means to avoid long-run divergence in mortality projections. This paper aims to establish a unified state-space Bayesian framework to model, estimate, and forecast mortality rates in a multi-population context. In this regard, we reformulate the augmented common factor model to account for structural/trend changes in the mortality indexes. We conduct a Bayesian analysis to make inferences and generate forecasts so that process and parameter uncertainties can be considered simultaneously and appropriately. We illustrate the efficiency of our methodology through two case studies. Both point and probabilistic forecast evaluations are considered in the empirical analysis. The derived results support the fact that the incorporation of stochastic drifts mitigates the impact of the structural changes in the time indexes on mortality projections.
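The role of a stochastic drift can be sketched with a toy mortality index k_t: when the drift estimate discounts older observations (a crude stand-in for a drift that itself follows a random walk), the forecast adapts to a trend break instead of extrapolating the pre-break slope. This is a simplified illustration of the idea only, not the paper's state-space Bayesian model, and all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a mortality index k_t whose downward trend steepens halfway through.
T = 60
drift = np.where(np.arange(T) < 30, -0.5, -1.2)       # structural change in the trend
k = np.cumsum(drift + rng.normal(0.0, 0.3, T))

train, horizon = k[:50], 10
dk = np.diff(train)

# Fixed drift: average of all historical first differences (classic random walk with drift).
fixed_drift = dk.mean()

# Stochastic drift (crude stand-in): exponentially discounted mean of differences,
# so recent changes dominate -- mimicking a drift that is itself a random walk.
lam = 0.85
w = lam ** np.arange(len(dk) - 1, -1, -1)
adaptive_drift = np.sum(w * dk) / np.sum(w)

for name, d in [("fixed drift", fixed_drift), ("stochastic drift", adaptive_drift)]:
    forecast = train[-1] + d * np.arange(1, horizon + 1)
    rmse = np.sqrt(np.mean((forecast - k[50:]) ** 2))
    print(f"{name:17s} drift={d:.2f}  out-of-sample RMSE={rmse:.2f}")
```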

6.
This Briefing Paper is the first of a series of three about forecasting. The topic discussed here is the process of making 'constant adjustments' in forecasts. This process involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

7.
This paper considers forecasts with distribution functions that may vary through time. The forecast is achieved by time varying combinations of individual forecasts. We derive theoretical worst case bounds for general algorithms based on multiplicative updates of the combination weights. The bounds are useful for studying properties of forecast combinations when data are non-stationary and there is no unique best model.
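A minimal sketch of a time-varying forecast combination with multiplicative weight updates, in the spirit of the algorithms analysed; the squared-error loss, learning rate, and toy non-stationary data are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy target whose best individual forecaster changes halfway through the sample,
# i.e. the data are non-stationary and there is no single best model.
T = 200
y = np.where(np.arange(T) < 100, 1.0, -1.0) + rng.normal(0.0, 0.3, T)
experts = np.column_stack([np.ones(T), -np.ones(T), np.zeros(T)])  # three rival forecasts

eta = 0.5                                    # learning rate for the multiplicative update
w = np.ones(experts.shape[1]) / experts.shape[1]
combined_loss = 0.0

for t in range(T):
    forecast = w @ experts[t]                # weighted combination forecast
    combined_loss += (forecast - y[t]) ** 2
    losses = (experts[t] - y[t]) ** 2        # each expert's squared error this period
    w = w * np.exp(-eta * losses)            # multiplicative update of the weights
    w = w / w.sum()                          # renormalise to a probability vector

best_single = ((experts - y[:, None]) ** 2).sum(axis=0).min()
print(f"combination loss={combined_loss:.1f}, best single expert loss={best_single:.1f}")
print("final weights:", np.round(w, 3))
```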

8.
Previous work on characterising the distribution of forecast errors in time series models by statistics such as the asymptotic mean square error has assumed that observations used in estimating parameters are statistically independent of those used to construct the forecasts themselves. This assumption is quite unrealistic in practical situations and the present paper is intended to tackle the question of how the statistical dependence between the parameter estimates and the final period observations used to generate forecasts affects the sampling distribution of the forecast errors. We concentrate on the first-order autoregression and, for this model, show that the conditional distribution of forecast errors given the final period observation is skewed towards the origin and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final period observation.
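The dependence effect can be explored by simulation: estimate an AR(1) by least squares, forecast one step ahead from the same sample, and inspect the distribution of forecast errors conditional on the final in-sample observation. A small Monte Carlo sketch, with sample size, autoregressive parameter, and conditioning rule chosen as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

phi, T, n_rep = 0.7, 50, 20_000
errors, last_obs = [], []

for _ in range(n_rep):
    # Simulate an AR(1), estimate phi by OLS, forecast one step ahead from the
    # same sample, and record the resulting forecast error.
    e = rng.normal(0.0, 1.0, T + 1)
    y = np.empty(T + 1)
    y[0] = e[0] / np.sqrt(1 - phi ** 2)               # stationary starting value
    for t in range(1, T + 1):
        y[t] = phi * y[t - 1] + e[t]
    phi_hat = (y[1:T] @ y[:T - 1]) / (y[:T - 1] @ y[:T - 1])
    forecast = phi_hat * y[T - 1]                     # uses the final in-sample observation
    errors.append(y[T] - forecast)
    last_obs.append(y[T - 1])

errors, last_obs = np.array(errors), np.array(last_obs)

# Skewness of the forecast errors conditional on a large positive final observation.
big = last_obs > np.quantile(last_obs, 0.9)
cond = errors[big]
skew = np.mean((cond - cond.mean()) ** 3) / cond.std() ** 3
print(f"skewness of forecast error | large positive y_T: {skew:.2f}")
```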

9.
This paper considers nonparametric and semiparametric regression models subject to a monotonicity constraint. We use bagging as an alternative to the approach of Hall and Huang (2001). Asymptotic properties of our proposed estimators and forecasts are established. Monte Carlo simulation is conducted to show their finite sample performance. An application to predicting the equity premium is provided for illustration. We introduce a new forecast evaluation criterion based on second-order stochastic dominance in the size of forecast errors and compare models across different sizes of forecast errors. Imposing the monotonicity constraint can reduce the chance of making large forecast errors.

10.
This Briefing Paper is the last of a series of three about forecasting. In this one we examine our forecasting record; it complements the February paper in which we analysed the properties of our forecasting model in terms of the error bands attached to the central forecast.
There are many ways of measuring forecasting errors, and in the first part of this Briefing Paper we describe briefly how we have tackled the problem. (A more detailed analysis can be found in the Appendix.) In Part II we report and comment upon the errors in our forecasts of annual growth rates and show how our forecasting performance has improved over the years. In Part III we focus on quarterly forecasts up to 8 quarters ahead, and compare our forecasting errors with measurement errors in the official statistics; with the estimation errors built into our forecast equations; and with the stochastic model errors we reported last February. A brief summary of the main conclusions is given below.

11.
Stochastic demographic forecasting
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary....Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."

12.
Historical evidence shows that demographic forecasts, including mortality forecasts, have often been grossly in error. One consequence of this is that forecasts are updated frequently. How should individuals or institutions react to updates, given that these are likewise expected to be uncertain? We discuss this problem in the context of a life cycle saving and labor supply problem, in which a cohort of workers decides how much to work and how much to save for mutual pensions. Mortality is stochastic and point forecasts are updated regularly. A Markovian approximation for the predictive distribution of mortality is derived. This renders the model computationally tractable, and allows us to compare a theoretically optimal rational expectations solution to a strategy in which the cohort merely updates the life cycle plan to match each updated mortality forecast. The implications of the analyses for overlapping generations modeling of pension systems are pointed out.

13.
The U.S. Census Bureau is approaching a critical decision regarding a major facet of its methodology for forecasting the United States population. In the past, it has relied on alternative scenarios, low, medium, and high, to reflect varying interpretations of current trends in fertility, mortality, and international migration to forecast population. This approach has limitations that have been noted in the recent literature on population forecasting. Census Bureau researchers are embarking on an attempt to incorporate probabilistic reasoning to forecast prediction intervals around point forecasts to future dates. The current literature offers a choice of approaches to this problem. We are opting to employ formal time series modeling of parameters of fertility, mortality, and international migration, with stochastic renewal processes. The endeavor is complicated by the administrative necessity to produce a large amount of racial and Hispanic origin detail in the population, as well as the ubiquitous cross-categories of age and sex. As official population forecasts must meet user demand, we are faced with the added challenge of presenting and supporting the resulting product in a way that is comprehensible to users, many of whom have little or no comprehension of the technical forecasting literature, and are accustomed to simple, deterministic answers. We may well find a need to modify our strategy, depending on the realities that may emerge from the limitations of data, the administrative requirements of the product, and the diverse needs of our user community.

14.
"This paper presents a stochastic version of the demographic cohort-component method of forecasting future population. In this model the sizes of future age-sex groups are non-linear functions of random future vital rates. An approximation to their joint distribution can be obtained using linear approximations or simulation. A stochastic formulation points to the need for new empirical work on both the autocorrelations and the cross-correlations of the vital rates. Problems of forecasting declining mortality and fluctuating fertility are contrasted. A volatility measure for fertility is presented. The model can be used to calculate approximate prediction intervals for births using data from deterministic cohort-component forecasts. The paper compares the use of expert opinion in mortality forecasting with simple extrapolation techniques to see how useful each approach has been in the past. Data from the United States suggest that expert opinion may have caused systematic bias in the forecasts."

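A stripped-down stochastic cohort-component projection of the kind described in item 14 can be simulated directly. The sketch below uses three 20-year female age groups, assumed vital rates, and assumed rate-error distributions purely for illustration, and reports an approximate prediction interval for births.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three 20-year female age groups (0-19, 20-39, 40-59); project in 20-year steps.
pop0 = np.array([1.00e6, 1.10e6, 1.20e6])
F_bar, s0_bar, s1_bar = 0.95, 0.98, 0.97   # assumed period fertility / survivorship
sd_F, sd_s = 0.15, 0.01                    # assumed forecast-error s.d. of the rates
steps, n_sim = 3, 10_000

births_last = np.empty(n_sim)
for i in range(n_sim):
    pop = pop0.copy()
    for _ in range(steps):
        F = max(F_bar + rng.normal(0.0, sd_F), 0.0)    # random future fertility rate
        s0 = min(s0_bar + rng.normal(0.0, sd_s), 1.0)  # random survivorship, group 0 -> 1
        s1 = min(s1_bar + rng.normal(0.0, sd_s), 1.0)  # random survivorship, group 1 -> 2
        births = F * pop[1]                            # daughters born to the 20-39 group
        pop = np.array([births, s0 * pop[0], s1 * pop[1]])
    births_last[i] = pop[0]

lo, hi = np.percentile(births_last, [2.5, 97.5])
print(f"Approximate 95% prediction interval for births in the final step: "
      f"{lo:,.0f} to {hi:,.0f}")
```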
15.
Does the use of information on the past history of the nominal interest rates and inflation entail improvement in forecasts of the ex ante real interest rate over its forecasts obtained from using just the past history of the realized real interest rates? To answer this question we set up a univariate unobserved components model for the realized real interest rates and a bivariate model for the nominal rate and inflation which imposes cointegration restrictions between them. The two models are estimated under normality with the Kalman filter. It is found that the error-correction model provides more accurate one-period ahead forecasts of the real rate within the estimation sample whereas the unobserved components model yields forecasts with smaller forecast variances. In the post-sample period, the forecasts from the bivariate model are not only more accurate but also have tighter confidence bounds than the forecasts from the unobserved components model.
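A hand-rolled sketch of the univariate unobserved-components (local level) side of such a comparison, with the Kalman filter producing one-step-ahead forecasts and forecast variances; the simulated data and variance parameters are assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate a realized real interest rate as a slowly moving ex ante rate plus noise.
T = 120
ex_ante = np.cumsum(rng.normal(0.0, 0.1, T)) + 2.0     # latent level (random walk)
realized = ex_ante + rng.normal(0.0, 0.8, T)           # realized rate = level + noise

# Kalman filter for a local level model: y_t = mu_t + e_t,  mu_t = mu_{t-1} + u_t.
q, r = 0.01, 0.64            # assumed state and observation error variances
mu, P = realized[0], 1.0     # initial state mean and variance
forecasts, fvars = [], []
for y in realized[1:]:
    # prediction step
    mu_pred, P_pred = mu, P + q
    forecasts.append(mu_pred)          # one-step-ahead forecast of the realized rate
    fvars.append(P_pred + r)           # its forecast variance
    # update step
    K = P_pred / (P_pred + r)          # Kalman gain
    mu = mu_pred + K * (y - mu_pred)
    P = (1.0 - K) * P_pred

rmse = np.sqrt(np.mean((np.array(forecasts) - realized[1:]) ** 2))
print(f"one-step RMSE: {rmse:.2f}, average forecast variance: {np.mean(fvars):.2f}")
```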

16.
《Economic Systems》2014,38(2):194-204
Understanding how agents formulate their expectations about Fed behavior is important for market participants because they can potentially use this information to make more accurate estimates of stock and bond prices. Although it is commonly assumed that agents learn over time, there is scant empirical evidence in support of this assumption. Thus, in this paper we test if the forecast of the three month T-bill rate in the Survey of Professional Forecasters (SPF) is consistent with least squares learning when there are discrete shifts in monetary policy. We first derive the mean, variance and autocovariances of the forecast errors from a recursive least squares learning algorithm when there are breaks in the structure of the model. We then apply the Bai and Perron (1998) test for structural change to a forecasting model for the three month T-bill rate in order to identify changes in monetary policy. Having identified the policy regimes, we then estimate the implied biases in the interest rate forecasts within each regime. We find that when the forecast errors from the SPF are corrected for the biases due to shifts in policy, the forecasts are consistent with least squares learning.
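A compact sketch of the recursive least squares learning that underlies such a test, applied to simulated data with a discrete shift; the regressors, initialisation, and data-generating process are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a T-bill-like series whose mean level shifts (a discrete policy change).
T = 300
level = np.where(np.arange(T) < 150, 3.0, 5.0)               # regime shift in the mean
x = np.column_stack([np.ones(T), rng.normal(0.0, 1.0, T)])   # constant + one indicator
y = level + 0.5 * x[:, 1] + rng.normal(0.0, 0.3, T)

# Recursive least squares: agents update their coefficient estimates each period.
beta = np.zeros(2)
R = np.eye(2) * 10.0          # initial P matrix (inverse "precision"), deliberately diffuse
forecast_errors = []
for t in range(T):
    forecast_errors.append(y[t] - x[t] @ beta)               # forecast made before seeing y_t
    Rx = R @ x[t]
    K = Rx / (1.0 + x[t] @ Rx)                               # RLS gain vector
    beta = beta + K * (y[t] - x[t] @ beta)
    R = R - np.outer(K, Rx)

errors = np.array(forecast_errors)
print("mean forecast error just before the policy shift:", errors[100:150].mean().round(2))
print("mean forecast error just after the policy shift :", errors[150:200].mean().round(2))
```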

17.
We propose a novel mixed-frequency dynamic factor model with time-varying parameters and stochastic volatility for macroeconomic nowcasting and develop a fast estimation algorithm. This enables us to generate forecast densities based on a large space of factor models. We apply our framework to nowcast US GDP growth in real time. Our results reveal that stochastic volatility seems to improve the accuracy of point forecasts the most, compared to the constant-parameter factor model. These gains are most prominent during unstable periods such as the Covid-19 pandemic. Finally, we highlight indicators driving the US GDP growth forecasts and associated downside risks in real time.

18.
This study analyzes the consequences of the capitalization of development expenditures under IAS 38 on analysts’ earnings forecasts. We use unique hand‐collected data in a sample of highly research and development (R&D)‐intensive German‐listed firms over the period 2000–2007. We find that the capitalization of development costs is significantly associated with both higher individual analysts’ forecast errors and forecast dispersion. This suggests that the increasing complexity surrounding the capitalization of development costs negatively impacts forecast accuracy. However, for firms with high underlying environmental uncertainty, forecast errors are negatively associated with capitalized development expenditures. This indicates that the negative impact of increased complexity on forecast accuracy can be outweighed by the information contained in the signals from capitalized development costs when the underlying environmental uncertainty is high. The findings contribute to the ongoing controversial debate on the accounting for self‐generated intangible assets. Our results provide useful insights on the link between capitalization of development costs, environmental uncertainty, and analysts’ forecasts for accounting academics and practitioners alike.

19.
A stochastic coefficients model developed by Swamy and Tinsley is used to forecast agricultural investment. In two sets of out-of-sample forecasts, one for 5 years, the other for 10 years, the Swamy-Tinsley stochastic coefficients model outperforms competing fixed and stochastic coefficients empirical models of agricultural investment for a wide array of risk functions. The Swamy-Tinsley stochastic coefficients investment model forecasts continued declines in net investment for farm machinery, with greater declines toward the end of the forecast period. The Swamy-Tinsley method produced better predictions than both stochastic and fixed-coefficients competitors.

20.
This paper aims to explore the potential effects of trend type, noise and forecast horizon on experts' and novices' probabilistic forecasts. The subjects made forecasts over six time horizons from simulated monthly currency series based on a random walk, with zero, constant and stochastic drift, at two noise levels. The difference between the Mean Absolute Probability Score of each participant and an AR(1) model was used to evaluate performance. The results showed that the experts performed better than the novices, although worse than the model except in the case of the zero-drift series. No clear expertise effects emerged across horizons, although subjects' performance relative to the model improved as the horizon increased. Possible explanations are offered and some suggestions for future research are outlined.
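One plausible reading of such a probability-score comparison is sketched below; the definition of the score, the event being forecast, and the AR(1) benchmark construction are all my assumptions rather than the study's exact design.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)

# Simulated monthly exchange-rate-like series: random walk with drift plus noise.
T = 120
series = 100 + np.cumsum(rng.normal(0.05, 0.6, T))

# Event forecast: probability that next month's value exceeds this month's.
# Benchmark: fit an AR(1) to first differences and convert its point forecast
# into a probability using the residual standard deviation.
d = np.diff(series)
phi = (d[1:] @ d[:-1]) / (d[:-1] @ d[:-1])
resid_sd = np.std(d[1:] - phi * d[:-1])

outcomes, p_model, p_judge = [], [], []
for t in range(60, T - 1):
    outcomes.append(1.0 if series[t + 1] > series[t] else 0.0)
    mean_change = phi * (series[t] - series[t - 1])
    p_model.append(norm.cdf(mean_change / resid_sd))       # model probability of a rise
    p_judge.append(0.6)                                     # stand-in "judgmental" forecast

def mean_abs_prob_score(p, y):
    """Assumed definition: mean absolute gap between stated probability and outcome."""
    p, y = np.array(p), np.array(y)
    return np.mean(np.abs(p - y))

print("AR(1) model score :", round(mean_abs_prob_score(p_model, outcomes), 3))
print("judgmental score  :", round(mean_abs_prob_score(p_judge, outcomes), 3))
```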

