Similar Articles
20 similar articles found (search time: 109 ms).
1.
This discussion of modeling focuses on the difficulties in long-term, time-series forecasting of US fertility. Four possibilities are suggested. One difficulty with the traditional approach of using high or low bounds on fertility and mortality is that forecast errors are perfectly correlated over time, which means there is no cancellation of errors over time. The width of future fertility forecast intervals first increases, then stabilizes, and then decreases, instead of remaining stable; this occurs because the number of terms being averaged increases with horizon length. Alho and Spencer attempted to reduce these errors using time-series methods. Other difficulties are the idiosyncratic behavior of age-specific fertility over time, biological bounds for total fertility rates (TFR) of zero and 16, the integration of knowledge about fertility behavior that narrows the bounds, the unlikelihood of some probability outcomes of stochastic models with a normally distributed error term, the small relative change in TFR between years, a US fertility cycle of about 40 years, the limited value of extrapolating past trends in child and infant mortality, and the unlikelihood of reversals in mortality and contraceptive use trends. Another problem is the unsuitability of long-term forecasts. New methods include a model which estimates a one-parameter family of fertility schedules and then forecasts that single parameter. Another method is a logistic transformation to account for prior information on the bounds on fertility estimates; this method is similar to Bayesian methods for ARMA models developed by Monahan. Models include information on the ultimate level of fertility and assume that the equilibrium level is a stochastic process trending over time. The horizon forecast method is preferred unless the effects of the outliers are known. Estimates of fertility are presented for the equilibrium-constrained and logistic-transformed models. Forecasts of age-specific fertility rates can be calculated from forecasts of the fertility index (a single time-varying parameter). The model of fertility fits poorly at older ages but captures some of the wide swings in the historical pattern; age variations are not accounted for very well. Long-term forecasts tell a great deal about the uncertainty of forecast errors. Estimates are too sensitive to model specification for accuracy and ignore the biological and socioeconomic context.
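The logistic transformation mentioned in this abstract can be sketched in a few lines: a minimal illustration, assuming the biological TFR bounds of 0 and 16 stated above, an ARMA(1,1) model fitted with statsmodels, and a made-up TFR series. The paper's actual model specification and data are not reproduced here.

```python
# Minimal sketch: forecast the TFR on a logit scale so that back-transformed
# forecasts respect the bounds [0, 16]. The ARMA order and the illustrative
# series are assumptions, not the paper's.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

LOWER, UPPER = 0.0, 16.0          # biological bounds on the total fertility rate

def to_logit(tfr):
    p = (tfr - LOWER) / (UPPER - LOWER)
    return np.log(p / (1.0 - p))

def from_logit(z):
    return LOWER + (UPPER - LOWER) / (1.0 + np.exp(-z))

# Illustrative TFR series (hypothetical annual observations)
tfr = np.array([3.6, 3.5, 3.2, 2.9, 2.5, 2.1, 1.9, 1.8, 1.9, 2.0, 2.1, 2.0])

res = ARIMA(to_logit(tfr), order=(1, 0, 1)).fit()
forecast = res.get_forecast(steps=10)
point = from_logit(forecast.predicted_mean)               # bounded point forecast
lo, hi = from_logit(forecast.conf_int(alpha=0.05)).T      # bounded 95% interval
print(point.round(2), lo.round(2), hi.round(2))
```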

2.
Stochastic methods of multi-state population modeling are less developed than methods for single states for two reasons. First, the structure of a multi-state population is inherently more complex than that of a single state because of state-to-state transitions. Second, estimates of cross-state correlations of the vital processes are largely uncharted territory. Unlike in multi-state life table theory, in forecasting applications the role of directed flows from state to state is often less important than the overall coherence of the assumptions concerning the vital processes. This is the case in the context of the European Union. Thus, a simplified approach is feasible, in which migration is represented by state-specific net numbers of migrants. This allows the use of existing single-state software, when simulations are suitably organized, in a multi-state setting. To address the second problem, we provide empirical estimates of cross-country covariances in the forecast uncertainty of fertility, mortality, and net migration. Together with point forecasts of these parameters that are coherent across countries, this produces coherent forecasts for aggregates of countries. The finding is that models for intermediate correlations are necessary for a proper accounting of forecast uncertainty at the aggregate level, in this case the European Union.
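The role of cross-country correlations in aggregate uncertainty can be illustrated with a small sketch; the country forecast standard deviations and correlation values below are illustrative assumptions, not the paper's estimates.

```python
# Minimal sketch: how an assumed cross-country correlation in forecast errors
# changes the uncertainty of an aggregate (e.g. EU-wide) population forecast.
import numpy as np

sd = np.array([0.8, 0.5, 1.2, 0.3])   # forecast s.d. of each country's population (millions)

def aggregate_sd(sd, rho):
    """Standard deviation of the sum under a common pairwise correlation rho."""
    n = len(sd)
    corr = np.full((n, n), rho)
    np.fill_diagonal(corr, 1.0)
    cov = corr * np.outer(sd, sd)
    return np.sqrt(cov.sum())

for rho in (0.0, 0.5, 1.0):            # independent, intermediate, perfectly correlated
    print(f"rho = {rho:.1f}: aggregate s.d. = {aggregate_sd(sd, rho):.2f}")
```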

3.
Stochastic demographic forecasting
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary....Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."  相似文献   

4.
In recent years Statistics Netherlands has published several stochastic population forecasts. The degree of uncertainty of the future population is assessed on the basis of assumptions about the probability distribution of future fertility, mortality and migration. The assumptions on fertility are based on an analysis of historic forecasts of the total fertility rate (TFR), on time-series models of observations of the TFR, and on expert knowledge. This latter argument-based approach refers to the TFR distinguished by birth order. In the most recent Dutch forecast the 95% forecast interval of the total fertility rate in 2050 is assumed to range from 1.2 to 2.3 children per woman.

5.
"This paper presents a stochastic version of the demographic cohort-component method of forecasting future population. In this model the sizes of future age-sex groups are non-linear functions of random future vital rates. An approximation to their joint distribution can be obtained using linear approximations or simulation. A stochastic formulation points to the need for new empirical work on both the autocorrelations and the cross-correlations of the vital rates. Problems of forecasting declining mortality and fluctuating fertility are contrasted. A volatility measure for fertility is presented. The model can be used to calculate approximate prediction intervals for births using data from deterministic cohort-component forecasts. The paper compares the use of expert opinion in mortality forecasting with simple extrapolation techniques to see how useful each approach has been in the past. Data from the United States suggest that expert opinion may have caused systematic bias in the forecasts."  相似文献   

6.
The U.S. Census Bureau is approaching a critical decision regarding a major facet of its methodology for forecasting the United States population. In the past, it has relied on alternative scenarios, low, medium, and high, to reflect varying interpretations of current trends in fertility, mortality, and international migration to forecast population. This approach has limitations that have been noted in the recent literature on population forecasting. Census Bureau researchers are embarking on an attempt to incorporate probabilistic reasoning to forecast prediction intervals around point forecasts to future dates. The current literature offers a choice of approaches to this problem. We are opting to employ formal time series modeling of parameters of fertility, mortality, and international migration, with stochastic renewal processes. The endeavor is complicated by the administrative necessity to produce a large amount of racial and Hispanic origin detail in the population, as well as the ubiquitous cross-categories of age and sex. As official population forecasts must meet user demand, we are faced with the added challenge of presenting and supporting the resulting product in a way that is comprehensible to users, many of whom have little or no comprehension of the technical forecasting literature, and are accustomed to simple, deterministic answers. We may well find a need to modify our strategy, depending on the realities that may emerge from the limitations of data, the administrative requirements of the product, and the diverse needs of our user community.

7.
Probabilistic population forecasts are useful because they describe uncertainty in a quantitatively useful way. One approach (that we call LT) uses historical data to estimate stochastic models (e.g., a time series model) of vital rates, and then makes forecasts. Another (we call it RS) began as a kind of randomized scenario: we consider its simplest variant, in which expert opinion is used to make probability distributions for terminal vital rates, and smooth trajectories are followed over time. We use analysis and examples to show several key differences between these methods: serial correlations in the forecast are much smaller in LT; the variance in LT models of vital rates (especially fertility) is much higher than in RS models that are based on official expert scenarios; trajectories in LT are much more irregular than in RS; probability intervals in LT tend to widen faster over forecast time. Newer versions of RS have been developed that reduce or eliminate some of these differences.
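The contrast between the two families of methods can be illustrated with a small simulation; the random-walk innovation variance, the terminal-value spread, and the horizon below are illustrative assumptions, not calibrated to either method.

```python
# Minimal sketch contrasting "LT"-style trajectories from an estimated
# time-series model (here a random walk) with "RS"-style randomized scenarios
# that draw a terminal value and follow a smooth path towards it.
import numpy as np

rng = np.random.default_rng(1)
H, n_sim = 50, 5_000
start = 1.9                                     # jump-off TFR, illustrative

# LT: random-walk trajectories, innovation s.d. 0.05 per year (irregular paths)
lt = start + np.cumsum(rng.normal(0, 0.05, (n_sim, H)), axis=1)

# RS: draw a terminal TFR, then interpolate smoothly from the jump-off value
terminal = rng.normal(start, 0.25, n_sim)
rs = start + np.outer(terminal - start, np.linspace(1 / H, 1.0, H))

for name, traj in (("LT", lt), ("RS", rs)):
    width = np.percentile(traj, 97.5, axis=0) - np.percentile(traj, 2.5, axis=0)
    print(f"{name}: 95% interval width at h=10: {width[9]:.2f}, at h=50: {width[49]:.2f}")
```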

8.
This Briefing Paper is the first of a series of three on our forecasting procedures; the topic discussed here is the process of making 'constant adjustments' in forecasts. This process involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

9.
This paper develops methods for estimating and forecasting in Bayesian panel vector autoregressions of large dimensions with time-varying parameters and stochastic volatility. We exploit a hierarchical prior that takes into account possible pooling restrictions involving both VAR coefficients and the error covariance matrix, and propose a Bayesian dynamic learning procedure that controls for various sources of model uncertainty. We tackle computational concerns by means of a simulation-free algorithm that relies on analytical approximations to the posterior. We use our methods to forecast inflation rates in the eurozone and show that these forecasts are superior to alternative methods for large vector autoregressions.

10.
This Briefing Paper is the last of a series of three about forecasting. In this one we examine our forecasting record; it complements the February paper in which we analysed the properties of our forecasting model in terms of the error bands attached to the central forecast.
There are many ways of measuring forecasting errors, and in the first part of this Briefing Paper we describe briefly how we have tackled the problem. (A more detailed analysis can be found in the Appendix.) In Part II we report and comment upon the errors in our forecasts of annual growth rates and show how our forecasting performance has improved over the years. In Part III we focus on quarterly forecasts up to 8 quarters ahead, and compare our forecasting errors with measurement errors in the official statistics; with the estimation errors built into our forecast equations; and with the stochastic model errors we reported last February. A brief summary of the main conclusions is given below.

11.
Previous work on characterising the distribution of forecast errors in time series models by statistics such as the asymptotic mean square error has assumed that observations used in estimating parameters are statistically independent of those used to construct the forecasts themselves. This assumption is quite unrealistic in practical situations and the present paper is intended to tackle the question of how the statistical dependence between the parameter estimates and the final period observations used to generate forecasts affects the sampling distribution of the forecast errors. We concentrate on the first-order autoregression and, for this model, show that the conditional distribution of forecast errors given the final period observation is skewed towards the origin and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final period observation.
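A Monte Carlo sketch of the effect described above; the AR(1) coefficient, sample size, and conditioning threshold are illustrative assumptions, not the paper's analytical results.

```python
# Minimal Monte Carlo sketch: in an AR(1) model, the parameter estimate and the
# final observation used to form the forecast are dependent, and the forecast
# error, conditional on a large final observation, departs from symmetry.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
phi, T, n_sim = 0.8, 50, 20_000
errors = []
for _ in range(n_sim):
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        y[t] = phi * y[t - 1] + rng.normal()
    # OLS estimate of phi using observations up to the final period y[T-1]
    phi_hat = np.sum(y[1:T] * y[0:T - 1]) / np.sum(y[0:T - 1] ** 2)
    if y[T - 1] > 1.0:                         # condition on a large final observation
        errors.append(y[T] - phi_hat * y[T - 1])

errors = np.array(errors)
print("conditional mean:", errors.mean().round(3),
      "skewness:", stats.skew(errors).round(3))
```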

12.
Local and state governments depend on small area population forecasts to make important decisions concerning the development of local infrastructure and services. Despite their importance, current methods often produce highly inaccurate forecasts. Recent years have witnessed promising developments in time series forecasting using Machine Learning across a wide range of social and economic variables. However, limited work has been undertaken to investigate the potential application of Machine Learning methods in demography, particularly for small area population forecasting. In this paper we describe the development of two Long Short-Term Memory (LSTM) network architectures for small area populations. We employ the Keras Tuner to select layer unit numbers, vary the window width of input data, and apply a double training and validation regime which supports work with short time series and prioritises later sequence values for forecasts. These methods are transferable and can be applied to other data sets. Retrospective small area population forecasts for Australia were created for the periods 2006–16 and 2011–16. Model performance was evaluated against actual data and two benchmark methods (LIN/EXP and CSP-VSG). We also evaluated the impact of constraining small area population forecasts to an independent national forecast. Forecast accuracy was influenced by jump-off year, constraining, area size, and remoteness. The LIN/EXP model was the best performing method for the 2011-based forecasts whilst deep learning methods performed best for the 2006-based forecasts, including significant improvements in the accuracy of 10-year forecasts. However, benchmark methods were consistently more accurate for more remote areas and for those with populations below 5,000.
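A minimal sketch of an LSTM forecaster of the general kind described above, with Keras Tuner choosing the number of units; the window width, the synthetic data, and the search settings are illustrative assumptions and do not reproduce the paper's architectures.

```python
# Minimal sketch: tune the LSTM unit count with Keras Tuner and forecast the
# next value from a fixed window of past populations. Assumes keras and
# keras_tuner are installed; data are random placeholders.
import numpy as np
import keras
import keras_tuner

WINDOW = 5                                   # assumed input window width (years)

def build_model(hp):
    units = hp.Int("units", min_value=8, max_value=64, step=8)
    model = keras.Sequential([
        keras.layers.Input(shape=(WINDOW, 1)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical training data: one window of past populations -> next-year value
X = np.random.rand(200, WINDOW, 1).astype("float32")
y = np.random.rand(200, 1).astype("float32")

tuner = keras_tuner.RandomSearch(build_model, objective="val_loss",
                                 max_trials=5, overwrite=True)
tuner.search(X, y, validation_split=0.2, epochs=10, verbose=0)
best = tuner.get_best_models(num_models=1)[0]
print(best.predict(X[:3], verbose=0))
```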

13.
Weather forecasts are an important input to many electricity demand forecasting models. This study investigates the use of weather ensemble predictions in electricity demand forecasting for lead times from 1 to 10 days ahead. A weather ensemble prediction consists of 51 scenarios for a weather variable. We use these scenarios to produce 51 scenarios for the weather-related component of electricity demand. The results show that the average of the demand scenarios is a more accurate demand forecast than that produced using traditional weather forecasts. We use the distribution of the demand scenarios to estimate the demand forecast uncertainty. This compares favourably with estimates produced using univariate volatility forecasting methods.
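The ensemble idea can be sketched in a few lines; the demand model and temperature scenarios below are illustrative assumptions, not the study's model.

```python
# Minimal sketch: map each of the 51 weather scenarios through a demand model,
# use the scenario mean as the point forecast and the scenario spread as a
# measure of forecast uncertainty.
import numpy as np

rng = np.random.default_rng(3)
temperature_scenarios = rng.normal(5.0, 2.0, 51)       # 51 ensemble members (deg C)

def demand_model(temp_c):
    """Hypothetical weather-related demand component (GW): higher when colder."""
    return 30.0 + 0.8 * np.maximum(15.0 - temp_c, 0.0)

demand_scenarios = demand_model(temperature_scenarios)
point_forecast = demand_scenarios.mean()
interval = np.percentile(demand_scenarios, [5, 95])
print(f"point forecast: {point_forecast:.1f} GW, 90% interval: {interval.round(1)}")
```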

14.
Simpler Probabilistic Population Forecasts: Making Scenarios Work
The traditional high-low-medium scenario approach to quantifying uncertainty in population forecasts has been criticized as lacking probabilistic meaning and consistency. This paper shows, under certain assumptions, how appropriately calibrated scenarios can be used to approximate the uncertainty intervals on future population size and age structure obtained with fully stochastic forecasts. As many forecasting organizations already produce scenarios and because dealing with them is familiar territory, the methods presented here offer an attractive intermediate position between probabilistically inconsistent scenario analysis and fully stochastic forecasts.

15.
Macroeconomic data are subject to data revisions. Yet, the usual way of generating real-time density forecasts from Bayesian Vector Autoregressive (BVAR) models makes no allowance for data uncertainty from future data revisions. We develop methods of allowing for data uncertainty when forecasting with BVAR models with stochastic volatility. First, the BVAR forecasting model is estimated on real-time vintages. Second, the BVAR model is jointly estimated with a model of data revisions such that forecasts are conditioned on estimates of the 'true' values. We find that this second method generally improves upon conventional practice for density forecasting, especially for the United States.

16.
Socio, 1986, 20(1): 51-55
Studies have suggested that a composite forecast may be preferred to a single forecast. In addition, forecasting objectives are often conflicting. For example, one forecast may have the smallest sum of absolute forecast errors, while another has the smallest maximum absolute error. This paper examines the appropriateness of using multiple objective linear programming to determine weighted linear combinations of forecasts to be used as inputs for policy analysis. An example is presented where the methodology is used to combine the forecasts for several policy variables. The forecasts are selected from large econometric, consensus, and univariate time series models.
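A scalarized stand-in for the multiple objective linear programming approach can be sketched with a single linear program that trades off mean absolute error against maximum absolute error; the forecasts, outturns, and trade-off weight are illustrative assumptions, not the paper's example.

```python
# Minimal sketch: choose combination weights w (summing to one) by solving a
# linear program that minimizes a weighted sum of the mean absolute error and
# the maximum absolute error of the combined forecast.
import numpy as np
from scipy.optimize import linprog

y = np.array([10.0, 11.0, 12.5, 13.0, 14.2])            # actual values
F = np.array([[10.5, 9.8], [11.4, 10.7], [12.0, 12.9],  # two competing forecasts
              [13.5, 12.6], [13.8, 14.9]])
T, k = F.shape
alpha = 0.5                                              # weight on mean absolute error

# variable order: [w_1..w_k, u_1..u_T, m], with |error_t| <= u_t and u_t <= m
c = np.concatenate([np.zeros(k), np.full(T, alpha / T), [1.0 - alpha]])
A_ub, b_ub = [], []
for t in range(T):
    A_ub.append(np.concatenate([-F[t], -np.eye(T)[t], [0.0]]))        #  y_t - F_t.w <= u_t
    b_ub.append(-y[t])
    A_ub.append(np.concatenate([F[t], -np.eye(T)[t], [0.0]]))         # -(y_t - F_t.w) <= u_t
    b_ub.append(y[t])
    A_ub.append(np.concatenate([np.zeros(k), np.eye(T)[t], [-1.0]]))  # u_t <= m
    b_ub.append(0.0)
A_eq = [np.concatenate([np.ones(k), np.zeros(T), [0.0]])]             # weights sum to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
print("combination weights:", res.x[:k].round(3))
```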

17.
Peter A. Rogerson, Socio, 1983, 17(5-6): 373-380
When forecasting aggregate variables, a choice must often be made either to add up individual forecasts made at a disaggregate level or simply to forecast at the aggregate level. The presence of heterogeneity introduces aggregation bias and favours the disaggregate approach, while the presence of data and specification errors introduces relatively large variances into the disaggregate forecasts, favouring the aggregate approach. It is suggested that the mean square error is useful in evaluating the combined effects of heterogeneity and specification and data errors, and in facilitating comparisons between aggregate and disaggregate approaches to aggregate variable forecasting.
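The trade-off can be illustrated with a small simulation in which mean square error is computed for both approaches; all populations, growth rates, and error variances are illustrative assumptions.

```python
# Minimal sketch: summing disaggregate forecasts removes aggregation bias from
# heterogeneous growth rates but adds noise from errors in each sub-unit; mean
# square error lets the two approaches be compared.
import numpy as np

rng = np.random.default_rng(4)
pop = np.array([100.0, 50.0, 10.0])        # sub-area populations
growth = np.array([0.01, 0.03, 0.10])      # heterogeneous true growth rates
true_total = (pop * (1 + growth)).sum()    # true total at the forecast date

n_sim, sigma = 10_000, 2.0                 # forecast error s.d. per forecast
agg_err, disagg_err = [], []
for _ in range(n_sim):
    # aggregate approach: one growth rate for the total (here the unweighted mean
    # of the sub-area rates), so heterogeneity introduces aggregation bias
    agg_forecast = pop.sum() * (1 + growth.mean()) + rng.normal(0, sigma)
    # disaggregate approach: forecast each sub-area with its own error, then sum,
    # so there is no aggregation bias but three error terms instead of one
    disagg_forecast = (pop * (1 + growth) + rng.normal(0, sigma, 3)).sum()
    agg_err.append(agg_forecast - true_total)
    disagg_err.append(disagg_forecast - true_total)

print("aggregate MSE:   ", round(float(np.mean(np.square(agg_err))), 2))
print("disaggregate MSE:", round(float(np.mean(np.square(disagg_err))), 2))
```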

18.
Multi-population mortality forecasting has become an increasingly important area in actuarial science and demography, as a means to avoid long-run divergence in mortality projections. This paper aims to establish a unified state-space Bayesian framework to model, estimate, and forecast mortality rates in a multi-population context. In this regard, we reformulate the augmented common factor model to account for structural/trend changes in the mortality indexes. We conduct a Bayesian analysis to make inferences and generate forecasts so that process and parameter uncertainties can be considered simultaneously and appropriately. We illustrate the efficiency of our methodology through two case studies. Both point and probabilistic forecast evaluations are considered in the empirical analysis. The derived results support the fact that the incorporation of stochastic drifts mitigates the impact of the structural changes in the time indexes on mortality projections.

19.
This paper considers nonparametric and semiparametric regression models subject to a monotonicity constraint. We use bagging as an alternative approach to Hall and Huang (2001). Asymptotic properties of our proposed estimators and forecasts are established. Monte Carlo simulation is conducted to show their finite sample performance. An application to predicting the equity premium is provided for illustration. We introduce a new forecasting evaluation criterion based on second-order stochastic dominance in the size of forecast errors and compare models over different sizes of forecast errors. Imposing the monotonicity constraint can mitigate the chance of making large forecast errors.
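One simple way to combine bagging with a monotonicity constraint (not necessarily the authors' estimator) is to bag isotonic regressions; the data-generating process and number of bootstrap replicates below are illustrative assumptions.

```python
# Minimal sketch: fit an isotonic (monotone) regression on each bootstrap
# resample and average the predictions. The average of monotone fits is itself
# monotone, so the bagged estimate still respects the constraint.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 200))
y = np.log1p(x) + rng.normal(0, 0.3, 200)      # monotone signal plus noise
x_grid = np.linspace(0, 10, 50)

B = 100
preds = np.empty((B, len(x_grid)))
for b in range(B):
    idx = rng.integers(0, len(x), len(x))      # bootstrap resample
    iso = IsotonicRegression(out_of_bounds="clip").fit(x[idx], y[idx])
    preds[b] = iso.predict(x_grid)

bagged = preds.mean(axis=0)                    # bagged, monotonicity-respecting estimate
print(bagged[:5].round(3))
```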

20.
