Similar Documents
20 similar documents found.
1.
Stochastic demographic forecasting
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary....Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."

2.
This discussion of modeling focuses on the difficulties in long-term, time-series forecasting of US fertility. Four possibilities are suggested. One difficulty with the traditional approach of using high or low bounds on fertility and mortality is that forecast errors are perfectly correlated over time, which means there is no cancellation of errors over time. The width of future fertility intervals first increases, then stabilizes, and then decreases instead of remaining stable. This occurs because the number of terms being averaged increases with horizon length. Alho and Spencer attempted to reduce these errors in time series. Other difficulties are the idiosyncratic behavior of age-specific fertility over time, biological bounds for the total fertility rate (TFR) of 16 and zero, the integration of knowledge about fertility behavior that narrows the bounds, the implausibility of some probability outcomes of stochastic models with a normally distributed error term, the small relative change in TFR between years, a US fertility cycle of about 40 years, the limited value of extrapolating past trends in child and infant mortality, and the unlikelihood of reversals in mortality and contraceptive use trends. Another problem is the unsuitability of long-term forecasts. New methods include a model which estimates a one-parameter family of fertility schedules and then forecasts that single parameter. Another method is a logistic transformation to account for prior information on the bounds on fertility estimates; this method is similar to Bayesian methods for ARMA models developed by Monahan. Models include information on the ultimate level of fertility and assume that the equilibrium level is a stochastic process trending over time. The horizon forecast method is preferred unless the effects of the outliers are known. Estimates of fertility are presented for the equilibrium-constrained and logistic-transformed models.
Forecasts of age-specific fertility rates can be calculated from forecasts of the fertility index (a single time-varying parameter). The model of fertility fits poorly at older ages but captures some of the wide swings in the historical pattern. Age variations are not accounted for very well. Long-term forecasts tell a great deal about the uncertainty of forecast errors. Estimates are too sensitive to model specification for accuracy and ignore the biological and socioeconomic context.
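The logistic transformation for bounded fertility forecasts mentioned above can be sketched as follows. This is a minimal illustration, not the authors' fitted model: the AR(1) persistence, the sample TFR series, and the use of the cited biological bounds of 0 and 16 are assumptions for demonstration only.

```python
import numpy as np

LOW, HIGH = 0.0, 16.0  # biological bounds on the TFR cited in the abstract

def to_logit(tfr):
    """Map a TFR series from (LOW, HIGH) onto the unbounded logit scale."""
    p = (tfr - LOW) / (HIGH - LOW)
    return np.log(p / (1 - p))

def from_logit(z):
    """Map logit-scale values back into the (LOW, HIGH) interval."""
    return LOW + (HIGH - LOW) / (1 + np.exp(-z))

def forecast_tfr(tfr, horizon, phi=0.9):
    """Mean-reverting AR(1) point forecast on the logit scale.

    phi is an assumed persistence parameter, not an estimate from the paper.
    """
    z = to_logit(np.asarray(tfr, dtype=float))
    mu = z.mean()
    path = []
    last = z[-1]
    for _ in range(horizon):
        last = mu + phi * (last - mu)  # shrink toward the historical mean
        path.append(last)
    return from_logit(np.array(path))

history = [3.6, 3.4, 2.9, 2.5, 2.0, 1.8, 1.9, 2.0]  # illustrative TFR series
fc = forecast_tfr(history, horizon=50)
```

Because the forecast is produced on the transformed scale and mapped back through the logistic function, every forecast value respects the bounds by construction, which is the point of the transformation.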

3.
The accuracy of population forecasts depends in part upon the method chosen for forecasting the vital rates of fertility, mortality, and migration. Stochastic propagation-of-error calculations in demographic forecasting are difficult to carry out precisely. This paper discusses this obstacle in stochastic cohort-component population forecasts. The uncertainty of forecasts is due to uncertain estimates of the jump-off population and to errors in the forecasts of the vital rates. Empirically based estimates of each error source are presented and propagated through a simplified analytical model of population growth that allows assessment of the role of each component in the total error. Numerical estimates are based on the errors of an actual vector ARIMA forecast of the US female population. These results broadly agree with those of the analytical model. This work shows the uncertainty in the fertility forecasts to be so much higher than that in the other sources that the latter can be ignored in the propagation-of-error calculations for those cohorts that are born after the jump-off year of the forecast. A methodology is therefore presented which greatly simplifies the propagation-of-error calculations. It is noted, however, that the uncertainty of the jump-off population, migration, and mortality must still be considered in the propagation of error for those alive at the jump-off time of the forecast.
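The central simplification described in this abstract, that fertility uncertainty dominates the other error sources for cohorts born after the jump-off year, can be illustrated with a toy Monte Carlo. The relative error magnitudes below are invented for illustration and are not the paper's empirical estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulation draws

# Assumed coefficients of variation: jump-off counts are known quite well,
# while the fertility forecast carries much larger relative uncertainty.
cv_jumpoff, cv_fertility = 0.01, 0.20

women = 1_000_000 * (1 + cv_jumpoff * rng.standard_normal(N))  # jump-off count
rate = 0.06 * (1 + cv_fertility * rng.standard_normal(N))       # fertility rate

# Full propagation: births inherit both error sources.
births = women * rate
full_cv = births.std() / births.mean()

# Simplified propagation: treat the jump-off population as known exactly.
births_simple = 1_000_000 * rate
simple_cv = births_simple.std() / births_simple.mean()
```

When the fertility error dwarfs the jump-off error, the two coefficients of variation are nearly identical, which is why the simplified calculation is adequate for post-jump-off cohorts.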

4.
Frequent updates of planning forecasts and scheduling would permit a company to head off impending layoffs, as well as to take advantage of changes in sales forecasts to eliminate or reduce over- and under-shoot in manning schedules. With the computer to minimize time and cost, such updates are practical.

5.
Mortality forecasting has crucial implications for insurance and pension policies. A large amount of literature has proposed models to forecast mortality using cross-sectional (period) data instead of longitudinal (cohort) data. As a consequence, decisions are generally based on period life tables and summary measures such as period life expectancy, which reflect hypothetical mortality rather than the mortality actually experienced by a cohort. This study introduces a novel method to forecast cohort mortality and the cohort life expectancy of non-extinct cohorts. The intent is to complete the mortality profile of cohorts born up to 1960. The proposed method is based on the penalized composite link model for ungrouping data. The performance of the method is investigated using cohort mortality data retrieved from the Human Mortality Database for England & Wales, Sweden, and Switzerland for male and female populations.

6.
This paper considers forecasts with distribution functions that may vary through time. The forecast is achieved by time varying combinations of individual forecasts. We derive theoretical worst case bounds for general algorithms based on multiplicative updates of the combination weights. The bounds are useful for studying properties of forecast combinations when data are non-stationary and there is no unique best model.
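A minimal sketch of a multiplicative-update forecast combination in the spirit of the algorithms analyzed above; the squared-error loss and the learning rate `eta` are illustrative assumptions, not the paper's exact scheme or bounds.

```python
import math

def combine_forecasts(forecast_seqs, outcomes, eta=0.5):
    """Time-varying combination of expert forecasts via multiplicative updates.

    forecast_seqs: list of per-expert forecast sequences.
    outcomes: realized values, revealed one step at a time.
    After each step, an expert's weight is multiplied by exp(-eta * loss),
    so experts with large squared error lose influence exponentially fast.
    """
    n = len(forecast_seqs)
    weights = [1.0 / n] * n  # start from the uniform combination
    combined = []
    for t, y in enumerate(outcomes):
        preds = [seq[t] for seq in forecast_seqs]
        combined.append(sum(w * p for w, p in zip(weights, preds)))
        # Multiplicative update by squared loss, then renormalize.
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return combined, weights

experts = [[1.0] * 10, [0.0] * 10]  # one accurate expert, one poor expert
truth = [1.0] * 10
combined, final_w = combine_forecasts(experts, truth)
```

After ten rounds nearly all weight sits on the accurate expert, so the combined forecast tracks the better model without ever having to identify it in advance, which is the appeal of this family of algorithms under non-stationarity.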

7.
This paper studies inflation forecasting based on the Bayesian learning algorithm which simultaneously learns about parameters and state variables. The Bayesian learning method updates posterior beliefs with accumulating information from inflation and disagreement about expected inflation from the Survey of Professional Forecasters (SPF). The empirical results show that Bayesian learning helps refine inflation forecasts at all horizons over time. Incorporating a Student’s t innovation improves the accuracy of long-term inflation forecasts. Including disagreement has an effect on refining short-term inflation density forecasts. Furthermore, there is strong evidence supporting a positive correlation between disagreement and trend inflation uncertainty. Our findings are helpful for policymakers when they forecast the future and make forward-looking decisions.

8.
Problems concerning the development of long-term forecasts of age variations in the labor supply are discussed. The author examines problems related to using the life cycle theory of labor supply to analyze such variations. An alternative theory, which suggests that the labor supply of younger and older groups is influenced by subsidies from prime-aged groups, is proposed and tested using U.S. data for the last 60 years.

9.
Counternarcotics interdiction efforts have traditionally relied on historically determined sorting criteria or “best guess” to find and classify suspected smuggling traffic. We present a more quantitative approach which incorporates customized database applications, graphics software and statistical modeling techniques to develop forecasting and classification models. Preliminary results show that statistical methodology can improve interdiction rates and reduce forecast error. The idea of predictive modeling is thus gaining support in the counterdrug community. The problem is divided into sea, air and land forecasting, only part of which will be addressed here. The maritime problem is solved using multiple regression in lieu of multivariate time series. This model predicts illegal boat counts by behavior and geographic region. We developed support software to present the forecasts and to automate the process of performing periodic model updates. During this period, the model was in use at Coast Guard Headquarters. Because of the deterrence provided by improved interdiction, the vessel seizure rate declined from one every 36 hours to one every six months. Due in part to the success of the sea model, the maritime movement of marijuana has ceased to be a major threat. The air problem is more complex, and required us to locally design data collection and display software. Intelligence analysts are using a customized relational database application with a map overlay to perform visual pattern recognition of smuggling routes. We are solving the modeling portion of the air problem using multiple regression for regional forecasts of traffic density, and discriminant analysis to develop tactical models that classify “good guys” and “bad guys”. The air models are still under development, but we discuss some modeling considerations and preliminary results. The land problem is even more difficult, and data collection is still in progress.

10.
We consider the dynamic factor model and show how smoothness restrictions can be imposed on factor loadings by using cubic spline functions. We develop statistical procedures based on Wald, Lagrange multiplier and likelihood ratio tests for this purpose. The methodology is illustrated by analyzing a newly updated monthly time series panel of US term structure of interest rates. Dynamic factor models with and without smooth loadings are compared with dynamic models based on Nelson–Siegel and cubic spline yield curves. We conclude that smoothness restrictions on factor loadings are supported by the interest rate data and can lead to more accurate forecasts. Copyright © 2013 John Wiley & Sons, Ltd.

11.
We consider two criteria for evaluating election forecasts: accuracy (precision) and lead (distance from the event), specifically the trade-off between the two in poll-based forecasts. We evaluate how much “lead” still allows prediction of the election outcome. How much further back can we go, supposing we tolerate a little more error? Our analysis offers estimates of the “optimal” lead time for election forecasts, based on a dataset of over 26,000 vote intention polls from 338 elections in 44 countries between 1942 and 2014. We find that optimization of a forecast is possible, and typically occurs two to three months before the election, but can be influenced by the arrangement of political institutions. To demonstrate how our optimization guidelines perform in practice, we consider recent elections in the UK, the US, and France.

12.
Stochastic methods of multi-state population modeling are less developed than methods for single states for two reasons. First, the structure of a multi-state population is inherently more complex than that of a single state because of state-to-state transitions. Second, estimates of cross-state correlations of the vital processes are a largely uncharted territory. Unlike multi-state lifetable theory, in forecasting applications the role of directed flows from state to state is often less important than the overall coherence of the assumptions concerning the vital processes. This is the case in the context of the European Union. Thus, a simplified approach is feasible, in which migration is represented by state-specific net numbers of migrants. This allows the use of existing single-state software, when simulations are suitably organized, in a multi-state setting. To address the second problem, we provide empirical estimates of cross-country covariances in the forecast uncertainty of fertility, mortality, and net migration. Together with point forecasts of these parameters that are coherent across countries, this produces coherent forecasts for aggregates of countries. The finding is that models for intermediate correlations are necessary for a proper accounting of forecast uncertainty at the aggregate level, in this case the European Union.
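Why intermediate cross-country correlations matter for aggregate uncertainty can be illustrated numerically: the standard deviation of an aggregate error grows with the common correlation among country-level errors. The number of countries, error scale, and correlation values below are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate_sd(n_countries, rho, sd=1.0, sims=200_000):
    """SD of the summed error when country errors share pairwise correlation rho."""
    cov = np.full((n_countries, n_countries), rho * sd ** 2)
    np.fill_diagonal(cov, sd ** 2)
    draws = rng.multivariate_normal(np.zeros(n_countries), cov, size=sims)
    return draws.sum(axis=1).std()

independent = aggregate_sd(10, rho=0.0)   # errors cancel: sd grows like sqrt(n)
intermediate = aggregate_sd(10, rho=0.5)  # partial cancellation
perfect = aggregate_sd(10, rho=1.0)       # no cancellation: sd grows like n
```

Assuming independence (rho = 0) understates aggregate uncertainty substantially relative to the intermediate case, while assuming perfect correlation overstates it, which is the paper's argument for modeling the correlations explicitly.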

13.
This paper examines the effectiveness of three commonly practiced methods used to resolve uncertainty in multi-stage manufacturing systems: safety stock under regenerative material requirements planning (MRP) updates, safety capacity under regenerative MRP updates, and net change MRP updates, i.e., continuous rather than regenerative (periodic) updates. The use of safety stock reflects a decision to permanently store materials and labor capacity in the form of inventory. When unexpected shortages arise between regenerative MRP updates, safety stock may be depleted but it will be replenished in subsequent periods. The second method, safety capacity, overstates the MRP capacity requirements at the individual work centers by a prescribed amount of direct labor. Safety capacity either will be allocated to unanticipated requirements which arise between MRP regenerations or will be spent as idle time. The third method, net change, offers a means of dealing with uncertainty by rescheduling instead of buffering, provided there is sufficient lead time to execute the changes in the material and capacity plans.

Much of the inventory management research has addressed the use of safety stock as a buffer against uncertainty for a single product and manufacturing stage. However, there has been no work which evaluates the performance of safety stock relative to other resolution methods such as safety capacity or more frequent planning revisions. In this paper, a simulation model of a multi-stage (fabrication and assembly) process is used to characterize the behavior of the three resolution methods when errors are present in the demand and time standard estimates. Four end products are completed at an assembly center and, altogether, the end products require the fabrication of twelve component parts in a job shop which contains eight work centers.

In addition to the examination of the three methods under different sources and levels of uncertainty, different levels of bill-of-material commonality, MRP planned lead times, MRP lot sizes, equipment set-up times and priority dispatching rules are considered in the experimental design. The simulation results indicate that the choice among methods depends upon the source of uncertainty, and costs related to regular-time employment, employment changes, equipment set-ups and materials investment. For example, the choice between safety stock and safety capacity represents a compromise between materials investment and regular-time employment costs. The net change method is not designed to deal effectively with time standard errors, although its use may be preferred over the two buffering alternatives when errors are present in the demand forecasts and when the costs of employment changes and equipment set-ups are low. The simulation results also indicate that regardless of the method used, efforts to improve forecasts of demands or processing times may be justified by corresponding improvements in manufacturing performance.
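The buffering role of safety stock against demand forecast error can be shown with a single-stage toy simulation, far simpler than the paper's multi-stage fabrication-and-assembly model; the demand distribution and buffer size here are illustrative assumptions.

```python
import random

random.seed(42)

def shortage_rate(safety_stock, periods=5_000, mean_demand=100, sd=15):
    """Fraction of periods with a shortage when the MRP order equals the
    demand forecast and on-hand inventory each period is forecast + buffer."""
    shortages = 0
    for _ in range(periods):
        demand = random.gauss(mean_demand, sd)
        available = mean_demand + safety_stock  # forecast-driven order + buffer
        if demand > available:
            shortages += 1
    return shortages / periods

no_buffer = shortage_rate(safety_stock=0)   # forecast alone: shortages common
buffered = shortage_rate(safety_stock=30)   # two standard deviations of buffer
```

With no buffer, roughly half of all periods see a shortage (demand exceeds its own forecast half the time); a buffer of two standard deviations cuts this to a few percent, at the cost of permanently carried inventory, which is exactly the materials-investment trade-off the abstract describes.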

14.
The planning of municipal service delivery systems requires accurate forecasts of demand, and particularly of the effects the quality of service delivery has on demand. A methodology for this problem should meet three criteria if it is to be useful for municipal planning: it must be low-cost and use generally available data; it must be based on user behavior, so that the effects of policy changes can be correctly attributed; and it must allow testing of the transferability of the results, since this is required for general forecasting use. This paper develops such a methodology, based on econometric analysis of data from a number of service areas within a number of regions, forming a double cross-section. Empirical tests of the methodology were performed for two local government services where the effect of service quality on demand is important: sewer and highway construction, which have been hypothesized to affect the patterns of development within regions; and solid waste collection, where the level of service provided affects how much waste enters the collection system and how much is littered, burned or recycled. The two case studies and other analyses suggest that the methodology is a useful tool for testing whether policy changes have an effect on the demand for service, but not for accurate demand forecasting. Thus, these simple models are relevant for the role of screening the effect of policy changes, but more detailed and localized approaches are necessary for system design.

15.
There is general agreement in many forecasting contexts that combining individual predictions leads to better final forecasts. However, the relative error reduction in a combined forecast depends upon the extent to which the component forecasts contain unique/independent information. Unfortunately, obtaining independent predictions is difficult in many situations, as these forecasts may be based on similar statistical models and/or overlapping information. The current study addresses this problem by incorporating a measure of coherence into an analytic evaluation framework so that the degree of independence between sets of forecasts can be identified easily. The framework also decomposes the performance and coherence measures in order to illustrate the underlying aspects that are responsible for error reduction. The framework is demonstrated using UK retail prices index inflation forecasts for the period 1998–2014, and implications for forecast users are discussed.
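The dependence of error reduction on forecast independence noted above follows from a standard variance identity: for an equally weighted average of two unbiased forecasts with error standard deviations σ₁ and σ₂ and error correlation ρ, the combined error variance is (σ₁² + σ₂² + 2ρσ₁σ₂)/4. A minimal sketch of this identity (not the paper's coherence measure itself):

```python
def combined_error_variance(sigma1, sigma2, rho):
    """Error variance of a simple average of two unbiased forecasts
    whose errors have SDs sigma1, sigma2 and correlation rho."""
    return 0.25 * (sigma1 ** 2 + sigma2 ** 2 + 2 * rho * sigma1 * sigma2)

independent = combined_error_variance(1.0, 1.0, rho=0.0)  # halves the variance
coherent = combined_error_variance(1.0, 1.0, rho=0.9)     # almost no gain
```

With independent components the combination halves the individual error variance (1.0 to 0.5), but with highly coherent components (ρ = 0.9) the variance only falls to 0.95, which is why measuring coherence between forecast sets matters for users.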

16.
"This paper presents a stochastic version of the demographic cohort-component method of forecasting future population. In this model the sizes of future age-sex groups are non-linear functions of random future vital rates. An approximation to their joint distribution can be obtained using linear approximations or simulation. A stochastic formulation points to the need for new empirical work on both the autocorrelations and the cross-correlations of the vital rates. Problems of forecasting declining mortality and fluctuating fertility are contrasted. A volatility measure for fertility is presented. The model can be used to calculate approximate prediction intervals for births using data from deterministic cohort-component forecasts. The paper compares the use of expert opinion in mortality forecasting with simple extrapolation techniques to see how useful each approach has been in the past. Data from the United States suggest that expert opinion may have caused systematic bias in the forecasts."
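The idea of obtaining approximate prediction intervals for births by simulating random vital rates can be sketched with a two-age-group toy model; all rates, the noise level, and the age structure below are illustrative assumptions, not the paper's data or its linear-approximation method.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy cohort-component setup: two 30-year female age groups.
fert = np.array([0.9, 0.3])  # assumed expected births per woman per step
surv = 0.98                  # assumed survival into the older group
pop0 = np.array([1.0, 1.0])  # jump-off population, in millions

def birth_interval(sims=20_000, steps=3, cv=0.15):
    """95% prediction interval for births after `steps` projection steps
    when the fertility rates are perturbed by multiplicative noise."""
    final = np.empty(sims)
    for s in range(sims):
        pop = pop0.copy()
        for _ in range(steps):
            rates = fert * (1.0 + cv * rng.standard_normal(2))  # random rates
            births = float(rates @ pop)                          # new cohort
            pop = np.array([births, surv * pop[0]])              # age forward
        final[s] = births
    return np.percentile(final, [2.5, 97.5])

lo, hi = birth_interval()
```

The interval widens with the projection horizon because fertility noise compounds through the age structure, which is why the abstract contrasts fluctuating fertility with the smoother problem of declining mortality.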

17.
We study socially vs individually optimal life cycle allocations of consumption and health, when individual health care curbs own mortality but also has a spillover effect on other persons’ survival. Such spillovers arise, for instance, when health care activity at aggregate level triggers improvements in treatment through learning-by-doing (positive externality) or a deterioration in the quality of care through congestion (negative externality). We combine an age-structured optimal control model at population level with a conventional life cycle model to derive the social and private value of life. We then examine how individual incentives deviate from social incentives and how they can be aligned by way of a transfer scheme. The age-patterns of socially and individually optimal health expenditures and the transfer rate are derived. Numerical analysis illustrates the working of our model.

18.
Research on the current status of banana straw recycling and utilization in China
朱晓闯, 张喜瑞, 李粤, 梁栋. 《价值工程》 (Value Engineering), 2011, 30(34): 273-274
The banana is a tropical fruit that originated in Southeast Asia and is a major internationally traded fruit, characterized by a short growth cycle and high yield. Each year, the harvest of large quantities of bananas also produces a nearly equal quantity of agricultural waste such as banana straw. How to recycle and utilize banana straw has therefore become a widely discussed scientific topic. Drawing on the current state of research, this paper reviews the status of banana straw recycling and utilization in China.

19.
This study investigates how the revision frequency of earnings forecasts affects firm characteristics. Previous studies generally focus on the number of analysts following a firm to measure a firm's information environment. The frequency with which news is updated is often defined as an analyst's effort. Analysts provide more information to investors if they update news more frequently. This study examines whether the frequency of information updating for a particular firm affects the firm's performance. We apply three proxies for firm performance: stock liquidity, the cost of equity capital, and firm value. Our findings indicate that the analysts’ effort as measured by the frequency of news updating is effective in providing additional power beyond the number of analysts to represent the information environment of a firm. Therefore, this study suggests that combining both the number of analysts following a firm and the frequency of news updating can be a better proxy for assessing a firm's information environment.

20.
Regular business survey data are published as percentages of firms predicting higher, equal or lower values of some reference variable. Time series of such percentages do not fit production data too well. Univariate models often produce forecasts which are just as accurate. Still, surveys contain anticipative judgement which, when combined with univariate modeling and proper filtering, may produce a good indicator for business cycle turning points. The way survey data are transformed so as to fit statistics on production seems not to be of much importance. A case study of the Finnish forest industry is offered as an example.
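The balance (diffusion) statistic commonly computed from such survey percentages, and a naive local-extremum flag on it, can be sketched as follows. The survey readings are invented for illustration, and the paper's actual filtering is more elaborate than this.

```python
def balance(pct_higher, pct_lower):
    """Balance (diffusion) statistic: % predicting higher minus % predicting lower."""
    return pct_higher - pct_lower

def turning_points(series):
    """Flag interior local maxima/minima of a balance series as candidate
    business-cycle turning points (a crude stand-in for proper filtering)."""
    flags = []
    for i in range(1, len(series) - 1):
        if series[i] > series[i - 1] and series[i] > series[i + 1]:
            flags.append((i, "peak"))
        elif series[i] < series[i - 1] and series[i] < series[i + 1]:
            flags.append((i, "trough"))
    return flags

# Illustrative survey readings: % of firms predicting higher vs lower output.
higher = [40, 55, 60, 45, 30, 25, 35, 50]
lower = [30, 20, 15, 25, 40, 50, 35, 20]
bal = [balance(h, l) for h, l in zip(higher, lower)]
tps = turning_points(bal)
```

On these invented readings the balance series peaks at period 2 and troughs at period 5, illustrating how the anticipative content of a survey can mark turning points even when the level of the series fits production data poorly.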

