Similar Documents
20 similar documents found.
1.
This discussion of modeling focuses on the difficulties in long-term, time-series forecasting of US fertility. Four possibilities are suggested. One difficulty with the traditional approach of using high or low bounds on fertility and mortality is that forecast errors are perfectly correlated over time, which means there is no cancellation of errors over time. The width of future fertility intervals first increases, then stabilizes, and then decreases instead of remaining stable. This occurs because the number of terms being averaged increases with horizon length. Alho and Spencer attempted to reduce these errors in time-series forecasts. Other difficulties are the idiosyncratic behavior of age-specific fertility over time, biological bounds for total fertility rates (TFR) of 16 and zero, the integration of knowledge about fertility behavior that narrows the bounds, the unlikelihood of some probability outcomes of stochastic models with a normally distributed error term, the small relative change in TFR between years, a US fertility cycle of about 40 years, the limited value of extrapolating past trends in child and infant mortality, and the unlikelihood of reversals in mortality and contraceptive use trends. Another problem is the unsuitability of long-term forecasts. New methods include a model which estimates a one-parameter family of fertility schedules and then forecasts that single parameter. Another method is a logistic transformation to account for prior information on the bounds on fertility estimates; this method is similar to Bayesian methods for ARMA models developed by Monahan. Models include information on the ultimate level of fertility and assume that the equilibrium level is a stochastic process trending over time. The horizon forecast method is preferred unless the effects of the outliers are known. Estimates of fertility are presented for the equilibrium-constrained and logistic-transformed models. Forecasts of age-specific fertility rates can be calculated from forecasts of the fertility index (a single time-varying parameter). The model of fertility fits poorly at older ages but captures some of the wide swings in the historical pattern. Age variations are not accounted for very well. Long-term forecasts tell a great deal about the uncertainty of forecast errors. Estimates are too sensitive to model specification for accuracy and ignore the biological and socioeconomic context.
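As a rough sketch of the logistic-transformation idea described above (hypothetical TFR values, bounds of 0 and 16 assumed, and a generic ARIMA standing in for the authors' model), forecasting on the logit scale and back-transforming keeps both point forecasts and intervals inside the biological bounds:

# Sketch only: a logistic transform keeps total fertility rate (TFR) forecasts
# inside assumed biological bounds (0, 16); the data below are hypothetical.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

LOWER, UPPER = 0.0, 16.0

def to_logit(tfr):
    p = (tfr - LOWER) / (UPPER - LOWER)
    return np.log(p / (1.0 - p))

def from_logit(z):
    return LOWER + (UPPER - LOWER) / (1.0 + np.exp(-z))

tfr = np.array([3.6, 3.5, 3.3, 2.9, 2.5, 2.1, 1.8, 1.8, 1.9, 2.0, 2.05, 2.1])  # hypothetical TFR series

fit = ARIMA(to_logit(tfr), order=(1, 1, 0)).fit()
fc = fit.get_forecast(steps=10)
ci = np.asarray(fc.conf_int())
print(from_logit(fc.predicted_mean))                 # point forecasts, strictly inside (0, 16)
print(from_logit(ci[:, 0]), from_logit(ci[:, 1]))    # interval limits also respect the bounds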

2.
This paper examines the forecasting implications of incorporating policy effects into the structure of unconditional time series models. The forecasting model is applied to the Puerto Rican experience with minimum wages from 1953 to 1982. The empirical results suggest that significant disemployment and unemployment followed the imposition of economy-wide minimum wages in 1974. The growth of employment suffered and the aggregate unemployment rate reached an unprecedented level. Multivariate time-series models for the employment-population ratio and the unemployment rate capture these effects well. They also forecast more accurately than univariate and intervention models over the ex post period, 1983–1984. It is argued that models that combine subject-matter-specific structure within a dynamic time-series framework greatly help to satisfy demands for theoretical consistency and forecast accuracy. Multivariate time-series models play an important complementary role in the structural modelling of economic policy analysis. This is particularly so when limitations of either data or theory preclude complete specification of structural equations.
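A minimal sketch of this kind of specification (hypothetical data, and a single-equation SARIMAX shortcut rather than the paper's multivariate models): the 1974 economy-wide minimum wage enters as a step dummy, and the fitted model forecasts the ex post period with the policy still in force.

# Sketch: step intervention for the 1974 minimum wage in a model for the
# employment-population ratio; the series below is hypothetical, not the paper's data.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(13)
years = np.arange(1953, 1983)                          # estimation sample
step_1974 = (years >= 1974).astype(float)              # permanent policy intervention
epr = 0.55 - 0.03 * step_1974 + rng.normal(0, 0.01, years.size)   # hypothetical ratio

fit = SARIMAX(epr, exog=step_1974, order=(1, 0, 0), trend="c").fit(disp=False)
future_exog = np.ones((2, 1))                          # 1983 and 1984, policy still in force
print(fit.get_forecast(steps=2, exog=future_exog).predicted_mean)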

3.
In this paper a VAR model is employed to construct a measure of the conditional expectations of the future yen/dollar spot rate. This measure allows us to examine the dynamics of an ex-ante time-series for the risk premium in the market. The VAR model produces ‘better’ forecasts than the survey responses for turbulent periods such as 1981–1982 and 1984–1985. The VAR-generated expectations are then used to construct a risk premium time-series. This risk premium series seems to be more reliable than the ones obtained using either survey data on expectations of the future spot exchange rate or the ex-post realized spot exchange rate. Tests on the risk premium series suggest that a risk premium was present, but that it was virtually constant throughout the sample. The conditional variance of the risk premium changed over time, but its unconditional distribution seemed stable across subsamples. Despite these features, the volatility of the series was substantial and varied considerably throughout the sample.
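A rough sketch of the mechanics (hypothetical data; the paper's VAR, variables and sample differ): the VAR's one-step-ahead forecast serves as the conditional expectation of the spot-rate change, and the ex-ante risk premium is the forward premium minus that expectation.

# Sketch: ex-ante risk premium = forward premium minus VAR-expected depreciation.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 200
ds = np.zeros(T)                                  # log spot-rate changes (hypothetical)
fp = np.zeros(T)                                  # forward premium (hypothetical)
for t in range(1, T):
    fp[t] = 0.6 * fp[t - 1] + rng.normal(0, 0.01)
    ds[t] = 0.4 * ds[t - 1] + 0.3 * fp[t - 1] + rng.normal(0, 0.02)
data = np.column_stack([ds, fp])

k = 2
res = VAR(data).fit(k)
expected_ds = np.array([res.forecast(data[t - k:t], steps=1)[0, 0] for t in range(k, T)])
risk_premium = fp[k:] - expected_ds               # ex-ante risk premium series
print(risk_premium.mean(), risk_premium.std())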

4.
Benchmarking by State Space Models
We have a monthly series of observations which are obtained from sample surveys and are therefore subject to survey errors. We also have a series of annual values, called benchmarks, which are either exact or are substantially more accurate than the survey observations; these can be either annual totals or accurate values of the underlying variable at a particular month. The benchmarking problem is the problem of adjusting the monthly series to be consistent with the annual values. We provide two solutions to this problem. The first of these is a two-stage method in which we first fit a state space model to the monthly data alone and then combine the results obtained at this stage with the benchmark data. In the second solution we construct a single series from the monthly and annual values together and fit a state space model to this series in a single stage. The treatment is extended to series which behave multiplicatively. The methods are illustrated by applying them to Canadian retail sales series.
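As a numerical illustration of the benchmarking constraint itself (the paper enforces it properly through state space models; this is only a naive pro-rata adjustment on hypothetical numbers), the monthly survey values can be scaled so that each year sums to its benchmark:

# Naive pro-rata benchmarking sketch: scale each year's monthly survey values
# so that they sum to the accurate annual benchmark. Hypothetical data.
import numpy as np

rng = np.random.default_rng(2)
months = 36                                         # three years of monthly survey data
survey = 100 + np.cumsum(rng.normal(0, 2, months)) + rng.normal(0, 5, months)
benchmarks = np.array([1260.0, 1310.0, 1295.0])     # accurate annual totals (hypothetical)

adjusted = survey.copy()
for year in range(3):
    idx = slice(12 * year, 12 * (year + 1))
    adjusted[idx] *= benchmarks[year] / survey[idx].sum()

print(adjusted.reshape(3, 12).sum(axis=1))          # annual sums now match the benchmarks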

5.
Forecasting aggregates using panels of nonlinear time series
Macroeconomic time series such as total unemployment or total industrial production concern data which are aggregated across regions, sectors, or age categories. In this paper we examine whether forecasts for these aggregates can be improved by considering panel models for the disaggregate series. As many macroeconomic variables have nonlinear properties, we specifically focus on panels of nonlinear time series. We discuss the representation of such models, parameter estimation and a method for generating forecasts. We illustrate the usefulness of our approach for simulated data and for the US coincident index, making use of state-specific component series.
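A toy comparison of the two routes discussed above (hypothetical regional data, and plain AR(1) fits rather than the paper's nonlinear panel models): forecast the aggregate directly, or sum forecasts from models fitted to the disaggregate series.

# Sketch: direct forecast of the aggregate vs. bottom-up sum of regional forecasts.
import numpy as np

rng = np.random.default_rng(3)
T, n_regions = 120, 5
panel = np.empty((T, n_regions))
for j in range(n_regions):                          # hypothetical regional series
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.5 + (0.6 + 0.05 * j) * x[t - 1] + rng.normal()
    panel[:, j] = x

def ar1_forecast(y):
    """One-step forecast from an AR(1) fitted by least squares."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c + phi * y[-1]

aggregate = panel.sum(axis=1)
direct = ar1_forecast(aggregate)                    # model the aggregate itself
bottom_up = sum(ar1_forecast(panel[:, j]) for j in range(n_regions))
print(direct, bottom_up)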

6.
Intervention analysis has recently been the subject of several studies, mainly because real time series present a wide variety of phenomena that are caused by external and/or unexpected events. In this work, transfer functions are used to model different forms of intervention in the mean level of a time series. This is performed in the framework of state-space models. Two canonical forms of intervention are considered: pulse and step functions. Static and dynamic explanation of the intervention effects, normal and non-normal time series, detection of interventions, and the effect of outliers are also discussed. The performance of the two approaches is compared in terms of point and interval estimation through Monte Carlo simulation. The methodology was applied to real time series and showed satisfactory results for the intervention models used.
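The two canonical intervention forms can be sketched as regressors in a simple time-series model (hypothetical data, and a SARIMAX shortcut rather than the state-space treatment used in the paper):

# Sketch: pulse (one-period) and step (permanent) intervention dummies.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
T, t0 = 150, 100
pulse = np.zeros(T)
pulse[t0] = 1.0                                     # transient (pulse) intervention
step = np.zeros(T)
step[t0:] = 1.0                                     # permanent (step) intervention

noise = np.zeros(T)
for t in range(1, T):
    noise[t] = 0.5 * noise[t - 1] + rng.normal()
y = 10 + 5.0 * pulse + 3.0 * step + noise           # hypothetical series with both effects

fit = SARIMAX(y, exog=np.column_stack([pulse, step]), order=(1, 0, 0), trend="c").fit(disp=False)
print(fit.params)                                    # includes the estimated pulse and step effects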

7.
The effects of specific policies and practices regarding employee job security rights on the evaluation of employers were investigated in two contexts. First, an experimental design was used to investigate the effect of explicit at-will and explicit good-cause policies on future job seekers' evaluations of a company's attractiveness and their willingness to sign up for an interview. The results support the argument that the kind of rights employers offer, or expressly deny, can significantly affect recruitment outcomes. Second, the potential role of formal employment at-will agreements as a source of inferences about the employer was investigated using open-ended questions answered by currently employed managers and future job seekers. The results indicate that the use of formal at-will agreements may lead to a variety of negative inferences, giving employers reason to be concerned about the effect of such practices on employee relations.

8.
We analyze some of the difficulties of identifying multivariate time-series models with feedback, when carried through the two-stage procedure as in Haugh-Box (1977). We find that if the procedure is simplified, as in Jenkins (1979), the identified model can be seriously misspecified. When the procedure is applied correctly, it becomes extremely complicated, as in Granger-Newbold (1977). In such cases, there is a serious risk of overparametrization. The complication is mostly caused by the underlying structure of the multivariate model for the univariate innovations which can be considerably more complicated than the multivariate time-series model of interest.

9.
The time-series distributed lag techniques of econometrics can be usefully applied to cross-sectional, spatial and cross-section time-series situations. The application is perfectly natural in cross-section, time-series models when regression coefficients evolve systematically as the cross-section grouping variable changes. The evolution of such coefficients lends itself to polynomial approximation or more general smoothing restrictions. These ideas are not new: Gersovitz and McKinnon (1978) and Trivedi and Lee (1981) provided two of the earliest applications of cross-equation smoothing techniques. However, their applications were in the context of coefficient variation due to seasonal changes, which may account for the non-diffusion of these techniques. The approach here is illustrated in the context of age-specific household formation equations based on census data, using Almon polynomials when the regression coefficients vary systematically by age group. A second application, using spatial data, explains the incidence of crime by region, using polynomial and geometric smoothing to model distance-declining regional effects.
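A minimal sketch of the Almon (polynomial distributed lag) restriction on simulated data: the lag coefficients are constrained to lie on a low-order polynomial in the lag index, so the regression is estimated on a reduced design X @ P (the paper applies the same smoothing idea across age groups and regions).

# Sketch: Almon polynomial restriction on distributed-lag coefficients.
import numpy as np

rng = np.random.default_rng(5)
T, q, degree = 300, 8, 2
x = rng.normal(size=T + q)
beta_true = 0.5 - 0.05 * (np.arange(q + 1) - 4) ** 2          # smooth lag pattern
X = np.column_stack([x[q - i: T + q - i] for i in range(q + 1)])   # lag matrix
y = X @ beta_true + rng.normal(0, 0.5, T)

P = np.vander(np.arange(q + 1), degree + 1, increasing=True)   # polynomial basis
theta, *_ = np.linalg.lstsq(X @ P, y, rcond=None)              # restricted least squares
beta_hat = P @ theta                                           # implied lag coefficients
print(np.round(beta_hat, 2))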

10.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. 
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
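A toy version of the calculation at issue (simulated data; the paper's ignorance priors are different and more involved): under a flat prior on the AR(1) coefficient, and conditioning on the estimated error variance, the posterior is approximately Gaussian around the OLS estimate, and the posterior probability of a unit root can be read off directly.

# Toy flat-prior posterior for an AR(1) coefficient when the true process has a unit root.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
T = 100
y = np.cumsum(rng.normal(size=T))              # true data generating process: random walk

y_lag, y_cur = y[:-1], y[1:]
rho_hat = (y_lag @ y_cur) / (y_lag @ y_lag)    # OLS estimate of the AR coefficient
resid = y_cur - rho_hat * y_lag
se = np.sqrt(resid.var(ddof=1) / (y_lag @ y_lag))

post_prob_unit_root = 1.0 - norm.cdf(1.0, loc=rho_hat, scale=se)
print(rho_hat, post_prob_unit_root)            # flat prior often leaves little mass at rho >= 1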

11.
In a regression context, consider the difference in expected outcome associated with a particular difference in one of the input variables. If the true regression relationship involves interactions, then this predictive comparison can depend on the values of the other input variables. Therefore, one may wish to consider an average predictive comparison as a target of inference, where the averaging is with respect to the population distribution of the input variables. We consider inferences about such targets, with emphasis on inferential performance when the regression model is misspecified. Particularly, in light of the difficulties in dealing with interaction terms in regression models, we examine inferences about average predictive comparisons when additive models are fitted to relationships truly involving pairwise interaction terms. We identify some circumstances where such inferences are consistent despite the model misspecification, notably when the input variables are independent, or have a multivariate normal distribution.
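A small sketch of the quantity and the consistency claim (hypothetical data with a true pairwise interaction, deliberately fitted with an additive linear model): the average predictive comparison averages the fitted difference over the observed distribution of the other input.

# Sketch: average predictive comparison under a misspecified additive fit.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(0, 1, n)   # true interaction

X = np.column_stack([np.ones(n), x1, x2])            # additive (misspecified) model
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(a, b):
    return beta[0] + beta[1] * a + beta[2] * b

delta = 1.0                                           # compare x1 with x1 + delta
apc = np.mean(predict(x1 + delta, x2) - predict(x1, x2))
print(apc)   # with independent inputs this stays close to the true average effect, 0.5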

12.
A Bayesian hierarchical mixed model is developed for multiple comparisons under a simple order restriction. The model facilitates inferences on the successive differences of the population means, for which we choose independent prior distributions that are mixtures of an exponential distribution and a discrete distribution with its entire mass at zero. We employ Markov chain Monte Carlo (MCMC) techniques to obtain parameter estimates and estimates of the posterior probabilities that any two of the means are equal. The latter estimates allow one both to determine if any two means are significantly different and to test the homogeneity of all of the means. We investigate the performance of the model-based inferences with simulated data sets, focusing on parameter estimation and successive-mean comparisons using posterior probabilities. We then illustrate the utility of the model in an application based on data from a study designed to reduce blood lead concentrations in children with elevated levels. Our results show that the proposed hierarchical model can effectively unify parameter estimation, tests of hypotheses and multiple comparisons in one setting.

13.
石梓涵 《价值工程》2011,30(3):322-322
Per capita GDP is a key indicator of a country's or region's level of economic development and overall economic strength. Against this background, this paper collects Chinese per capita GDP time-series data for 1978-2008, uses SPSS to analyse the data and build a time-series model, and applies the model to forecast per capita GDP for 2009 and 2010, which is of considerable significance for formulating corresponding macroeconomic regulation policies.
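An analogous sketch in Python rather than SPSS (placeholder figures, not the article's data): fit a simple model to annual per capita GDP growth and extend the series two years ahead.

# Sketch: forecast per capita GDP two years ahead from an AR(1) model for log growth.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
years = np.arange(1978, 2009)
gdp_pc = 380 * np.exp(0.13 * (years - 1978) + rng.normal(0, 0.03, years.size))  # hypothetical series

growth = np.diff(np.log(gdp_pc))                 # annual log growth rates
fit = ARIMA(growth, order=(1, 0, 0)).fit()       # AR(1) with a constant
g_fc = fit.forecast(steps=2)
forecast_next_two_years = gdp_pc[-1] * np.exp(np.cumsum(g_fc))
print(forecast_next_two_years)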

14.
This paper discusses some simple practical advantages of Markov chain Monte Carlo (MCMC) methods in estimating entry and exit transition probabilities from repeated independent surveys. Simulated data are used to illustrate the usefulness of MCMC methods when the likelihood function has multiple local maxima. Actual data on the evaluation of an HIV prevention intervention program among drug users are used to demonstrate the advantage of using prior information to enhance parameter identification. The latter example also demonstrates an important strength of the MCMC approach, namely the ability to make inferences on arbitrary functions of model parameters.
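A hedged sketch of the general approach (a deliberately simplified model, not the paper's): a random-walk Metropolis sampler for entry and exit transition probabilities (p01, p10) using only repeated cross-sectional prevalence counts, with priors that can be tightened to show how prior information aids identification.

# Sketch: MCMC for entry/exit transition probabilities from repeated surveys.
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(9)
T, n = 10, 500
p01_true, p10_true = 0.15, 0.30
pi = [0.2]                                        # initial prevalence, assumed known
for _ in range(T - 1):
    pi.append(pi[-1] * (1 - p10_true) + (1 - pi[-1]) * p01_true)
y = rng.binomial(n, pi)                           # hypothetical survey counts

def log_post(p01, p10):
    if not (0 < p01 < 1 and 0 < p10 < 1):
        return -np.inf
    lp = beta.logpdf(p01, 2, 8) + beta.logpdf(p10, 2, 8)   # weakly informative priors
    prev = 0.2
    for t in range(T):
        lp += binom.logpmf(y[t], n, prev)
        prev = prev * (1 - p10) + (1 - prev) * p01
    return lp

draws, cur = [], np.array([0.2, 0.2])
for _ in range(5000):                             # random-walk Metropolis
    prop = cur + rng.normal(0, 0.03, 2)
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*cur):
        cur = prop
    draws.append(cur.copy())
print(np.mean(draws[1000:], axis=0))              # posterior means for (p01, p10)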

15.
Wei Yu  Cuizhen Niu  Wangli Xu 《Metrika》2014,77(5):675-693
In this paper, we use the empirical likelihood method to make inferences for the coefficient difference of a two-sample linear regression model with missing response data. The commonly used empirical likelihood ratio is not concave for this problem, so we append a natural and well-explained condition to the likelihood function and propose three types of restricted empirical likelihood ratios for constructing the confidence region of the parameter in question. It can be demonstrated that all three empirical likelihood ratios have, asymptotically, chi-squared distributions. Simulation studies are carried out to show the effectiveness of the proposed approaches in terms of coverage probability and interval length. A real data set is analysed with our methods as an example.

16.
The stylized facts of macroeconomic time series can be presented by fitting structural time series models. Within this framework, we analyse the consequences of the widely used detrending technique popularised by Hodrick and Prescott (1980). It is shown that mechanical detrending based on the Hodrick–Prescott filter can lead investigators to report spurious cyclical behaviour, and this point is illustrated with empirical examples. Structural time-series models also allow investigators to deal explicitly with seasonal and irregular movements that may distort estimated cyclical components. Finally, the structural framework provides a basis for exposing the limitations of ARIMA methodology and models based on a deterministic trend with a single break.
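The spurious-cycle point can be illustrated in a few lines (simulated data; lamb=1600 is the conventional quarterly smoothing value): applying the Hodrick-Prescott filter to a pure random walk, which has no genuine cyclical component, still yields a smoothly evolving "cycle".

# Sketch: HP filter applied to a random walk produces a seemingly cyclical component.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(10)
random_walk = np.cumsum(rng.normal(size=200))      # no cycle by construction
cycle, trend = hpfilter(random_walk, lamb=1600)

# the extracted "cycle" is strongly positively autocorrelated, i.e. looks cyclical
print(np.corrcoef(cycle[:-1], cycle[1:])[0, 1])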

17.
Deep neural networks and gradient boosted tree models have swept across the field of machine learning over the past decade, producing across-the-board advances in performance. The ability of these methods to capture feature interactions and nonlinearities makes them exceptionally powerful and, at the same time, prone to overfitting, leakage, and a lack of generalization in domains with target non-stationarity and collinearity, such as time-series forecasting. We offer guidance to address these difficulties and provide a framework that maximizes the chances of predictions that generalize well and deliver state-of-the-art performance. The techniques we offer for cross-validation, augmentation, and parameter tuning have been used to win several major time-series forecasting competitions—including the M5 Forecasting Uncertainty competition and the Kaggle COVID19 Forecasting series—and, with the proper theoretical grounding, constitute the current best practices in time-series forecasting.
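One of the practices described above can be sketched with standard tools (hypothetical data; not the competition pipelines themselves): forward-chaining cross-validation keeps every validation fold strictly after its training data, so lag features cannot leak the target.

# Sketch: expanding-window (forward-chaining) CV for a gradient boosted model on lag features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(11)
T, n_lags = 500, 6
y = np.sin(np.arange(T) / 10.0) + rng.normal(0, 0.2, T)             # hypothetical series
X = np.column_stack([y[i:T - n_lags + i] for i in range(n_lags)])   # lagged features
target = y[n_lags:]

scores = []
for train_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
    model.fit(X[train_idx], target[train_idx])
    scores.append(np.mean((model.predict(X[val_idx]) - target[val_idx]) ** 2))
print(scores)                                                        # out-of-sample MSE per fold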

18.
This paper describes Bayesian techniques for analysing the effects of aggregate shocks on macroeconomic time-series. Rather than calculate point estimates of the response of a time-series to an aggregate shock, we calculate the whole probability density function of the response and use Monte Carlo or Gibbs sampling techniques to evaluate its properties. The proposed techniques impose identification restrictions in a way that includes the uncertainty in these restrictions, and thus are an improvement over traditional approaches that typically use least-squares techniques supplemented by bootstrapping. We apply these techniques in the context of two different models. A key finding is that measures of uncertainty, such as posterior standard deviations, are much larger than their classical counterparts.

19.
We propose a family of regression models to adjust for nonrandom dropouts in the analysis of longitudinal outcomes with fully observed covariates. The approach conceptually focuses on generalized linear models with random effects. A novel formulation of a shared random effects model is presented and shown to provide a dropout selection parameter with a meaningful interpretation. The proposed semiparametric and parametric models are made part of a sensitivity analysis to delineate the range of inferences consistent with observed data. Concerns about model identifiability are addressed by fixing some model parameters to construct functional estimators that are used as the basis of a global sensitivity test for parameter contrasts. Our simulation studies demonstrate a large reduction of bias for the semiparametric model relative to the parametric model at times when the dropout rate is high or the dropout model is misspecified. The methodology's practical utility is illustrated in a data analysis.

20.
The effect of differencing all of the variables in a properly specified regression equation is examined. Excessive use of the difference transformation induces a non-invertible moving average (MA) process in the disturbances of the transformed regression. Monte Carlo techniques are used to examine the effects of overdifferencing on the efficiency of regression parameter estimates, inferences based on these estimates, and tests for overdifferencing based on the estimator of the MA parameter for the disturbances of the differenced regression. Overall, the problem of overdifferencing is not serious if careful attention is paid to the properties of the disturbances of regression equations.
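A small Monte Carlo sketch of this point (simulated data): start from a correctly specified regression with white-noise errors, difference both variables, and fit an MA(1) to the residuals of the differenced regression; the estimated MA coefficient piles up near the non-invertibility boundary of -1.

# Sketch: overdifferencing a well-specified regression induces a near-noninvertible MA(1).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(12)
ma_estimates = []
for _ in range(20):
    x = rng.normal(size=200)
    y = 1.0 + 2.0 * x + rng.normal(size=200)          # properly specified regression
    dy, dx = np.diff(y), np.diff(x)                   # unnecessary differencing
    b = (dx @ dy) / (dx @ dx)                         # slope of the differenced regression
    resid = dy - b * dx                               # disturbances are now MA(1) with theta = -1
    fit = ARIMA(resid, order=(0, 0, 1), trend="n").fit()
    ma_estimates.append(fit.params[0])                # estimated MA coefficient
print(np.mean(ma_estimates))                          # typically close to -1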
