Similar documents (20 results)
1.
This paper studies the efficient estimation of large-dimensional factor models with both time and cross-sectional dependence, assuming (N,T) separability of the covariance matrix. The asymptotic distribution of the estimator of the factor and factor-loading space under factor stationarity is derived and compared to that of the principal component (PC) estimator. The paper also considers the case when factors exhibit a unit root. We provide feasible estimators and show in a simulation study that they are more efficient than the PC estimator in finite samples. In an empirical application, the estimation procedure is employed to estimate the Lee–Carter model and forecast life expectancy. The Dutch gender gap is explored, and the relationship between life expectancy and the level of economic development is examined in a cross-country comparison.

2.
The Lee–Carter method for modeling and forecasting mortality has been shown to work quite well given long time series of data. Here we consider how it can be used when there are few observations at uneven intervals. Assuming that the underlying model is correct and that the mortality index follows a random walk with drift, we find the method can be used with sparse data. The central forecast depends mainly on the first and last observation, and so can be generated with just two observations, preferably not too close in time. With three data points, uncertainty can also be estimated, although such estimates of uncertainty are themselves highly uncertain and improve with additional observations. We apply the methods to China and South Korea, which have 3 and 20 data points, respectively, at uneven intervals.
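As a rough illustration of the mechanics described above, the sketch below computes a random-walk-with-drift central forecast from only the first and last observations of a mortality index observed at uneven times. The index values and dates are hypothetical, not the paper's data.

```python
import numpy as np

def rwd_forecast(k, t, horizon):
    """Central forecast of a mortality index k observed at (possibly uneven)
    times t, under a random walk with drift: the drift estimate uses only
    the first and last observations."""
    drift = (k[-1] - k[0]) / (t[-1] - t[0])
    return k[-1] + drift * horizon

# Illustrative (hypothetical) index values at three uneven observation times
k = np.array([0.0, -3.0, -5.5])
t = np.array([1990, 2000, 2010])
print(rwd_forecast(k, t, 10))  # drift = -5.5/20 = -0.275 per year -> -8.25
```

Note that the middle observation does not enter the central forecast at all, which is why two well-separated data points suffice for the point forecast (though not for the uncertainty estimate).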

3.
Accurate forecasts of mortality rates are essential to various types of demographic research, such as population projection, and to the pricing of insurance products such as pensions and annuities. Recent studies have considered a spatial–temporal vector autoregressive (STVAR) model for the mortality surface, where mortality rates of each age depend on the historical values for that age (temporality) and on those of neighboring ages (spatiality). This model has sound statistical properties, including co-integrated dependent variables, the existence of closed-form solutions and a simple error structure. Despite its improved forecasting performance over the well-known Lee–Carter (LC) model, the constraint that only the effects of the same and neighboring cohorts are significant can be too restrictive. In this study, we apply the concept of hyperbolic memory to the spatial dimension and propose a hyperbolic STVAR (HSTVAR) model. Retaining all desirable features of the STVAR, our model uniformly beats the LC, the weighted functional demographic model, STVAR and sparse VAR counterparts in forecasting accuracy when French and Spanish mortality data over 1950–2016 are considered. Simulation results also lead to robust conclusions. Long-term forecasting analyses up to 2050 comparing the models are further performed. To illustrate the extensibility of HSTVAR to a multi-population case, a two-population example using the same sample is also presented.

4.
In the context of predicting the term structure of interest rates, we explore the marginal predictive content of real-time macroeconomic diffusion indexes extracted from a “data rich” real-time data set, when used in dynamic Nelson–Siegel (NS) models of the variety discussed in Svensson (NBER technical report, 1994; NSS) and Diebold and Li (Journal of Econometrics, 2006, 130, 337–364; DNS). Our diffusion indexes are constructed using principal component analysis with both targeted and untargeted predictors, with targeting done using the lasso and elastic net. Our findings can be summarized as follows. First, the marginal predictive content of real-time diffusion indexes is significant for the preponderance of the individual models that we examine. The exception to this finding is the post “Great Recession” period. Second, forecast combinations that include only yield variables result in our most accurate predictions, for most sample periods and maturities. In this case, diffusion indexes do not have marginal predictive content for yields and do not seem to reflect unspanned risks. This points to the continuing usefulness of DNS and NSS models that are purely yield driven. Finally, we find that the use of fully revised macroeconomic data may have an important confounding effect upon results obtained when forecasting yields, as prior research has indicated that diffusion indexes are often useful for predicting yields when constructed using fully revised data, regardless of whether forecast combination is used, or not. Nevertheless, our findings also underscore the potential importance of using machine learning, data reduction, and shrinkage methods in contexts such as term structure modeling.
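The general recipe (target the predictors, then extract a principal component as a diffusion index) can be sketched as follows. The paper targets with the lasso and elastic net; this illustration substitutes simple correlation screening as a stand-in, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "data-rich" panel: T periods, N candidate predictors,
# all loading (with noise) on one latent factor F
T, N = 200, 50
F = rng.standard_normal(T)                      # latent factor
X = np.outer(F, rng.standard_normal(N)) + rng.standard_normal((T, N))
y = F + 0.1 * rng.standard_normal(T)            # target series (e.g., a yield)

# Targeting step (correlation screening here; the paper uses lasso /
# elastic net): keep the predictors most related to the target
corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
keep = np.argsort(corr)[-20:]

# Principal component step: first PC of the standardized, targeted panel
Z = (X[:, keep] - X[:, keep].mean(0)) / X[:, keep].std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
diffusion_index = U[:, 0] * s[0]

# The index should track the latent factor up to sign and scale
print(abs(np.corrcoef(diffusion_index, F)[0, 1]))
```

The targeted index would then enter the NS/DNS forecasting equations as an extra regressor; that step is omitted here.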

5.
During the last three decades, integer-valued autoregressive processes of order p [INAR(p)] based on different operators have been proposed as natural, intuitive and perhaps efficient models for integer-valued time-series data. However, this literature is surprisingly mute on the usefulness of the standard AR(p) process, which is otherwise meant for continuous-valued time-series data. In this paper, we explore the usefulness of the standard AR(p) model for obtaining coherent forecasts from integer-valued time series. First, some advantages of the standard Box–Jenkins AR(p) process are discussed. We then carry out simulation experiments, which show the adequacy of the proposed method over the available alternatives. Our simulation results indicate that even when samples are generated from an INAR(p) process, the Box–Jenkins model performs as well as the INAR(p) processes, especially with respect to the mean forecast. Two real data sets are employed to study the expediency of the standard AR(p) model for integer-valued time-series data.
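A minimal sketch of the idea: fit a standard AR(1) by OLS to a simulated INAR(1)-style count series and round the conditional mean to obtain a coherent (integer) forecast. The simulation parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an INAR(1)-style count series via binomial thinning plus
# Poisson innovations: y_t = alpha ∘ y_{t-1} + e_t
alpha, lam, T = 0.5, 2.0, 300
y = np.empty(T, dtype=int)
y[0] = 4
for t in range(1, T):
    y[t] = rng.binomial(y[t - 1], alpha) + rng.poisson(lam)

# Fit a standard AR(1) by OLS, ignoring the integer nature of the data
Y = y[1:]
X = np.column_stack([np.ones(T - 1), y[:-1]])
b = np.linalg.lstsq(X, Y, rcond=None)[0]

# Coherent point forecast: round the conditional mean to an integer
mean_forecast = b[0] + b[1] * y[-1]
coherent_forecast = round(mean_forecast)
print(b, coherent_forecast)
```

The OLS slope estimates the thinning parameter alpha, which is one reason the continuous AR(1) can match the INAR(1) in mean forecasting.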

6.
Baumeister and Kilian (Journal of Business and Economic Statistics, 2015, 33(3), 338–351) combine forecasts from six empirical models to predict real oil prices. In this paper, we broadly reproduce their main economic findings, employing their preferred measures of the real oil price and other real‐time variables. Mindful of the importance of Brent crude oil as a global price benchmark, we extend consideration to the North Sea‐based measure and update the evaluation sample to 2017:12. We model the oil price futures curve using a factor‐based Nelson–Siegel specification estimated in real time to fill in missing values for oil price futures in the raw data. We find that the combined forecasts for Brent are as effective as for other oil price measures. The extended sample using the oil price measures adopted by Baumeister and Kilian yields similar results to those reported in their paper. Also, the futures‐based model improves forecast accuracy at longer horizons.

7.
How to measure and model volatility is an important issue in finance. Recent research uses high‐frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model‐averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log‐volatility. Copyright © 2009 John Wiley & Sons, Ltd.
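The model-averaging step can be illustrated schematically: given log marginal likelihoods for candidate volatility models, posterior model probabilities weight the individual forecasts. All numbers below are invented for illustration, not the paper's estimates.

```python
import numpy as np

# Hypothetical log marginal likelihoods of three candidate models
# (e.g., AR, HAR, HAR + realized power variation) -- illustrative only
log_ml = np.array([-120.4, -118.1, -117.6])

# Posterior model probabilities under equal priors, computed stably
# by subtracting the maximum before exponentiating
w = np.exp(log_ml - log_ml.max())
w /= w.sum()

# Each model's point forecast of next-period log realized volatility
# (again illustrative numbers)
forecasts = np.array([-0.52, -0.47, -0.45])

bma_forecast = w @ forecasts
print(w.round(3), bma_forecast)
```

The same weights can average predictive densities rather than point forecasts, which is where the paper reports the largest gains.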

8.
We examine the properties and forecast performance of multiplicative volatility specifications that belong to the class of generalized autoregressive conditional heteroskedasticity–mixed-data sampling (GARCH-MIDAS) models suggested in Engle, Ghysels, and Sohn (Review of Economics and Statistics, 2013, 95, 776–797). In those models volatility is decomposed into a short-term GARCH component and a long-term component that is driven by an explanatory variable. We derive the kurtosis of returns, the autocorrelation function of squared returns, and the R2 of a Mincer–Zarnowitz regression and evaluate the QMLE and forecast performance of these models in a Monte Carlo simulation. For S&P 500 data, we compare the forecast performance of GARCH-MIDAS models with a wide range of competitor models such as HAR (heterogeneous autoregression), realized GARCH, HEAVY (high-frequency-based volatility) and Markov-switching GARCH. Our results show that the GARCH-MIDAS based on housing starts as an explanatory variable significantly outperforms all competitor models at forecast horizons of 2 and 3 months ahead.
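A simulation sketch of the multiplicative decomposition: returns have conditional variance tau_t * g_t, where tau_t is a slow long-term component driven by an explanatory variable (a smooth sine proxy here, standing in for actual housing starts) and g_t is a unit-mean short-term GARCH(1,1) component. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 1000
# Long-term component tau_t driven by a slow-moving proxy variable
x = np.sin(np.linspace(0, 4 * np.pi, T))
tau = np.exp(0.1 + 0.5 * x)

# Short-term unit-mean GARCH(1,1) component g_t:
# g_t = (1 - a - b) + a * (r_{t-1}^2 / tau_{t-1}) + b * g_{t-1}
omega, a, b = 0.05, 0.10, 0.85      # omega = 1 - a - b gives E[g] = 1
g = np.ones(T)
r = np.empty(T)
eps_prev = 0.0                       # eps_t = r_t / sqrt(tau_t)
for t in range(T):
    if t > 0:
        g[t] = omega + a * eps_prev**2 + b * g[t - 1]
    eps = rng.standard_normal() * np.sqrt(g[t])
    r[t] = np.sqrt(tau[t]) * eps
    eps_prev = eps

# Unconditional return variance is roughly the average of tau_t,
# since g_t has unit mean
print(np.var(r), tau.mean())
```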

9.
We use a macro‐finance model, incorporating macroeconomic and financial factors, to study the term premium in the US bond market. Estimating the model using Bayesian techniques, we find that a single factor explains most of the variation in bond risk premiums. Furthermore, the model‐implied risk premiums account for up to 40% of the variability of one‐ and two‐year excess returns. Using the model to decompose yield spreads into an expectations and a term premium component, we find that, although this decomposition does not seem important to forecast economic activity, it is crucial to forecast inflation for most forecasting horizons. Copyright © 2012 John Wiley & Sons, Ltd.

10.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming] as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084] are used to compare the alternative models. A number of simple time‐series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. XII, pp. 383–398] is not outperformed by the same model augmented either with information or indexation when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear‐cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time‐series models are added to the picture, we find that the DSGE models still fare very well, often outperforming the simple time‐series competitors, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.

11.
We build our analysis upon previous work by Bloom et al. (Measuring the Effect of Political Uncertainty. Working Paper, Stanford University, 2012) and Baker et al. (Political Uncertainty: A New Indicator. CentrePiece 2012; 16 (3): 21–23), who estimate the dynamic effects of a shock to a newly constructed surrogate measure of political uncertainty (PU) on the US economy. Comparable to their results we demonstrate that a shock to PU has pervasive effects on the dynamic evolution of the US economy. Using an estimated structural dynamic factor model we find that more globally integrated markets exhibit significantly more pronounced responses than other measures of real economic activity. Impulse responses reveal a small but statistically significant ‘flight‐to‐safety’ effect, depressing government bond yields across the entire term structure following a shock to PU. Forecast error variance decompositions are predominantly composed of supply, demand, and PU shocks over all horizons, with PU shocks contributing less and supply shocks contributing more to forecast errors at longer horizons. Technology shocks, by contrast, are found to affect forecast accuracy closer to impact with quickly decaying contributions over extended forecast horizons. Copyright © 2015 John Wiley & Sons, Ltd.

12.
We propose and study the finite‐sample properties of a modified version of the self‐perturbed Kalman filter of Park and Jun (Electronics Letters 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the estimate of the measurement error variance. This avoids the calibration of a design parameter, as the perturbation term is scaled by the amount of uncertainty in the data. It is shown by Monte Carlo simulations that this perturbation method achieves good tracking of the dynamics of the parameters compared to other online algorithms and to classical and Bayesian methods. The standardized self‐perturbed Kalman filter is adopted to forecast the equity premium on the S&P 500 index under several model specifications, and to determine the extent to which realized variance can be used to predict excess returns. Copyright © 2016 John Wiley & Sons, Ltd.
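A scalar sketch of the idea, under the assumption that the perturbation added to the state variance is scaled by a recursively estimated measurement error variance. The data-generating process, gain constants and perturbation intensity below are illustrative, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Time-varying-parameter regression: y_t = x_t * beta_t + e_t,
# with beta_t breaking from 1 to 2 halfway through the sample
T = 400
x = rng.standard_normal(T)
beta = np.where(np.arange(T) < T // 2, 1.0, 2.0)
y = x * beta + 0.5 * rng.standard_normal(T)

# Self-perturbed Kalman filter (scalar sketch): instead of a fixed
# design constant, the perturbation added to the state variance P is
# scaled by a running estimate sig2 of the measurement error variance.
b_hat, P, sig2 = 0.0, 1.0, 1.0
lam = 0.05                       # perturbation intensity (illustrative)
path = np.empty(T)
for t in range(T):
    v = y[t] - x[t] * b_hat      # prediction error
    S = x[t] * P * x[t] + sig2   # prediction error variance
    K = P * x[t] / S             # Kalman gain
    b_hat += K * v               # measurement update
    P = (1 - K * x[t]) * P
    # recursive estimate of the measurement error variance
    if t > 10:
        sig2 = max(sig2 + 0.01 * (v**2 - S), 1e-6)
    # self-perturbation: P grows with the estimated noise level,
    # so the filter keeps adapting to parameter drift
    P += lam * sig2
    path[t] = b_hat

# The filtered coefficient should track the break from 1 to 2
print(path[T // 2 - 20], path[-1])
```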

13.
We propose a Bayesian shrinkage approach for vector autoregressions (VARs) that uses short‐term survey forecasts as an additional source of information about model parameters. In particular, we augment the vector of dependent variables by their survey nowcasts, and claim that each variable modelled in the VAR and its nowcast are likely to depend in a similar way on the lagged dependent variables. In an application to macroeconomic data, we find that the forecasts obtained from a VAR fitted by our new shrinkage approach typically yield smaller mean squared forecast errors than the forecasts obtained from a range of benchmark methods. Copyright © 2015 John Wiley & Sons, Ltd.
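One way to see the shrinkage idea in miniature: in a single-regressor sketch, the slope of a variable's own equation is pulled toward the slope estimated from its survey nowcast's equation, since both are assumed to depend similarly on the lagged variable. The penalty form and all numbers below are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate one macro variable y and a survey nowcast s of it; both
# depend on lagged y in a similar way (slopes 0.7 and 0.65)
T = 120
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + 0.5 * rng.standard_normal()
s = 0.65 * np.r_[0.0, y[:-1]] + 0.2 * rng.standard_normal(T)

# Ridge-style shrinkage of the y-equation slope toward the slope of
# the nowcast equation: minimize ||y - b X||^2 + lam * (b - b_s)^2
X = y[:-1]
b_s = (X @ s[1:]) / (X @ X)                     # OLS slope, nowcast eq.
b_ols = (X @ y[1:]) / (X @ X)                   # OLS slope, y eq.
lam = 50.0                                      # illustrative penalty
b_y = (X @ y[1:] + lam * b_s) / (X @ X + lam)   # shrunk toward b_s
print(b_ols, b_s, b_y)
```

By construction the shrunk slope is a convex combination of the two OLS slopes, with lam controlling how much weight the nowcast information receives.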

14.
The introduction of the Lee–Carter (LC) method marked a breakthrough in mortality forecasting, providing a simple yet powerful data-driven stochastic approach. The method has the merit of capturing the dynamics of mortality change by a single time index that is almost invariably linear. This thirtieth anniversary review of its 1992 publication examines the LC method and the large body of research that it has since spawned. We first describe the method and present a 30-year ex post evaluation of the original LC forecast for U.S. mortality. We then review the most prominent extensions of the LC method in relation to the limitations that they sought to address. With a focus on the efficacy of the various extensions, we review existing evaluations and comparisons. To conclude, we juxtapose the two main statistical approaches used, discuss further issues, and identify several potential avenues for future research.
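The basic fit the review describes (log rates decomposed as a_x + b_x k_t via the singular value decomposition, with the time index k_t close to linear) can be sketched on simulated data; the mortality surface below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical log mortality surface: A ages x Tn years, built from a
# linear time index plus small noise
A, Tn = 10, 40
ax = np.linspace(-6, -2, A)                # age pattern
bx = np.full(A, 1.0 / A)                   # age responses, summing to 1
kt = np.linspace(5, -5, Tn)                # declining mortality index
logm = (ax[:, None] + bx[:, None] * kt[None, :]
        + 0.01 * rng.standard_normal((A, Tn)))

# Lee-Carter fit via SVD with the usual normalisation sum(b) = 1
a_hat = logm.mean(axis=1)
U, sv, Vt = np.linalg.svd(logm - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()
k_hat = sv[0] * Vt[0] * U[:, 0].sum()      # rescaled so b*k is unchanged

# The (almost linear) index is then forecast by a random walk with drift
drift = (k_hat[-1] - k_hat[0]) / (Tn - 1)
print(b_hat.round(3), drift)
```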

15.
The short end of the US$ term structure of interest rates is analysed allowing for the possibility of fractional integration and cointegration. This approach permits mean‐reverting dynamics for the data and the existence of a common long run stochastic trend to be maintained simultaneously. We estimate the model for the period 1963–2006 and find it compatible with this structure. The restriction that the data are I(1) and the errors are I(0) is rejected, mainly because the latter still display long memory. This result is consistent with a model of monetary policy in which the Central Bank operates affecting contracts with short term maturity, and the impulses are transmitted to contracts with longer maturities and then to the final goals. However, the transmission of the impulses along the term structure cannot be modelled using the Expectations Hypothesis.
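The fractional differencing operator (1 − L)^d underlying this approach has a simple recursive binomial expansion; for 0 < d < 1 the weights decay hyperbolically, which is what produces long memory with mean reversion. A small sketch (illustrative d values only):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1 - L)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# d = 0.4: slowly (hyperbolically) decaying weights -> long memory
print(frac_diff_weights(0.4, 6))
# d = 1 recovers the ordinary first difference [1, -1, 0, 0, ...]
print(frac_diff_weights(1.0, 4))
```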

16.
《Economic Outlook》2016,40(1):5-10
  • We expect global GDP growth to average 3.5% per year (at PPP exchange rates) over the next ten years. This is lower than the 3.8% recorded in 2000–14 though not dramatically so. There will be a modest recovery in advanced economy growth ‐ but not to pre‐crisis rates. Emerging market (EM) growth will slow but remain faster than growth in the advanced economies. And with EM's share in world GDP much increased from 10–15 years ago, EMs will continue to provide a large proportion of world growth.
  • EM growth is expected to run at around 4.5% per year in 2015–24, well down on the 6% seen in 2000–14. This includes a slowdown from around 10% to 5–6% in China ‐ but China's share in world GDP has risen so much that China's contribution to world growth will remain very substantial.
  • Advanced economies are forecast to grow by 1.9% per year in 2015–24, a big improvement from the 1% pace of 2007–14 (which was affected by the global financial crisis) but below the 1990–2014 average. Indeed, the gap between forecast G7 GDP and GDP extrapolated using pre‐crisis trends in potential output will remain large at 10–15% in 2015–24.
  • Global growth will remain relatively strong compared to much longer‐term averages: growth from 1870–1950 was only around 2% per year. But a return to such low growth rates looks unlikely; China and India were a major drag on world growth until the 1980s but are now fast growing regions.
  • Our forecast is relatively cautious about key growth factors; the contribution of productivity growth is expected to improve slightly, while those from capital accumulation and labour supply fall back. Demographics will be a more severe drag on growth from 2025–40. Overall, risks to our long‐term forecasts look to be skewed to the downside.

17.
In contrast to health shocks, mortality shocks not only induce direct costs such as medical and funeral expenses and possibly income loss, but also reduce the number of consumption units in the household. Using data from Indonesia, it is shown that the economic costs related to the death of children and older persons seem to be fully compensated for by the decrease in consumption units. In contrast, when prime‐age adults die, survivors face additional costs and, in consequence, resort to coping strategies. These strategies seem to be quite effective, although households may face higher long‐term vulnerability.

18.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in‐sample on the encompassing properties of out‐of‐sample forecasts. Specifically, using examples of non‐nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing mis‐specified models in general, particularly when the number of in‐sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and mis‐specified models.
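A toy version of the encompassing regression behind this notion: regress the preferred forecast's error on the difference between the rival and preferred forecasts; a nonzero slope means linear combination with the rival improves accuracy, i.e., the preferred forecast fails to encompass. All series are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two competing forecasts of y: f1 is the better forecast, f2 is worse
# but not useless (the paper's DGP-vs-mis-specified-model setting)
n = 100
signal = rng.standard_normal(n)
y = signal + 0.3 * rng.standard_normal(n)
f1 = signal + 0.2 * rng.standard_normal(n)
f2 = 0.5 * signal + 0.4 * rng.standard_normal(n)

# Encompassing regression: y - f1 = lambda * (f2 - f1) + error.
# f1 encompasses f2 when lambda = 0 (no gain from combining).
d = f2 - f1
lam = (d @ (y - f1)) / (d @ d)
resid = y - f1 - lam * d
se = np.sqrt(np.mean(resid**2) / (d @ d))
t_stat = lam / se
print(lam, t_stat)
```

By construction the in-sample combined forecast f1 + lam*(f2 - f1) cannot have larger mean squared error than f1 alone; the question the paper studies is whether lambda is nonzero out of sample.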

19.
We investigate whether TV watching at ages 6–7 and 8–9 affects cognitive development measured by math and reading scores at ages 8–9, using a rich childhood longitudinal sample from NLSY79. Dynamic panel data models are estimated to handle the unobserved child‐specific factor, endogeneity of TV watching, and dynamic nature of the causal relation. A special emphasis is placed on the last aspect, where TV watching affects cognitive development, which in turn affects future TV watching. When this feedback occurs, it is not straightforward to identify and estimate the TV effect. We develop a two‐stage estimation method which can deal with the feedback feature; we also apply the ‘standard’ econometric panel data approaches. Overall, for math score at ages 8–9, we find that watching TV during ages 6–7 and 8–9 has a negative total effect, mostly due to a large negative effect of TV watching at the younger ages 6–7. For reading score, there is evidence that watching no more than 2 hours of TV per day has a positive effect, whereas the effect is negative outside this range. In both cases, however, the effect magnitudes are economically small. Copyright © 2010 John Wiley & Sons, Ltd.

20.
We provide the first empirical application of a new approach proposed by Lee (Journal of Econometrics 2007; 140(2), 333–374) to estimate peer effects in a linear‐in‐means model when individuals interact in groups. Assuming sufficient group size variation, this approach allows one to control for correlated effects at the group level and to solve the simultaneity (reflection) problem. We clarify the intuition behind identification of peer effects in the model. We investigate peer effects in student achievement in French, Science, Mathematics and History in secondary schools in the Province of Québec (Canada). We estimate the model using conditional maximum likelihood and instrumental variables methods. We find some evidence of peer effects. The endogenous peer effect is large and significant in Mathematics but imprecisely estimated in the other subjects. Some contextual peer effects are also significant. In particular, for most subjects, the average age of peers has a negative effect on own test score. Using calibrated Monte Carlo simulations, we find that high dispersion in group sizes helps with potential issues of weak identification. Copyright © 2012 John Wiley & Sons, Ltd.
