Similar Documents (20 results)
1.
Much research studies US inflation history with a trend‐cycle model with unobserved components, where the trend may be viewed as the Fed's evolving inflation target or long‐horizon expected inflation. We provide a novel way to measure the slowly evolving trend and the cycle (or inflation gap), by combining inflation predictions from the Survey of Professional Forecasters (SPF) with realized inflation. The SPF forecasts may be treated either as rational expectations (RE) or updating according to a sticky information (SI) law of motion. We estimate RE and SI state‐space models with stochastic volatility on samples of consumer price index and gross national product/gross domestic product deflator inflation and the associated SPF inflation predictions using a particle Metropolis–Markov chain Monte Carlo sampler. The trend converges to 2% and its volatility declines over time—two tendencies largely complete by the late 1990s.
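As an illustrative sketch of the trend–cycle decomposition described above (simulation only; the random-walk trend, AR(1) gap, and all parameter values are assumptions for illustration, not estimates from the paper):

```python
import numpy as np

# Minimal sketch: inflation = slowly evolving random-walk trend
# (e.g. an implicit inflation target near 2%) + stationary AR(1) gap.
rng = np.random.default_rng(0)
T = 300
trend = 2.0 + np.cumsum(0.05 * rng.standard_normal(T))  # random-walk trend
gap = np.zeros(T)
for t in range(1, T):
    gap[t] = 0.7 * gap[t - 1] + 0.5 * rng.standard_normal()  # inflation gap
inflation = trend + gap
```

The state-space models in the paper additionally let the innovation variances themselves evolve (stochastic volatility), which this fixed-variance sketch omits.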

2.
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often by particle Markov chain Monte Carlo methods), which are usually slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but are much faster.

3.
A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp–Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this paper an idea of Kendall is applied to show how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As illustrative examples, simulations of the hard-core model and the random-cluster model are presented.
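The monotone version of the Propp–Wilson algorithm (coupling from the past) can be sketched on a toy birth–death chain whose update rule is monotone; this example is illustrative only, not the hard-core or random-cluster samplers treated in the paper:

```python
import numpy as np

def monotone_update(x, u, K=10):
    """Monotone update rule for a reflecting birth-death chain on {0,...,K}."""
    if u < 0.5:
        return max(x - 1, 0)
    return min(x + 1, K)

def cftp(K=10, seed=1):
    """Coupling from the past: run chains from the minimal and maximal states
    starting at time -T with shared randomness, doubling T until they coalesce
    at time 0; the coalesced value is an exact draw from the stationary law."""
    rng = np.random.default_rng(seed)
    us = []  # innovations for times -1, -2, ... (extended as T grows, reused)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, K
        for t in range(T - 1, -1, -1):  # apply u_{-T}, ..., u_{-1}
            lo = monotone_update(lo, us[t], K)
            hi = monotone_update(hi, us[t], K)
        if lo == hi:
            return lo
        T *= 2
```

For this chain the stationary distribution is uniform on {0, ..., K}, so repeated calls with fresh seeds should spread evenly over the state space. Reusing the same innovations when extending further into the past is essential for exactness.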

4.
This paper proposes two new panel unit root tests based on the truncated product method of Zaykin et al. (2002). The first one assumes constant correlation between P‐values and the second one uses sieve bootstrap to allow for general forms of cross‐section dependence in the panel units. Monte Carlo simulation shows that both tests have reasonably good size and are powerful in cases of some very large P‐values. The proposed tests are applied to a panel of real GDP and inflation density forecasts, resulting in evidence that professional forecasters may not update their forecast precision in an optimal Bayesian way.
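The truncated product statistic itself is simple to compute; here is a minimal sketch under the simplest null of independent uniform P-values (the paper's constant-correlation and sieve-bootstrap versions generalize this null distribution):

```python
import numpy as np

def truncated_product(pvals, tau=0.05):
    """Zaykin et al.'s truncated product statistic: the product of all
    P-values at or below the truncation point tau (1.0 if none qualify)."""
    p = np.asarray(pvals, dtype=float)
    kept = p[p <= tau]
    return kept.prod() if kept.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=20000, seed=0):
    """Monte Carlo P-value of the statistic assuming independent U(0,1)
    P-values under the null (illustrative; not the paper's bootstrap)."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product(pvals, tau)
    sims = rng.random((n_sim, len(pvals)))
    w_sim = np.where(sims <= tau, sims, 1.0).prod(axis=1)
    return (w_sim <= w_obs).mean()
```

A panel with a few very small unit-level P-values yields a small combined P-value even when the remaining units are clearly non-rejecting.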

5.
This paper presents analytical, Monte Carlo and empirical evidence concerning out-of-sample tests of Granger causality. The environment is one in which the relative predictive ability of two nested parametric regression models is of interest. Results are provided for three statistics: a regression-based statistic suggested by Granger and Newbold [1977. Forecasting Economic Time Series. Academic Press Inc., London], a t-type statistic comparable to those suggested by Diebold and Mariano [1995. Comparing Predictive Accuracy. Journal of Business and Economic Statistics 13, 253–263] and West [1996. Asymptotic Inference About Predictive Ability. Econometrica 64, 1067–1084], and an F-type statistic akin to Theil's U. Since the asymptotic distributions under the null are nonstandard, tables of asymptotically valid critical values are provided. Monte Carlo evidence supports the theoretical results. An empirical example evaluates the predictive content of the Chicago Fed National Activity Index for growth in Industrial Production and core PCE-based inflation.
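For intuition, the t-type loss-differential statistic has a simple form. The sketch below is a simplified one-step-ahead version without HAC corrections; as the abstract stresses, for nested models its null distribution is nonstandard, so standard normal critical values do not apply:

```python
import numpy as np

def mse_t(e_restricted, e_unrestricted):
    """t-type statistic on squared forecast-error loss differentials,
    in the spirit of Diebold-Mariano/West (simplified: no HAC correction).
    Positive values indicate the unrestricted model forecasts better."""
    d = np.asarray(e_restricted) ** 2 - np.asarray(e_unrestricted) ** 2
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

When the restricted model carries a systematic forecast bias, the statistic is large and positive.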

6.
This paper both narrowly and widely replicates the results of Anundsen et al. (Journal of Applied Econometrics, 2016, 31(7), 1291–1311). I am able to reproduce their results. Furthermore, I find that allowing for time‐varying parameters of early warning system models can considerably improve the in‐sample model fit and out‐of‐sample forecasting performance based on an expanding window forecasting exercise.

7.
Following Hamilton [1989. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57, 357–384], estimation of Markov regime-switching regressions typically relies on the assumption that the latent state variable controlling regime change is exogenous. We relax this assumption and develop a parsimonious model of endogenous Markov regime-switching. Inference via maximum likelihood estimation is possible with relatively minor modifications to existing recursive filters. The model nests the exogenous switching model, yielding straightforward tests for endogeneity. In Monte Carlo experiments, maximum likelihood estimates of the endogenous switching model parameters were quite accurate, even in the presence of certain model misspecifications. As an application, we extend the volatility feedback model of equity returns given in Turner et al. [1989. A Markov model of heteroskedasticity, risk, and learning in the stock market. Journal of Financial Economics 25, 3–22] to allow for endogenous switching.
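The exogenous-switching benchmark that this model nests can be sketched with Hamilton's recursive filter. The two-state switching-mean example below is an illustrative sketch, not the authors' endogenous-switching code:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Hamilton (1989) filter for a Gaussian switching-mean model with an
    exogenous Markov state. mu, sigma are per-state arrays; P[i, j] is the
    transition probability from state i to state j. Returns filtered state
    probabilities and the log-likelihood."""
    n_states = len(mu)
    xi = np.full(n_states, 1.0 / n_states)  # filtered probs (flat start)
    probs, loglik = [], 0.0
    for yt in y:
        pred = P.T @ xi                      # prediction step
        dens = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (
            sigma * np.sqrt(2 * np.pi))      # state-conditional densities
        joint = pred * dens
        lik = joint.sum()
        loglik += np.log(lik)
        xi = joint / lik                     # update step
        probs.append(xi.copy())
    return np.array(probs), loglik
```

The endogenous-switching extension modifies this recursion so that the regime indicator is correlated with the regression disturbance, which is what yields the paper's tests for endogeneity.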

8.
Journal of Econometrics, 2005, 127(2), 165–178
This paper is concerned with the specification for modelling the financial leverage effect in the context of stochastic volatility (SV) models. Two alternative specifications co-exist in the literature. One is the Euler approximation to the well-known continuous time SV model with leverage effect and the other is the discrete time SV model of Jacquier et al. (J. Econometrics 122 (2004) 185). Using a Gaussian nonlinear state space form with uncorrelated measurement and transition errors, I show that it is easy to interpret the leverage effect in the conventional model whereas it is not clear how to obtain and interpret the leverage effect in the model of Jacquier et al. Empirical comparisons of these two models via Bayesian Markov chain Monte Carlo (MCMC) methods further reveal that the specification of Jacquier et al. is inferior. Simulation experiments are conducted to study the sampling properties of Bayesian MCMC for the conventional model.

9.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub‐sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 158–179]‐type statistics, our proposed tests are based on the corresponding functions of sub‐sample implementations of the well‐known maximal recursive‐estimates and re‐scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite‐sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.

10.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used in place of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes with the likelihood estimated in an unbiased manner.
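A basic bootstrap particle filter makes the unbiased-likelihood idea concrete. The sketch below targets a simple linear Gaussian state space model with illustrative parameters (the fully adapted auxiliary filter discussed above refines this scheme; unbiasedness holds for the likelihood estimate itself, not its logarithm):

```python
import numpy as np

def bpf_loglik(y, n_particles=500, phi=0.9, sigma=1.0, tau=1.0, seed=0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + sigma*eps_t,
    y_t = x_t + tau*nu_t. Returns the log of the likelihood estimate."""
    rng = np.random.default_rng(seed)
    # Initialize from the stationary distribution of the state.
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2), n_particles)
    loglik = 0.0
    for yt in y:
        x = phi * x + sigma * rng.standard_normal(n_particles)  # propagate
        logw = -0.5 * ((yt - x) / tau) ** 2 - np.log(tau * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())          # log of mean weight
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
    return loglik
```

For particle MCMC, the variance of this log-likelihood estimate at the posterior mean is the quantity one tunes via the number of particles, which is the subject of the paper's guidelines.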

11.
We construct a copula from the skew t distribution of Sahu et al. (2003). This copula can capture asymmetric and extreme dependence between variables, and is one of the few copulas that can do so and still be used in high dimensions effectively. However, it is difficult to estimate the copula model by maximum likelihood when the multivariate dimension is high, or when some or all of the marginal distributions are discrete‐valued, or when the parameters in the marginal distributions and copula are estimated jointly. We therefore propose a Bayesian approach that overcomes all these problems. The computations are undertaken using a Markov chain Monte Carlo simulation method which exploits the conditionally Gaussian representation of the skew t distribution. We employ the approach in two contemporary econometric studies. The first is the modelling of regional spot prices in the Australian electricity market. Here, we observe complex non‐Gaussian margins and nonlinear inter‐regional dependence. Accurate characterization of this dependence is important for the study of market integration and risk management purposes. The second is the modelling of ordinal exposure measures for 15 major websites. Dependence between websites is important when measuring the impact of multi‐site advertising campaigns. In both cases the skew t copula substantially outperforms symmetric elliptical copula alternatives, demonstrating that the skew t copula is a powerful modelling tool when coupled with Bayesian inference. Copyright © 2010 John Wiley & Sons, Ltd.

12.
In Markov-switching regression models, we use Kullback–Leibler (KL) divergence between the true and candidate models to select the number of states and variables simultaneously. Specifically, we derive a new information criterion, Markov switching criterion (MSC), which is an estimate of KL divergence. MSC imposes an appropriate penalty to mitigate the over-retention of states in the Markov chain, and it performs well in Monte Carlo studies with single and multiple states, small and large samples, and low and high noise. We illustrate the usefulness of MSC via applications to the U.S. business cycle and to media advertising.

13.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general‐to‐specific (Gets) model selection procedures to overcome these limitations. It is shown that single‐equation procedures are generally efficient for the reduction of recursive SVAR models. The small‐sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo and question the validity of the impulse responses generated by the full system.

14.
We develop a minimum amount of theory of Markov chains at as low a level of abstraction as possible in order to prove two fundamental probability laws for standard Markov chain Monte Carlo algorithms:
1. The law of large numbers explains why the algorithm works: it states that the empirical means calculated from the samples converge towards their "true" expected values, viz. expectations with respect to the invariant distribution of the associated Markov chain (the target distribution of the simulation).
2. The central limit theorem expresses the deviations of the empirical means from their expected values in terms of asymptotically normally distributed random variables. We also present a formula and an estimator for the associated variance.
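Both limit theorems can be illustrated numerically. The sketch below runs a random-walk Metropolis chain targeting the standard normal and estimates the CLT standard error by batch means, one common estimator of the asymptotic variance (all tuning values are illustrative):

```python
import numpy as np

def metropolis_normal(n, step=1.0, seed=0):
    """Random-walk Metropolis chain targeting the standard normal density."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        prop = x[t - 1] + step * rng.standard_normal()
        # Accept with probability exp(log pi(prop) - log pi(x)).
        if np.log(rng.random()) < 0.5 * (x[t - 1] ** 2 - prop ** 2):
            x[t] = prop
        else:
            x[t] = x[t - 1]
    return x

def batch_means_se(x, n_batches=50):
    """Batch-means estimate of the CLT standard error of the chain's mean."""
    batches = x[: len(x) // n_batches * n_batches].reshape(n_batches, -1)
    return batches.mean(axis=1).std(ddof=1) / np.sqrt(n_batches)
```

The law of large numbers shows up as the empirical mean settling near 0 (the expectation under the invariant distribution), and the batch-means quantity estimates the variance term appearing in the central limit theorem.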

15.
In this paper, we introduce a threshold stochastic volatility model with explanatory variables. The Bayesian method is considered in estimating the parameters of the proposed model via the Markov chain Monte Carlo (MCMC) algorithm. Gibbs sampling and Metropolis–Hastings sampling methods are used for drawing the posterior samples of the parameters and the latent variables. In the simulation study, the accuracy of the MCMC algorithm, the sensitivity of the algorithm to model assumptions, and the robustness of the posterior distribution under different priors are considered. Simulation results indicate that our MCMC algorithm converges fast and that the posterior distribution is robust under different priors and model assumptions. A real data example is analyzed to explain the asymmetric behavior of stock markets.

16.
In their influential work Grier et al. (The asymmetric effects of uncertainty on inflation and output growth. Journal of Applied Econometrics 2004; 19: 551–565) examine the effects of growth and inflation uncertainties on their average rates. The current study replicates their main results and performs a similar analysis on a more recent dataset. Their findings are confirmed to a large extent. Copyright © 2015 John Wiley & Sons, Ltd.

17.
We examine three tests of cross-sectional dependence and apply them to a Danish regional panel dataset with few time periods and a large cross-section: the CD test due to Pesaran (2004), the Schott test and the Liu–Lin–Shao test. We show that the CD test and the Schott test have good properties in a Monte Carlo study. When controlling for panel-specific and time-specific fixed effects, the Schott test is superior. Our application shows that there is cross-sectional dependence in regional employment growth across Danish regions. We also show that this dependence can be accounted for by time-specific fixed effects. Thus, the tests uncover new properties of the regional data.
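For reference, Pesaran's CD statistic has a simple closed form. Below is a minimal sketch assuming a balanced T × N panel of residuals (illustrative, not the authors' code):

```python
import numpy as np

def cd_statistic(resid):
    """Pesaran's CD statistic from a (T x N) array of panel residuals:
    CD = sqrt(2T / (N(N-1))) * sum of all pairwise sample correlations.
    It is asymptotically N(0,1) under cross-sectional independence."""
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)
    upper = corr[np.triu_indices(N, k=1)]   # each pair counted once
    return np.sqrt(2 * T / (N * (N - 1))) * upper.sum()
```

A common latent factor across units, such as a national shock hitting all regions, drives the pairwise correlations and hence the statistic far from zero; demeaning by time-specific fixed effects removes exactly this kind of dependence, consistent with the finding above.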


18.
In a recent article Newey and Windmeijer (Generalized method of moments with many weak moment conditions. Econometrica 2009; 77(3): 687–719) propose a new variance estimator for generalized empirical likelihood. In Monte Carlo examples they show that t‐statistics based on the new variance estimator have nearly correct size. I have replicated their Monte Carlo simulations and in addition used the new variance estimator to re‐estimate Angrist and Krueger's (Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics 1991; 106(4): 979–1014) returns to education. Copyright © 2011 John Wiley & Sons, Ltd.

19.
Heteroskedasticity-robust semi-parametric GMM estimation of a spatial model with space-varying coefficients (Spatial Economic Analysis). The spatial model with space-varying coefficients proposed by Sun et al. in 2014 has proved to be useful in detecting the location effects of the impacts of covariates as well as spatial interaction in empirical analysis. However, Sun et al.'s estimator is inconsistent when heteroskedasticity is present – a circumstance that is more realistic in certain applications. In this study, we propose a semi-parametric generalized method of moments (GMM) estimator that is not only heteroskedasticity robust but also takes a closed form written explicitly in terms of observed data. We derive the asymptotic distributions of our estimators. Moreover, the results of Monte Carlo experiments show that the proposed estimators perform well in finite samples.

20.
In this paper, we introduce a Bayesian panel probit model with two flexible latent effects: first, unobserved individual heterogeneity that is allowed to vary in the population according to a nonparametric distribution; and second, a latent serially correlated common error component. In doing so, we extend the approach developed in Albert and Chib (Journal of the American Statistical Association 1993; 88: 669–679; in Bayesian Biostatistics, Berry DA, Stangl DK (eds), Marcel Dekker: New York, 1996), and in Chib and Carlin (Statistics and Computing 1999; 9: 17–26) by releasing restrictive parametric assumptions on the latent individual effect and eliminating potential spurious state dependence with latent time effects. The model is found to outperform more traditional approaches in an extensive series of Monte Carlo simulations. We then apply the model to the estimation of a patent equation using firm‐level data on research and development (R&D). We find a strong effect of technology spillovers on R&D but little evidence of product market spillovers, consistent with economic theory. The distribution of latent firm effects is found to have a multimodal structure featuring within‐industry firm clustering. Copyright © 2012 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号