Similar Literature (20 results)
1.
Phenomena such as the Great Moderation have increased the attention of macroeconomists towards models where shock processes are not (log-)normal. This paper studies a class of discrete-time rational expectations models where the variance of exogenous innovations is subject to stochastic regime shifts. We first show that, up to a second-order approximation using perturbation methods, regime switching in the variances has an impact only on the intercept coefficients of the decision rules. We then demonstrate how to derive the exact model likelihood for the second-order approximation of the solution when there are as many shocks as observable variables. We illustrate the applicability of the proposed solution and estimation methods in the case of a small DSGE model.
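As an informal illustration of the result just described (not the authors' code), the snippet below evaluates a generic second-order decision rule in which only the intercept carries a regime-dependent risk correction; the coefficients g_x, g_xx, g_u, g_ss, the two regime variances, and the switching probability are hypothetical.

    import numpy as np

    # Hypothetical second-order decision rule for a scalar state x:
    #   x_t = c(s_t) + g_x * x_{t-1} + 0.5 * g_xx * x_{t-1}**2 + g_u * u_t
    # where only the intercept c(s_t) = 0.5 * g_ss * sigma2[s_t] depends on the
    # volatility regime s_t, in line with the second-order result described above.
    g_x, g_xx, g_u, g_ss = 0.95, -0.10, 1.0, -0.20     # illustrative coefficients
    sigma2 = {0: 0.01, 1: 0.09}                        # low- and high-volatility regimes

    def step(x_prev, u, regime):
        intercept = 0.5 * g_ss * sigma2[regime]        # the only regime-dependent term
        return intercept + g_x * x_prev + 0.5 * g_xx * x_prev**2 + g_u * u

    rng = np.random.default_rng(0)
    x, regime = 0.0, 0
    for t in range(10):
        regime = regime if rng.uniform() < 0.9 else 1 - regime   # Markov regime switching
        x = step(x, rng.normal(scale=np.sqrt(sigma2[regime])), regime)
        print(t, regime, round(x, 4))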

2.
We prove that the undetermined Taylor series coefficients of local approximations to the policy function of arbitrary order in a wide class of discrete time dynamic stochastic general equilibrium (DSGE) models are solvable by standard DSGE perturbation methods under regularity and saddle point stability assumptions on first order approximations. Extending the approach to nonstationary models, we provide necessary and sufficient conditions for solvability, as well as an example in the neoclassical growth model where solvability fails. Finally, we eliminate the assumption of solvability needed for the local existence theorem of perturbation solutions, complete the proof that the policy function is invariant to first order changes in risk, and attribute the loss of numerical accuracy in progressively higher order terms to the compounding of errors from the first order transition matrix.

3.
This paper derives a fifth-order perturbation solution to DSGE models. The paper develops a new notation that reduces the notational complexity of high-order solutions and yields faster code. The new notation consists of new matrix forms of high-order multivariate chain rules and a new representation of the model as a function of one vector variable. The algorithm that implements the new notation is between 3 and 55 times faster than Dynare++, depending on model size and solution order.

4.
Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, which was introduced and used by Spanos [Spanos, Aris, 1990. The simultaneous-equations model revisited: Statistical adequacy and identification. Journal of Econometrics 44, 87–105] to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method, based on the DSGE–VAR(λ), might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves unaltered the support of the data for the DSGE model and that a DSGE–FAVAR can be an optimal forecasting model.
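A minimal sketch of the kind of FAVAR benchmark discussed above, assuming a standardized informational panel, principal-component factors, and a VAR(1) estimated by OLS; the data, the number of factors, and the lag length are placeholders rather than the authors' specification.

    import numpy as np

    def favar_var1(panel, observables, n_factors=3):
        """Toy FAVAR benchmark: PCA factors from a large panel + a VAR(1) on [factors, observables]."""
        X = (panel - panel.mean(0)) / panel.std(0)       # standardize the informational panel (T x N)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        factors = X @ Vt[:n_factors].T                   # first r principal components (T x r)
        Z = np.hstack([factors, observables])            # VAR state: factors plus key observables
        Y, W = Z[1:], np.hstack([np.ones((len(Z) - 1, 1)), Z[:-1]])
        B, *_ = np.linalg.lstsq(W, Y, rcond=None)        # OLS VAR(1) coefficients
        return factors, B

    rng = np.random.default_rng(1)
    panel = rng.normal(size=(200, 80))                   # placeholder data panel
    obs = rng.normal(size=(200, 2))                      # placeholder observables (e.g. output, inflation)
    factors, B = favar_var1(panel, obs)
    print(B.shape)                                       # (1 + r + 2) x (r + 2) coefficient matrix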

5.
In this study, we conducted an oil-price forecasting competition among a set of structural models, including vector autoregression and dynamic stochastic general equilibrium (DSGE) models. Our results highlight two principles. First, forecasts should exploit the fact that real oil prices are mean reverting over long horizons. Second, models should not replicate the high volatility of oil prices observed in sample. By following these principles, we show that an oil-sector DSGE model performs much better at forecasting real oil prices than a random walk or a vector autoregression.
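The two principles can be illustrated with a toy comparison (not the paper's DSGE model): a mean-reverting AR(1) in the log real oil price pulls long-horizon forecasts back toward the mean, whereas a random walk does not. The long-run mean mu and persistence rho below are hypothetical.

    import numpy as np

    # Illustrative h-step forecasts of the log real oil price p_t.
    # Random walk:          E[p_{t+h}] = p_t
    # Mean-reverting AR(1): E[p_{t+h}] = mu + rho**h * (p_t - mu), shrinking toward mu.
    mu, rho = 4.0, 0.97          # hypothetical long-run mean and persistence
    p_t = 4.6                    # current log price, above its long-run level

    for h in (1, 12, 60):
        rw = p_t
        ar = mu + rho**h * (p_t - mu)
        print(f"h={h:3d}  random walk: {rw:.3f}   mean-reverting AR(1): {ar:.3f}")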

6.
We propose a nonlinear infinite moving average as an alternative to the standard state space policy function for solving nonlinear DSGE models. Perturbation of the nonlinear moving average policy function provides a direct mapping from a history of innovations to endogenous variables, decomposes the contributions from individual orders of uncertainty and nonlinearity, and enables familiar impulse response analysis in nonlinear settings. When the linear approximation is saddle stable and free of unit roots, higher order terms are likewise saddle stable and first order corrections for uncertainty are zero. We derive the third order approximation explicitly, examine the accuracy of the method using Euler equation tests, and compare with state space approximations.
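A minimal sketch of evaluating a truncated second-order nonlinear moving average policy of the kind described above: the endogenous variable is written directly as a constant risk correction plus first- and second-order kernels applied to the innovation history, so each order's contribution can be read off separately. The kernel values theta1 and theta2 are arbitrary placeholders, not coefficients from an actual model.

    import numpy as np

    def nonlinear_ma(eps_history, y_bar, y_sigma2, theta1, theta2):
        """Second-order truncated moving average:
           y_t = y_bar + 0.5*y_sigma2
                 + sum_i theta1[i]*e_{t-i}
                 + 0.5*sum_{i,j} theta2[i,j]*e_{t-i}*e_{t-j}
        so the contribution of each order can be decomposed."""
        e = np.asarray(eps_history)               # e[0] = e_t, e[1] = e_{t-1}, ...
        first = theta1 @ e
        second = 0.5 * e @ theta2 @ e
        return y_bar + 0.5 * y_sigma2 + first + second, first, second

    M = 20                                        # truncation length
    theta1 = 0.9 ** np.arange(M)                  # placeholder first-order kernel
    theta2 = 0.05 * np.outer(theta1, theta1)      # placeholder second-order kernel
    eps = np.random.default_rng(2).normal(size=M)
    y, first, second = nonlinear_ma(eps, y_bar=1.0, y_sigma2=0.02, theta1=theta1, theta2=theta2)
    print(round(y, 4), round(first, 4), round(second, 4))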

7.
Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either directly, on the values of deep parameters (‘microprior’), or indirectly, on macroeconomic indicators such as moments of observable variables (‘macroprior’). We introduce a non-parametric macroprior which is elicited from impulse response functions and assess its performance in shaping posterior estimates. We find that using a macroprior can lead to substantially different posterior estimates. We probe into the details of this result, showing that model misspecification is likely to be responsible for it. In addition, we assess to what extent the use of macropriors is impaired by the need to calibrate some hyperparameters.
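A schematic sketch of how an IRF-based macroprior might enter a posterior kernel (this is not the authors' elicitation procedure): parameter values whose model-implied impulse responses lie far from the elicited responses are down-weighted. The functions irf_model and log_likelihood, the scale, and the toy AR(1) example are all hypothetical.

    import numpy as np

    def log_macroprior(theta, irf_model, irf_elicited, scale):
        """Down-weight parameter values whose model-implied IRFs are far from the
        elicited IRFs (a hypothetical Gaussian-kernel form of a macroprior)."""
        gap = irf_model(theta) - irf_elicited
        return -0.5 * np.sum((gap / scale) ** 2)

    def log_posterior(theta, data, irf_model, irf_elicited, scale, log_likelihood):
        # Hypothetical combination: data likelihood plus the IRF-based macroprior.
        return log_likelihood(theta, data) + log_macroprior(theta, irf_model, irf_elicited, scale)

    # Toy example: a one-parameter AR(1) "model" whose IRF at horizon h is theta**h.
    H = 8
    irf_model = lambda theta: theta ** np.arange(H + 1)
    irf_elicited = 0.6 ** np.arange(H + 1)                # elicited prior responses
    log_lik = lambda theta, data: -0.5 * np.sum((data[1:] - theta * data[:-1]) ** 2)
    data = np.random.default_rng(3).normal(size=50)
    for th in (0.3, 0.6, 0.9):
        print(th, round(log_posterior(th, data, irf_model, irf_elicited, 0.1, log_lik), 2))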

8.
Noisy rational expectations models, in which agents have dispersed private information and extract information from an endogenous asset price, are widely used in finance. However, these linear partial-equilibrium models do not fit easily into modern macroeconomics, which is built on non-linear dynamic general equilibrium models. We develop a method for solving a DSGE model with portfolio choice and dispersed private information, combining and extending existing local approximation methods for public-information DSGE settings with methods used in finance to solve noisy rational expectations models with dispersed private information.

9.
This paper evaluates the common practice of estimating dynamic stochastic general equilibrium (DSGE) models using seasonally adjusted data. The simulation experiment shows that the practice leads to sizable distortions in estimated parameters. This is because the effects of seasonality, which are magnified by the model’s capital accumulation and labor market frictions, are not restricted to the so-called seasonal frequencies but instead are propagated across the entire frequency domain.

10.
Orthogonal polynomials can be used to modify the moments of the distribution of a random variable. In this paper, polynomially adjusted distributions are employed to model the skewness and kurtosis of the conditional distributions of GARCH models. To flexibly capture the skewness and kurtosis of the data, the innovation distributions that are polynomially reshaped include not only the Gaussian but also leptokurtic laws such as the logistic and the hyperbolic secant. Modeling GARCH innovations with polynomially adjusted distributions can effectively improve the precision of the forecasts. This strategy is analyzed in GARCH models with different specifications for the conditional variance, such as the APARCH, the EGARCH, the Realized GARCH, and the APARCH with time-varying skewness and kurtosis. An empirical application on different types of asset returns shows the good performance of these models in providing accurate forecasts according to several criteria based on density forecasting, downside risk, and volatility prediction.
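A minimal sketch of a polynomially adjusted density in the Gaussian case (one special case of the framework above; the logistic and hyperbolic-secant versions use other orthogonal polynomial families): Hermite polynomials reshape the third and fourth moments of the innovation distribution.

    import numpy as np

    def gram_charlier_pdf(z, skew=0.0, exkurt=0.0):
        """Gaussian density reshaped by Hermite polynomials so that the adjusted
        distribution has (approximately) the requested skewness and excess kurtosis.
        For large adjustments the polynomial factor can turn negative, so in practice
        the coefficients must be restricted (or the polynomial squared and renormalized)."""
        phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
        he3 = z**3 - 3 * z
        he4 = z**4 - 6 * z**2 + 3
        return phi * (1.0 + (skew / 6.0) * he3 + (exkurt / 24.0) * he4)

    z = np.linspace(-4, 4, 9)
    print(np.round(gram_charlier_pdf(z, skew=-0.3, exkurt=1.0), 4))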

11.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks. Each block is then sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following the approach of Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions, in which, at a pre-specified mode-jumping iteration, a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move. At the non-mode-jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the model marginal likelihood. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H (RW-MH) method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags 2500 and beyond. Additionally, the RW-MH does not explore the same high-density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007), where the posterior distribution is multi-modal. While the RW-MH algorithm is unable to jump from the low modal region to the high modal region, and vice versa, we show that the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
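A stripped-down sketch of the tailored randomized-block idea, without the mode-jumping extension and with plain BFGS in place of simulated annealing; it is not the authors' implementation. Each iteration randomly partitions the parameters into blocks, tailors a Gaussian proposal to the mode and curvature of each block's conditional posterior, and accepts the block with the usual M–H probability.

    import numpy as np
    from scipy.optimize import minimize

    def tarb_mh(log_post, theta0, n_iter=500, max_blocks=3, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.array(theta0, dtype=float)
        draws = np.empty((n_iter, theta.size))
        for it in range(n_iter):
            # 1. Randomly cluster the parameter indices into a random number of blocks.
            perm = rng.permutation(theta.size)
            n_cut = rng.integers(1, min(max_blocks, theta.size))
            cuts = np.sort(rng.choice(np.arange(1, theta.size), size=n_cut, replace=False))
            for idx in np.split(perm, cuts):
                # 2. Tailor a Gaussian proposal to this block's conditional posterior:
                #    mode from a numerical search, covariance from the inverse-Hessian estimate.
                def neg_cond(x, idx=idx):
                    trial = theta.copy()
                    trial[idx] = x
                    return -log_post(trial)
                res = minimize(neg_cond, theta[idx], method="BFGS")
                cov = np.atleast_2d(res.hess_inv) + 1e-8 * np.eye(idx.size)
                prop = rng.multivariate_normal(res.x, cov)
                # 3. Accept or reject the block with the usual M-H probability
                #    (the proposal is centred at the mode, so its density does not cancel).
                cand = theta.copy()
                cand[idx] = prop
                log_q_prop = -0.5 * (prop - res.x) @ np.linalg.solve(cov, prop - res.x)
                log_q_curr = -0.5 * (theta[idx] - res.x) @ np.linalg.solve(cov, theta[idx] - res.x)
                if np.log(rng.uniform()) < log_post(cand) - log_post(theta) + log_q_curr - log_q_prop:
                    theta = cand
            draws[it] = theta
        return draws

    # Toy target: a correlated 4-dimensional Gaussian "posterior".
    A = np.array([[2.0, 0.8, 0.0, 0.0],
                  [0.8, 1.0, 0.3, 0.0],
                  [0.0, 0.3, 1.5, 0.5],
                  [0.0, 0.0, 0.5, 1.0]])
    draws = tarb_mh(lambda th: -0.5 * th @ np.linalg.solve(A, th), np.zeros(4), n_iter=200)
    print(draws.mean(axis=0).round(2))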

12.
We improve the accuracy and speed of particle filtering for non-linear DSGE models with potentially non-normal shocks. This is done by introducing a new proposal distribution which (i) incorporates information from new observables and (ii) has a small optimization step that minimizes the distance to the optimal proposal distribution. A particle filter with this proposal distribution is shown to deliver a high level of accuracy even with relatively few particles, and it is therefore much more efficient than the standard particle filter.
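A generic sketch of a particle filter whose proposal conditions on the new observation; the paper's specific proposal and its optimization step are not reproduced here. The toy model is linear and Gaussian so that the observation-conditional proposal and its importance weights are available in closed form.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy state-space model (chosen so the observation-conditional proposal is exact):
    #   x_t = 0.9 x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    q, r, T, N = 0.5, 0.2, 50, 500
    x_true = np.zeros(T); y = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
        y[t] = x_true[t] + rng.normal(scale=np.sqrt(r))

    particles = np.zeros(N)
    est = np.zeros(T)
    for t in range(1, T):
        pred = 0.9 * particles
        # Proposal that incorporates the new observation y[t] (precision-weighted mean)
        # instead of blindly propagating particles through the transition density.
        post_var = 1.0 / (1.0 / q + 1.0 / r)
        post_mean = post_var * (pred / q + y[t] / r)
        particles = post_mean + rng.normal(scale=np.sqrt(post_var), size=N)
        # Importance weights for this proposal reduce to p(y_t | x_{t-1}) = N(y_t; pred, q + r).
        logw = -0.5 * (y[t] - pred) ** 2 / (q + r)
        w = np.exp(logw - logw.max()); w /= w.sum()
        est[t] = w @ particles
        particles = particles[rng.choice(N, size=N, p=w)]        # multinomial resampling

    print("RMSE:", round(float(np.sqrt(np.mean((est[1:] - x_true[1:]) ** 2))), 3))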

13.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite-sample bias as well as test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.
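A compact sketch of the impulse response matching step that such criteria operate on (the VIRSC/RIRSC selection rules themselves are not implemented): the structural parameters minimize a weighted distance between empirical and model-implied responses, and selection amounts to choosing which entries of the response vector enter the objective. The empirical IRF, the model IRF, the weighting matrix, and the mask are placeholders.

    import numpy as np
    from scipy.optimize import minimize

    def irfme(theta0, irf_empirical, irf_model, weight, selected):
        """Impulse response function matching estimator on a selected subset of responses.
        `selected` is a boolean mask: criteria like those described above decide
        which responses are valid and relevant enough to include."""
        g_hat = irf_empirical[selected]
        W = weight[np.ix_(selected, selected)]
        def distance(theta):
            gap = g_hat - irf_model(theta)[selected]
            return gap @ W @ gap
        return minimize(distance, theta0, method="Nelder-Mead")

    # Toy example: "model" IRF is rho**h at horizons 0..H, empirical IRF is a noisy version.
    H = 10
    true_rho = 0.7
    rng = np.random.default_rng(5)
    irf_emp = true_rho ** np.arange(H + 1) + rng.normal(scale=0.05, size=H + 1)
    irf_mod = lambda th: th[0] ** np.arange(H + 1)
    mask = np.arange(H + 1) <= 6                       # e.g. keep only short-horizon responses
    res = irfme(np.array([0.5]), irf_emp, irf_mod, np.eye(H + 1), mask)
    print(round(float(res.x[0]), 3))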

14.
This paper explores the application of the change of variables technique to solve the stochastic neoclassical growth model. We use the method of Judd [2003. Perturbation methods with nonlinear changes of variables. Mimeo, Hoover Institution] to change variables in the computed policy functions that characterize the behavior of the economy. We report how the optimal change of variables reduces the average absolute Euler equation errors of the solution of the model by a factor of three. We also demonstrate how changes of variables correct for variations in the volatility of the economy even if we work with first-order policy functions and how we can keep a linear representation of the laws of motion of the model if we use a nearly optimal transformation. We discuss how to apply our results to estimate dynamic equilibrium economies.
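A small sketch of the change-of-variables idea using a log transformation, one member of the family the paper considers: the first-order rule is evaluated in logs and mapped back to levels, which keeps the law of motion linear in the transformed space and enforces positivity. The coefficients a and b and the steady-state values are hypothetical.

    import numpy as np

    # Hypothetical first-order policy for capital, computed in logs around the steady state:
    #   log k' - log k_ss = a * (log k - log k_ss) + b * (log z - log z_ss)
    k_ss, z_ss = 10.0, 1.0
    a, b = 0.96, 0.12                       # placeholder coefficients of the log-linear rule

    def next_capital(k, z):
        """Evaluate the log-linear rule and map back to levels (the change of variables)."""
        log_dev = a * (np.log(k) - np.log(k_ss)) + b * (np.log(z) - np.log(z_ss))
        return k_ss * np.exp(log_dev)

    # Compared with a rule applied directly in levels, the log rule keeps the
    # law of motion exactly linear in (log k, log z) and respects k' > 0.
    for k0 in (6.0, 10.0, 14.0):
        print(k0, round(next_capital(k0, 1.05), 3))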

15.
DSGE pileups     
The sampling distribution of estimators for DSGE structural parameters tends to be non-normal and/or to pile up on the boundary of the theoretically admissible parameter space. This calls into question both the reliability of asymptotic approximations and the presumption of correct specification. This paper seeks to develop a conceptual framework for understanding how these phenomena arise, and to provide pragmatic methods for dealing with them in practice. The results are presented in three examples and a medium-scale DSGE model.

16.
The practical relevance of several concepts of exogeneity of treatments for the estimation of causal parameters based on observational data is discussed. We show that the traditional concepts, such as strong ignorability and weak and super-exogeneity, are too restrictive if interest lies in average effects (i.e., not in distributional effects of the treatment). We suggest a new definition of exogeneity, KL-exogeneity. It does not rely on distributional assumptions and is not based on counterfactual random variables. As a consequence, it can be tested empirically using a proposed test that is simple to implement and distribution-free.

17.
This paper considers multiple regression procedures for analyzing the relationship between a response variable and a vector of d covariates in a nonparametric setting where tuning parameters need to be selected. We introduce an approach which handles the dilemma that with high-dimensional data the sparsity of data in regions of the sample space makes estimation of nonparametric curves and surfaces virtually impossible. This is accomplished by abandoning the goal of trying to estimate true underlying curves and instead estimating measures of dependence that can determine important relationships between variables. These dependence measures are based on local parametric fits on subsets of the covariate space that vary in both dimension and size within each dimension. The subset which maximizes a signal-to-noise ratio is chosen, where the signal is a local estimate of a dependence parameter which depends on the subset dimension and size, and the noise is an estimate of the standard error (SE) of the estimated signal. This approach of choosing the window size to maximize a signal-to-noise ratio lifts the curse of dimensionality because for regions with sparsity of data the SE is very large. It corresponds to asymptotically maximizing the probability of correctly finding nonspurious relationships between covariates and a response or, more precisely, maximizing asymptotic power among a class of asymptotic level-α t-tests indexed by subsets of the covariate space. Subsets that achieve this goal are called features. We investigate the properties of specific procedures based on the preceding ideas using asymptotic theory and Monte Carlo simulations and find that, within a selected dimension, the volume of the optimally selected subset does not tend to zero as n → ∞ unless the volume of the subset of the covariate space where the response depends on the covariate vector tends to zero.
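A one-covariate sketch of the signal-to-noise criterion described above (the actual procedure searches over subsets of a d-dimensional covariate space; here only the window width around a point varies and the signal is a local linear slope): sparse windows are penalized automatically because their SE is large.

    import numpy as np

    def best_window(x, y, x0, widths):
        """Pick the window around x0 that maximizes the signal-to-noise ratio
        |local slope| / SE(local slope) from a local linear fit."""
        best = None
        for h in widths:
            inside = np.abs(x - x0) <= h
            if inside.sum() < 5:
                continue                                   # too sparse to even fit
            X = np.column_stack([np.ones(inside.sum()), x[inside] - x0])
            beta, res_ss, *_ = np.linalg.lstsq(X, y[inside], rcond=None)
            dof = inside.sum() - 2
            sigma2 = float(res_ss[0]) / dof if res_ss.size else 0.0
            se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
            snr = abs(beta[1]) / se if se > 0 else 0.0
            if best is None or snr > best[0]:
                best = (snr, h, beta[1])
        return best                                        # (signal-to-noise, width, slope)

    rng = np.random.default_rng(6)
    x = rng.uniform(-3, 3, size=400)
    y = np.sin(x) + rng.normal(scale=0.3, size=400)        # toy nonlinear relationship
    print(best_window(x, y, x0=0.0, widths=[0.2, 0.5, 1.0, 2.0, 3.0]))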

18.
The future revision of capital requirements and the market-consistent valuation of non-hedgeable liabilities have drawn increasing attention to forecasting longevity trends. In this field, many methodologies focus either on modeling mortality or on pricing mortality-linked securities (such as longevity bonds). Following the Lee–Carter method (proposed in 1992), the actuarial literature has provided several extensions to capture additional patterns observed in European data sets (e.g., the cohort effect). The purpose of this paper is to compare the features of the main mortality models proposed over the years. Model selection has indeed become a primary task, with the aim of identifying the “best” model. What is meant by best is controversial, but good selection techniques are usually based on a sound balance between goodness of fit and simplicity. In this regard, different criteria, mainly based on the analysis of residuals and projected rates, are used here. For the sake of comparison, the main forecasting methods have been applied to deaths and exposures to risk of the Italian male population. Weaknesses and strengths are emphasized by underlining how the various models provide a different goodness of fit on different data sets. At the same time, the quality and the variability of the forecasted rates are compared by evaluating their effect on annuity values. The results confirm that some models perform better than others, but no single model can be singled out as the best method.
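A compact sketch of the baseline Lee–Carter fit that the compared models extend, using synthetic data in place of the Italian male rates: log m(x,t) = a_x + b_x k_t, with a_x the age-specific average, (b_x, k_t) taken from the leading singular vectors of the centered log rates, and k_t forecast as a random walk with drift.

    import numpy as np

    def lee_carter(log_m):
        """Fit log m(x,t) = a_x + b_x * k_t by SVD, with the usual normalizations
        sum_x b_x = 1 and sum_t k_t = 0."""
        a = log_m.mean(axis=1)                      # age pattern
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()
        k = s[0] * Vt[0] * U[:, 0].sum()
        return a, b, k

    def forecast_k(k, horizon):
        drift = (k[-1] - k[0]) / (len(k) - 1)       # random walk with drift for the period index
        return k[-1] + drift * np.arange(1, horizon + 1)

    # Synthetic log mortality rates: ages x years, with a downward time trend plus noise.
    rng = np.random.default_rng(7)
    ages, years = 10, 40
    true_a = np.linspace(-6.0, -1.0, ages)
    true_k = -0.03 * np.arange(years)
    log_m = true_a[:, None] + np.linspace(0.05, 0.15, ages)[:, None] * true_k \
            + rng.normal(scale=0.01, size=(ages, years))
    a, b, k = lee_carter(log_m)
    print(np.round(forecast_k(k, 5), 3))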

19.
20.
Journal of Econometrics, 2003, 114(2), 349–360
Both volatility clustering and conditional non-normality can induce the leptokurtosis typically observed in financial data. In this paper, the exact representation of kurtosis is derived for both GARCH and stochastic volatility models when innovations may be conditionally non-normal. We find that, for both models, the volatility clustering and non-normality contribute interactively and symmetrically to the overall kurtosis of the series.
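For the GARCH side, the familiar GARCH(1,1) special case is a useful reference point (a textbook result, much narrower than the exact representations derived in the paper): with conditional innovation kurtosis kz, the unconditional kurtosis is kz*(1-(alpha+beta)^2) / (1-(alpha+beta)^2-(kz-1)*alpha^2) when the denominator is positive, so volatility clustering and non-normality enter jointly. The snippet below checks this special case by simulation.

    import numpy as np

    def garch11_kurtosis(alpha, beta, kz=3.0):
        """Unconditional kurtosis of e_t = sigma_t z_t with
        sigma_t^2 = w + alpha*e_{t-1}^2 + beta*sigma_{t-1}^2 and E[z^4] = kz,
        valid when the denominator is positive (finite fourth moment)."""
        p = (alpha + beta) ** 2
        return kz * (1 - p) / (1 - p - (kz - 1) * alpha ** 2)

    # Quick Monte Carlo check with Student-t(8) innovations (kz = 3*(8-2)/(8-4) = 4.5).
    rng = np.random.default_rng(8)
    alpha, beta, w, nu, T = 0.08, 0.85, 0.05, 8, 500_000
    z = rng.standard_t(nu, size=T) / np.sqrt(nu / (nu - 2))     # unit-variance t innovations
    e = np.zeros(T); s2 = w / (1 - alpha - beta)
    for t in range(1, T):
        s2 = w + alpha * e[t - 1] ** 2 + beta * s2
        e[t] = np.sqrt(s2) * z[t]
    e = e[1000:]                                                # drop burn-in
    sim_kurt = np.mean(e ** 4) / np.mean(e ** 2) ** 2
    print(round(garch11_kurtosis(alpha, beta, kz=3 * (nu - 2) / (nu - 4)), 3),
          round(float(sim_kurt), 3))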
