Similar Articles
20 similar articles found (search time: 0 ms)
1.
Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, introduced and used by Spanos [Spanos, Aris, 1990. The simultaneous-equations model revisited: Statistical adequacy and identification. Journal of Econometrics 44, 87–105] to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method based on the DSGE–VAR(λ) might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves the support of the data for the DSGE model unaltered, and that a DSGE–FAVAR can be an optimal forecasting model.

2.
The objective of this paper is to integrate the generalized gamma (GG) distribution into the information theoretic literature. We study information properties of the GG distribution and provide an assortment of information measures for the GG family, which includes the exponential, gamma, Weibull, and generalized normal distributions as its subfamilies. The measures include entropy representations of the log-likelihood ratio, AIC, and BIC, discriminating information between GG and its subfamilies, a minimum discriminating information function, power transformation information, and a maximum entropy index of fit to histogram. We provide the full parametric Bayesian inference for the discrimination information measures. We also provide Bayesian inference for the fit of the GG model to histogram, using a semi-parametric Bayesian procedure referred to as the maximum entropy Dirichlet (MED). The GG information measures are computed for duration of unemployment and duration of CEO tenure.
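As a rough illustration of the nesting structure described above, the following sketch (not from the paper) evaluates the differential entropy of a GG distribution and of its gamma, Weibull and exponential subfamilies numerically, using SciPy's gengamma parameterization; the parameter values are hypothetical.

```python
# Minimal sketch: the generalized gamma (GG) family nests the gamma (c = 1),
# Weibull (a = 1) and exponential (a = c = 1) distributions, so their entropies
# can all be obtained from scipy.stats.gengamma. Parameter values are hypothetical.
from scipy import stats

a, c, scale = 2.0, 1.5, 1.0   # hypothetical GG shape, power and scale parameters

h_gg      = stats.gengamma.entropy(a, c, scale=scale)      # GG entropy (nats)
h_gamma   = stats.gengamma.entropy(a, 1.0, scale=scale)    # c = 1  -> gamma
h_weibull = stats.gengamma.entropy(1.0, c, scale=scale)    # a = 1  -> Weibull
h_expon   = stats.gengamma.entropy(1.0, 1.0, scale=scale)  # a = c = 1 -> exponential

print(f"H(GG)      = {h_gg:.4f}")
print(f"H(gamma)   = {h_gamma:.4f}  (check: {stats.gamma.entropy(a, scale=scale):.4f})")
print(f"H(Weibull) = {h_weibull:.4f}  (check: {stats.weibull_min.entropy(c, scale=scale):.4f})")
print(f"H(expon)   = {h_expon:.4f}  (check: {stats.expon.entropy(scale=scale):.4f})")
```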

3.
We show that the distribution of any portfolio whose components jointly follow a location–scale mixture of normals can be characterised solely by its mean, variance and skewness. Under this distributional assumption, we derive the mean–variance–skewness frontier in closed form, and show that it can be spanned by three funds. For practical purposes, we derive a standardised distribution, provide analytical expressions for the log-likelihood score and explain how to evaluate the information matrix. Finally, we present an empirical application in which we obtain the mean–variance–skewness frontier generated by the ten Datastream US sectoral indices, and conduct spanning tests.
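The Monte Carlo sketch below is illustrative only: the weights, means and scale matrices are hypothetical, and the mixing variable is a simple Bernoulli draw rather than the paper's general specification. It simulates returns from a two-state location–scale mixture of normals and summarises an arbitrary portfolio by its mean, variance and skewness.

```python
# Illustrative simulation: a two-state (Bernoulli) location-scale mixture of normals,
# with the portfolio summarised by its first three moments. All numbers are hypothetical.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n_assets, n_draws, p = 3, 200_000, 0.2                 # p = probability of the "crisis" state

mu1 = np.array([0.01, 0.02, 0.015])                    # location in the normal state
mu2 = np.array([-0.03, -0.05, -0.04])                  # location in the crisis state
L1 = 0.02 * np.eye(n_assets)                           # scale (Cholesky factor), normal state
L2 = 0.06 * np.eye(n_assets)                           # scale, crisis state

state = rng.random(n_draws) < p                        # Bernoulli mixing variable
z = rng.standard_normal((n_draws, n_assets))
returns = np.where(state[:, None], mu2 + z @ L2.T, mu1 + z @ L1.T)

w = np.array([0.5, 0.3, 0.2])                          # hypothetical portfolio weights
port = returns @ w
print("mean:", port.mean(), "variance:", port.var(), "skewness:", skew(port))
```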

4.
This paper examines the usefulness of a more refined business cycle classification for monthly industrial production (IP), beyond the usual distinction between expansions and contractions. Univariate Markov-switching models show that a three regime model is more appropriate than a model with only two regimes. Interestingly, the third regime captures ‘severe recessions’, contrasting the conventional view that the additional third regime represents a ‘recovery’ phase. This is confirmed by means of Markov-switching vector autoregressive models that allow for phase shifts between the cyclical regimes of IP and the Conference Board's Leading Economic Index (LEI). The timing of the severe recession regime mostly corresponds with periods of substantial financial market distress and severe credit squeezes, providing empirical evidence for the ‘financial accelerator’ theory.
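A minimal sketch of fitting such a regime-switching model with statsmodels is given below; the series name `ip_growth` and the lag order are assumptions, and the specification is generic rather than the authors' exact model.

```python
# Generic sketch: univariate Markov-switching autoregression for IP growth,
# comparing two vs. three regimes. `ip_growth` is a hypothetical pandas Series
# of monthly log-differences of industrial production.
import pandas as pd
import statsmodels.api as sm

def fit_ms_ar(ip_growth: pd.Series, k_regimes: int = 3, order: int = 1):
    model = sm.tsa.MarkovAutoregression(
        ip_growth,
        k_regimes=k_regimes,       # e.g. expansion / mild recession / severe recession
        order=order,
        switching_variance=True,   # allow regime-specific volatility
    )
    return model.fit()

# res2 = fit_ms_ar(ip_growth, k_regimes=2)
# res3 = fit_ms_ar(ip_growth, k_regimes=3)
# Compare res2.bic with res3.bic, and inspect res3.smoothed_marginal_probabilities
# to date the 'severe recession' regime.
```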

5.
Ploberger and Phillips (Econometrica, Vol. 71, pp. 627–673, 2003) proved a result that provides a bound on how close a fitted empirical model can get to the true model when the model is represented by a parameterized probability measure on a finite dimensional parameter space. The present note extends that result to cases where the parameter space is infinite dimensional. The results have implications for model choice in infinite dimensional problems and highlight some of the difficulties, including technical difficulties, presented by models of infinite dimension. Some implications for forecasting are considered and some applications are given, including the empirically relevant case of vector autoregression (VAR) models of infinite order.

6.
The cross-sectional distribution of per capita (log) GDP across the European Union regions from 1977 to 1996 is analysed. Kernel density estimates reveal a multimodal structure of the distribution during the 1970s and early 1980s, and a tendency towards unimodality since the mid-1980s. The distribution is further analysed by a mixture of normal densities. A mixture of two well-separated components fits the distributions of the 1970s and early 1980s. These two clusters tend to converge, supporting the idea of a process of catching up. In the mid-1990s, a small group of very rich regions emerges as a separate component.
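The sketch below is purely illustrative (the regional data are simulated, not the EU dataset used in the paper): it contrasts a kernel density estimate with a two-component normal mixture fitted to a cross-section of log per capita GDP.

```python
# Illustrative sketch: kernel density estimate vs. two-component normal mixture
# for a simulated cross-section of regional log per capita GDP.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical cross-section: a 'poor' and a 'rich' cluster of regions.
log_gdp = np.concatenate([rng.normal(9.3, 0.15, 80), rng.normal(10.0, 0.10, 120)])

kde = gaussian_kde(log_gdp)                              # nonparametric view of the distribution
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_gdp.reshape(-1, 1))

print("component means:  ", gmm.means_.ravel())
print("component weights:", gmm.weights_)
print("BIC (2 components):", gmm.bic(log_gdp.reshape(-1, 1)))
```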

7.
This article proposes a Bayesian approach to examining money-output causality within the context of a logistic smooth transition vector error correction model. Our empirical results provide substantial evidence that the postwar US money-output relationship is nonlinear, with regime changes governed mainly by output growth and the price level. Furthermore, we obtain strong support for nonlinear Granger causality from money to output, although there is also some evidence for models indicating that money is not Granger causal or long-run causal to output.
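For reference, the logistic transition function that typically governs regime changes in smooth transition models has the generic form below; the paper's exact specification and transition variables may differ.

```latex
% Generic logistic transition function of a smooth transition model:
G(s_t;\gamma,c) \;=\; \bigl(1+\exp\{-\gamma\,(s_t-c)\}\bigr)^{-1}, \qquad \gamma>0,
```

where s_t is the transition variable (e.g., output growth or the price level), γ controls the smoothness of the transition and c is the threshold; G moves smoothly from 0 to 1 as s_t crosses c.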

8.
A novel Bayesian method for inference in dynamic regression models is proposed where both the values of the regression coefficients and the importance of the variables are allowed to change over time. We focus on forecasting and so the parsimony of the model is important for good performance. A prior is developed which allows the shrinkage of the regression coefficients to suitably change over time and an efficient Markov chain Monte Carlo method for posterior inference is described. The new method is applied to two forecasting problems in econometrics: equity premium prediction and inflation forecasting. The results show that this method outperforms current competing Bayesian methods.

9.
In this paper, we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints, we draw on ideas from the dynamic model averaging literature which achieve reductions in the computational burden through the use of forgetting factors. We then extend the TVP-VAR so that its dimension can change over time. For instance, we can have a large TVP-VAR as the forecasting model at some points in time, but a smaller TVP-VAR at others. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output and interest rates demonstrates the feasibility and usefulness of our approach.
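The forgetting-factor device can be sketched in a univariate time-varying-parameter regression as follows; this is a stylised illustration (hypothetical inputs y and X, illustrative values of the forgetting factors), not the paper's large TVP-VAR implementation.

```python
# Stylised forgetting-factor filter for a time-varying-parameter regression:
# the state prediction variance is inflated by 1/lam instead of specifying an
# explicit state noise covariance, which keeps the recursion cheap.
import numpy as np

def forgetting_filter(y, X, lam=0.99, kappa=0.98):
    """y: (T,) target; X: (T, k) regressors; lam: forgetting factor for the states;
    kappa: EWMA factor for the measurement error variance. Values are illustrative."""
    T, k = X.shape
    beta = np.zeros(k)               # filtered coefficient estimate beta_{t|t}
    P = 10.0 * np.eye(k)             # state covariance
    sigma2 = np.var(y)               # measurement variance, updated by EWMA
    betas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P_pred = P / lam             # 'forgetting' step
        e = y[t] - x @ beta          # one-step-ahead forecast error
        S = x @ P_pred @ x + sigma2
        K = P_pred @ x / S           # Kalman gain
        beta = beta + K * e
        P = P_pred - np.outer(K, x @ P_pred)
        sigma2 = kappa * sigma2 + (1 - kappa) * e**2
        betas[t] = beta
    return betas
```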

10.
We introduce two estimators for estimating the Marginal Data Density (MDD) from the Gibbs output. Our methods are based on exploiting the analytical tractability condition, which requires that some parameter blocks can be analytically integrated out from the conditional posterior densities. This condition is satisfied by several widely used time series models. An empirical application to six-variate VAR models shows that the bias of a fully computational estimator is sufficiently large to distort the implied model rankings. One of the estimators is fast enough to make multiple computations of MDDs in densely parameterized models feasible.

11.
This paper proposes an empirical Bayes approach for Markov switching autoregressions that can constrain some of the state-dependent parameters (regression coefficients and error variances) to be approximately equal across regimes. By flexibly reducing the dimension of the parameter space, this can help to ensure regime separation and to detect the Markov switching nature of the data. The permutation sampler with a hierarchical prior is used for choosing the prior moments, the identification constraint, and the parameters governing prior state dependence. The empirical relevance of the methodology is illustrated with an application to quarterly and monthly real interest rate data.

12.
Maximization of utility implies that consumer demand systems have a Slutsky matrix which is everywhere symmetric. However, previous non- and semi-parametric approaches to the estimation of consumer demand systems do not give estimators that are restricted to satisfy this condition, nor do they offer powerful tests of this restriction. We use nonparametric modeling to test and impose Slutsky symmetry in a system of expenditure share equations over prices and expenditure. In this context, Slutsky symmetry is a set of nonlinear cross-equation restrictions on levels and derivatives of consumer demand equations. The key insight is that due to the differing convergence rates of levels and derivatives and due to the fact that the symmetry restrictions are linear in derivatives, both the test and the symmetry restricted estimator behave asymptotically as if these restrictions were (locally) linear. We establish large and finite sample properties of our methods, and show that our test has advantages over the only other comparable test. All methods we propose are implemented with Canadian micro-data. We find that our nonparametric analysis yields results that differ significantly, both statistically and qualitatively, from those of traditional parametric estimators and tests.
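For concreteness, the symmetry restriction in question is the textbook Slutsky condition on Marshallian demands shown below; the paper imposes the analogous restriction, involving both levels and derivatives, on budget-share equations.

```latex
% Slutsky symmetry for Marshallian demands x_i(p, y) (standard textbook form):
S_{ij} \;=\; \frac{\partial x_i(p,y)}{\partial p_j} + x_j(p,y)\,\frac{\partial x_i(p,y)}{\partial y}
\;=\; S_{ji} \qquad \text{for all } i, j .
```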

13.
We propose a nonparametric likelihood ratio testing procedure for choosing between a parametric (likelihood) model and a moment condition model when both models could be misspecified. Our procedure is based on comparing the Kullback–Leibler Information Criterion (KLIC) between the parametric model and moment condition model. We construct the KLIC for the parametric model using the difference between the parametric log likelihood and a sieve nonparametric estimate of population entropy, and obtain the KLIC for the moment model using the empirical likelihood statistic. We also consider multiple (>2) model comparison tests, when all the competing models could be misspecified, and some models are parametric while others are moment-based. We evaluate the performance of our tests in a Monte Carlo study, and apply the tests to an example from industrial organization.
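The quantity being compared is the Kullback–Leibler Information Criterion in its generic form below; in the paper the two KLICs are estimated via a sieve entropy estimate and the empirical likelihood statistic, respectively.

```latex
% Kullback--Leibler Information Criterion between the true density h and a
% candidate density f (generic definition):
\mathrm{KLIC}(h \,\|\, f) \;=\; \int h(x)\,\log\frac{h(x)}{f(x)}\,dx
\;=\; \mathbb{E}_h[\log h(X)] - \mathbb{E}_h[\log f(X)] .
```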

14.
15.
A new test for non-linearity in the conditional mean is proposed using functions of the principal components of the regressors. The test extends non-linearity tests based on Kolmogorov–Gabor polynomials, but circumvents problems of high dimensionality, is equivariant to collinearity, and includes exponential functions, so it is a portmanteau test with power against a wide range of possible alternatives. A Monte Carlo analysis compares the performance of the test to the optimal infeasible test and to alternative tests. The relative performance of the test is encouraging: the test has the appropriate size and has high power in many situations.
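A stylised version of such a test is sketched below (it is not the authors' exact implementation): compute principal components of the regressors, add low-order polynomial and exponential functions of them to the linear model, and test their joint significance.

```python
# Stylised principal-component based non-linearity test: F-test of the joint
# significance of polynomial and exponential functions of the regressors' PCs.
import numpy as np
import statsmodels.api as sm

def pc_nonlinearity_test(y, X, n_pc=2):
    Xc = (X - X.mean(0)) / X.std(0)                    # standardise regressors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    Z = Xc @ Vt[:n_pc].T                               # first n_pc component scores
    nonlin = np.column_stack([Z**2, Z**3, np.exp(-np.abs(Z))])
    X0 = sm.add_constant(X)
    restricted = sm.OLS(y, X0).fit()
    unrestricted = sm.OLS(y, np.column_stack([X0, nonlin])).fit()
    f_stat, p_val, _ = unrestricted.compare_f_test(restricted)
    return f_stat, p_val
```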

16.
We consider model selection facing uncertainty over the choice of variables and the occurrence and timing of multiple location shifts. General-to-simple selection is extended by adding an impulse indicator for every observation to the set of candidate regressors; see Johansen and Nielsen (2009). We apply that approach to a fat-tailed distribution and to processes with breaks: Monte Carlo experiments show its capability of detecting up to 20 shifts in 100 observations while jointly selecting variables. An illustration using US real interest rates compares impulse-indicator saturation with the procedure in Bai and Perron (1998).
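A toy split-half version of impulse-indicator saturation is sketched below for illustration only; the significance level and block scheme are assumptions, and serious applications would rely on the Johansen–Nielsen theory and dedicated implementations.

```python
# Toy split-half impulse-indicator saturation: add an impulse dummy for every
# observation in two blocks, retain the significant ones from each block, then
# re-estimate with the retained dummies alongside the regressors.
import numpy as np
import statsmodels.api as sm

def iis_split_half(y, X, alpha=0.01):
    T = len(y)
    D = np.eye(T)                                   # one impulse indicator per observation
    X0 = sm.add_constant(X)
    retained = []
    for block in (range(0, T // 2), range(T // 2, T)):
        cols = list(block)
        res = sm.OLS(y, np.column_stack([X0, D[:, cols]])).fit()
        pvals = res.pvalues[X0.shape[1]:]           # p-values of this block's dummies
        retained += [cols[i] for i, p in enumerate(pvals) if p < alpha]
    if retained:
        final = sm.OLS(y, np.column_stack([X0, D[:, retained]])).fit()
    else:
        final = sm.OLS(y, X0).fit()
    return final, retained                          # retained = detected shift dates
```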

17.
Motivated by the common finding that linear autoregressive models often forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but a subset of the coefficients is treated as being local-to-zero. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive mean square error-minimizing weights for combining the restricted and unrestricted forecasts. Monte Carlo and empirical analyses verify the practical effectiveness of our combination approach.
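A minimal sketch of the combination idea follows; it estimates the weight on the restricted forecast by minimising past squared errors of the convex combination, whereas the paper derives population MSE-minimising weights analytically under local-to-zero coefficients.

```python
# Sketch: choose the weight w on the restricted forecast that minimises the mean
# squared error of the convex combination w*f_restricted + (1-w)*f_unrestricted,
# based on past realisations and past forecasts of the two nested models.
import numpy as np

def combination_weight(y, f_restricted, f_unrestricted):
    e_r = y - f_restricted            # past errors of the restricted model
    e_u = y - f_unrestricted          # past errors of the unrestricted model
    d = e_r - e_u
    # Combined error = e_u + w*d; minimise mean((e_u + w*d)^2) over w in [0, 1].
    denom = np.mean(d**2)
    if denom == 0:
        return 0.5
    return float(np.clip(-np.mean(e_u * d) / denom, 0.0, 1.0))

# Next-period combined forecast: f_comb = w * f_r_next + (1 - w) * f_u_next
```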

18.
Model averaging by jackknife criterion in models with dependent data
The past decade witnessed a literature on model averaging by frequentist methods. For the most part, the asymptotic optimality of various existing frequentist model averaging estimators has been established under i.i.d. errors. Recently, Hansen and Racine [Hansen, B.E., Racine, J., 2012. Jackknife model averaging. Journal of Econometrics 167, 38–46] developed a jackknife model averaging (JMA) estimator, which has an important advantage over its competitors in that it achieves the lowest possible asymptotic squared error under heteroscedastic errors. In this paper, we broaden Hansen and Racine’s scope of analysis to encompass models with (i) a non-diagonal error covariance structure, and (ii) lagged dependent variables, thus allowing for dependent data. We show that under these set-ups, the JMA estimator is asymptotically optimal by a criterion equivalent to that used by Hansen and Racine. A Monte Carlo study demonstrates the finite sample performance of the JMA estimator in a variety of model settings.
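A compact sketch of the jackknife model averaging criterion for linear candidate models is given below; it is a simplified illustration in the spirit of Hansen and Racine (2012), not the extended estimator developed in this paper.

```python
# Sketch of jackknife (leave-one-out) model averaging for linear candidate models:
# stack each model's leave-one-out residuals and choose simplex weights that
# minimise the cross-validation criterion.
import numpy as np
from scipy.optimize import minimize

def loo_residuals(y, X):
    """Leave-one-out OLS residuals via the hat-matrix shortcut e_i / (1 - h_ii)."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    e = y - H @ y
    return e / (1.0 - np.diag(H))

def jma_weights(y, candidate_X):
    """candidate_X: list of design matrices, each including its own constant."""
    E = np.column_stack([loo_residuals(y, X) for X in candidate_X])   # T x M
    M = E.shape[1]
    cv = lambda w: w @ (E.T @ E) @ w / len(y)          # jackknife criterion
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * M
    res = minimize(cv, np.full(M, 1.0 / M), bounds=bounds,
                   constraints=cons, method='SLSQP')
    return res.x
```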

19.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 158–179]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
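The sub-sample ratio idea behind the existing tests can be sketched as follows; this is a schematic of the KPSS-ratio variant with a crude variance normalisation, not the authors' new recursive-estimates or rescaled-range statistics, and real applications would use a long-run variance estimator and proper critical values.

```python
# Schematic of a sub-sample ratio test for a change in persistence: compare a
# KPSS-type statistic computed after a candidate breakpoint with the one computed
# before it, and maximise over breakpoints.
import numpy as np

def kpss_stat(u):
    """Simplified KPSS-type statistic on detrended data (sample-variance normalisation)."""
    t = np.arange(len(u))
    resid = u - np.polyval(np.polyfit(t, u, 1), t)   # remove constant and trend
    S = np.cumsum(resid)
    return np.sum(S**2) / (len(u)**2 * resid.var())

def persistence_ratio(y, trim=0.2):
    T = len(y)
    taus = range(int(trim * T), int((1 - trim) * T))
    ratios = [kpss_stat(y[tau:]) / kpss_stat(y[:tau]) for tau in taus]
    return max(ratios)   # large values point towards a shift from I(0) to I(1)
```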

20.
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
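A one-dimensional illustration of the quasi-ML idea is sketched below for an Ornstein–Uhlenbeck (affine) diffusion, where the Gaussian transitional density built from the exact conditional mean and variance is the true density; the multivariate, latent-factor and nonlinear cases treated in the paper are not covered.

```python
# Sketch: quasi-ML for the Ornstein-Uhlenbeck diffusion dX = kappa*(theta - X)dt + sigma dW.
# For this affine model the Gaussian transitional density with the exact conditional
# mean and variance coincides with the true transitional density, so QML equals ML.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ou_negloglik(params, x, dt):
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = theta + (x[:-1] - theta) * np.exp(-kappa * dt)          # exact conditional mean
    var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)   # exact conditional variance
    return -np.sum(norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)))

def fit_ou(x, dt, start=(1.0, 0.0, 0.5)):
    res = minimize(ou_negloglik, np.array(start), args=(x, dt), method='Nelder-Mead')
    return res.x   # (kappa, theta, sigma)
```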
