Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper proposes a novel approach for dealing with the ‘curse of dimensionality’ in the case of infinite-dimensional vector autoregressive (IVAR) models. It is assumed that each unit or variable in the IVAR is related to a small number of neighbors and a large number of non-neighbors. The neighborhood effects are fixed and do not change with the number of units (N), but the coefficients of non-neighboring units are restricted to vanish in the limit as N tends to infinity. Problems of estimation and inference in a stationary IVAR model with an unknown number of unobserved common factors are investigated. A cross-section augmented least-squares (CALS) estimator is proposed and its asymptotic distribution is derived. Satisfactory small-sample properties are documented by Monte Carlo experiments. An empirical illustration shows the statistical significance of dynamic spillover effects in the modeling of US real house prices across neighboring states.
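As a rough illustration of the cross-section augmentation idea behind such an estimator (a minimal sketch, not the paper's exact CALS estimator; the toy panel, the single-neighbor structure and all names below are hypothetical), unit i's equation can be estimated by least squares after adding cross-section averages of the data as observable proxies for the unobserved common factors:

```python
# Sketch of cross-section augmented least squares in spirit: regress unit i on its
# own lag, a neighbor's lag, and cross-section averages that proxy the common factor.
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 200

# Toy panel with one unobserved common factor (hypothetical data-generating process)
f = rng.standard_normal(T)
loadings = rng.uniform(0.5, 1.5, N)
y = np.outer(f, loadings) + rng.standard_normal((T, N))

i, neighbor = 0, 1                 # unit of interest and its assumed single neighbor
ybar = y.mean(axis=1)              # cross-section averages used as factor proxies

# Augmented regression: intercept, own lag, neighbor's lag, current and lagged averages
X = np.column_stack([np.ones(T - 1), y[:-1, i], y[:-1, neighbor], ybar[1:], ybar[:-1]])
beta_cals, *_ = np.linalg.lstsq(X, y[1:, i], rcond=None)
print(np.round(beta_cals, 3))
```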

2.
When Japanese short-term bond yields were near their zero bound, yields on long-term bonds showed substantial fluctuation, and there was a strong positive relationship between the level of interest rates and yield volatilities/risk premiums. We explore whether several families of dynamic term structure models that enforce a zero lower bound on short rates imply conditional distributions of Japanese bond yields consistent with these patterns. Multi-factor “shadow-rate” and quadratic-Gaussian models, evaluated at their maximum likelihood estimates, capture many features of the data. Furthermore, model-implied risk premiums track realized excess returns during extended periods of near-zero short rates. In contrast, the conditional distributions implied by non-negative affine models do not match their sample counterparts, and standard Gaussian affine models generate implausibly large negative risk premiums.
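For readers unfamiliar with the terminology, the shadow-rate mechanism referred to above can be summarized as follows (a generic sketch of the Black-style mapping, not the particular specifications estimated in the paper):

r_t = \max(s_t, \underline{r}), \qquad s_t = \delta_0 + \delta_1' x_t,

where s_t is a Gaussian "shadow" short rate, affine in latent factors x_t, and \underline{r} is the lower bound (zero in the Japanese application). Because observed yields reflect expectations of future values of \max(s, \underline{r}), yield volatility is compressed when the shadow rate sits well below the bound and rises with the level of rates, which is the pattern the paper documents.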

3.
In this paper, we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints, we draw on ideas from the dynamic model averaging literature which achieve reductions in the computational burden through the use of forgetting factors. We then extend the TVP-VAR so that its dimension can change over time. For instance, we can have a large TVP-VAR as the forecasting model at some points in time, but a smaller TVP-VAR at others. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output and interest rates demonstrates the feasibility and usefulness of our approach.
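The computational trick behind forgetting factors is easy to illustrate in a univariate time-varying-parameter regression: the state-covariance prediction step of the Kalman filter is replaced by a division by the forgetting factor, so no state-error covariance has to be estimated. The sketch below uses assumed notation (lambda_ff, kappa) and is not the authors' code, which applies the same idea to a large TVP-VAR:

```python
# Minimal sketch of Kalman filtering with a forgetting factor (lambda_ff):
# the usual prediction step P_{t|t-1} = P_{t-1|t-1} + Q is replaced by
# P_{t|t-1} = P_{t-1|t-1} / lambda_ff, so no state-error covariance Q is needed.
import numpy as np

def tvp_regression_ff(y, X, lambda_ff=0.99, kappa=0.96):
    """Time-varying-parameter regression y_t = X_t' beta_t + e_t, beta_t a random walk."""
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0          # diffuse-ish initial state covariance
    h = np.var(y)                 # observation variance, updated below by an EWMA
    betas = np.zeros((T, k))
    for t in range(T):
        P_pred = P / lambda_ff                      # forgetting-factor prediction step
        e = y[t] - X[t] @ beta                      # one-step-ahead forecast error
        S = X[t] @ P_pred @ X[t] + h                # forecast-error variance
        K = P_pred @ X[t] / S                       # Kalman gain
        beta = beta + K * e                         # state update
        P = P_pred - np.outer(K, X[t] @ P_pred)     # covariance update
        h = kappa * h + (1 - kappa) * e ** 2        # EWMA update of the obs. variance
        betas[t] = beta
    return betas

# Toy usage with simulated drifting coefficients
rng = np.random.default_rng(1)
T = 300
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
true_beta = np.column_stack([np.linspace(0, 1, T), np.linspace(1, -1, T)])
y = (X * true_beta).sum(axis=1) + 0.5 * rng.standard_normal(T)
print(tvp_regression_ff(y, X)[-1])   # filtered coefficients at the end of the sample
```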

4.
The aim of this paper is to complement the minimum distance estimation–structural vector autoregression approach when the weighting matrix is not optimal. In empirical studies, this choice is motivated by stochastic singularity or collinearity problems associated with the covariance matrix of impulse response functions. Consequently, the asymptotic distribution cannot be used to test the economic model's fit. To circumvent this difficulty, we propose a simple simulation method to construct critical values for the test statistics. An empirical application with US data illustrates the proposed method.
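In generic notation (a sketch; the paper's own notation may differ), the estimator and fit statistic in question are

\hat\theta_T = \arg\min_{\theta}\, [\hat\gamma_T - g(\theta)]'\, W\, [\hat\gamma_T - g(\theta)], \qquad J_T = T\, [\hat\gamma_T - g(\hat\theta_T)]'\, W\, [\hat\gamma_T - g(\hat\theta_T)],

where \hat\gamma_T stacks the estimated impulse responses and g(\theta) their model-implied counterparts. With a non-optimal W, J_T is asymptotically a weighted sum of independent \chi^2_1 variables rather than \chi^2 distributed, so standard critical values do not apply; the proposed simulation method provides critical values for exactly this situation.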

5.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test obtained from the estimation of the system of Euler equations of the LRE model through the generalized method of moments is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced-form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of the auxiliary parameters that appear under indeterminacy and that are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against the dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.

6.
In this paper I propose an alternative to calibration of linearized singular dynamic stochastic general equilibrium models. Given an atheoretical econometric model as a representation of the data generating process, I will construct an information measure which compares the conditional distribution of the econometric model variables with the corresponding singular conditional distribution of the theoretical model variables. The singularity problem will be solved by using convolutions of both distributions with a non-singular distribution. This information measure will then be maximized with respect to the deep parameters of the theoretical model, which links these parameters to the parameters of the econometric model and provides an alternative to calibration. This approach will be illustrated by an application to a linearized version of the stochastic growth model of King, Plosser and Rebelo.

7.
Inference for multiple-equation Markov-chain models raises a number of difficulties that are unlikely to appear in smaller models. Our framework allows for many regimes in the transition matrix, without letting the number of free parameters grow with the square of the number of regimes, but also without losing a convenient form for the posterior distribution. Calculation of marginal data densities is difficult in these high-dimensional models. This paper gives methods to overcome these difficulties, and explains why existing methods are unreliable. It makes suggestions for maximizing the posterior density and initiating MCMC simulations that provide robustness against the complex likelihood shape.

8.
Skepticism toward traditional identifying assumptions based on exclusion restrictions has led to a surge in the use of structural VAR models in which structural shocks are identified by restricting the sign of the responses of selected macroeconomic aggregates to these shocks. Researchers commonly report the vector of pointwise posterior medians of the impulse responses as a measure of central tendency of the estimated response functions, along with pointwise 68% posterior error bands. It can be shown that this approach cannot be used to characterize the central tendency of the structural impulse response functions. We propose an alternative method of summarizing the evidence from sign-identified VAR models designed to enhance their practical usefulness. Our objective is to characterize the most likely admissible model(s) within the set of structural VAR models that satisfy the sign restrictions. We show how the set of most likely structural response functions can be computed from the posterior mode of the joint distribution of admissible models both in the fully identified and in the partially identified case, and we propose a highest posterior density credible set that characterizes the joint uncertainty about this set. Our approach can also be used to resolve the long-standing problem of how to conduct joint inference on sets of structural impulse response functions in exactly identified VAR models. We illustrate the differences between our approach and the traditional approach for the analysis of the effects of monetary policy shocks and of the effects of oil demand and oil supply shocks.
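For context, the admissible set over which such summaries are computed is usually generated by rotating a recursively identified impact matrix with random orthonormal matrices and retaining the draws that satisfy the sign restrictions. The sketch below shows only this admissible-set step, with a hypothetical two-variable restriction; it does not implement the paper's posterior-mode summary or joint credible set:

```python
# Sketch of generating candidate impact matrices for a sign-identified VAR:
# draw orthonormal Q (QR of a Gaussian matrix), rotate the Cholesky factor of the
# reduced-form error covariance, and keep draws whose impact responses satisfy the
# sign restrictions.
import numpy as np

def admissible_impact_matrices(sigma_u, check_signs, n_draws=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = sigma_u.shape[0]
    chol = np.linalg.cholesky(sigma_u)
    kept = []
    for _ in range(n_draws):
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        q = q @ np.diag(np.sign(np.diag(r)))   # normalize so the draw is Haar-distributed
        B0 = chol @ q                          # candidate impact matrix
        if check_signs(B0):
            kept.append(B0)
    return kept

# Hypothetical example: 2-variable system, shock 1 assumed to move both variables
# in the same direction on impact (up to a sign flip).
sigma_u = np.array([[1.0, 0.3], [0.3, 0.5]])
restriction = lambda B0: np.all(B0[:, 0] > 0) or np.all(B0[:, 0] < 0)
models = admissible_impact_matrices(sigma_u, restriction)
print(f"{len(models)} admissible draws out of 1000")
```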

9.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite-sample bias as well as offering test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.

10.
Cointegration ideas as introduced by Granger in 1981 are commonly embodied in empirical macroeconomic modelling through the vector error correction model (VECM). It has become common practice in these models to treat some variables as weakly exogenous, resulting in conditional VECMs. This paper studies the consequences of different approaches to weak exogeneity for the dynamic properties of such models, in the context of two models of the UK economy, one a national-economy model, the other the UK submodel of a global model. Impulse response and common trend analyses are shown to be sensitive to these assumptions and other specification choices.
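Weak exogeneity has a concrete parametric meaning in this setting; a textbook sketch (not tied to either of the UK models): partition z_t = (y_t', x_t')' and write the VECM as

\Delta z_t = \alpha \beta' z_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta z_{t-i} + \varepsilon_t.

Treating x_t as weakly exogenous for \beta amounts to setting the corresponding block of adjustment coefficients to zero, \alpha = (\alpha_y', 0')', so that the error-correction terms feed back only into the y-equations and estimation proceeds conditionally on \Delta x_t. The paper's point is that impulse responses and common-trend calculations can be sensitive to where such zero blocks are imposed.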

11.
In this paper, a new model to analyze the comovements in the volatilities of a portfolio is proposed. The Pure Variance Common Features model is a factor model for the conditional variances of a portfolio of assets, designed to isolate a small number of variance features that drive all assets’ volatilities. It decomposes the conditional variance into a short-run idiosyncratic component (a low-order ARCH process) and a long-run component (the variance factors). An empirical example provides evidence that models with very few variance features perform well in capturing the long-run common volatilities of the equity components of the Dow Jones.
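A specification consistent with this description (an illustrative sketch; the paper's exact parameterization may differ) writes the conditional variance of asset i as

h_{it} = \omega_i + \sum_{j=1}^{q} a_{ij}\, \varepsilon_{i,t-j}^2 + \lambda_i' f_t,

where the first two terms form the short-run idiosyncratic low-order ARCH component and f_t is a small vector of variance factors common to all assets, with loadings \lambda_i.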

12.
Relationships between the Federal funds rate, unemployment, inflation and the long-term bond rate are investigated with cointegration techniques. We find a stable long-term relationship between the Federal funds rate, unemployment and the bond rate. This relationship is interpretable as a policy target because deviations are corrected via the Federal funds rate. Deviations of the actual Federal funds rate from the estimated target give simple indications of discretionary monetary policy, and the larger deviations relate to special episodes outside the current information set. A more traditional Taylor-type target, where inflation appears instead of the bond rate, does not seem congruent with the data.

13.
This paper addresses the issue of testing the ‘hybrid’ New Keynesian Phillips curve (NKPC) through vector autoregressive (VAR) systems and likelihood methods, giving special emphasis to the case where the variables are non-stationary. The idea is to use a VAR for both the inflation rate and the explanatory variable(s) to approximate the dynamics of the system and derive testable restrictions. Attention is focused on the ‘inexact’ formulation of the NKPC. Empirical results over the period 1971–98 show that the NKPC is far from providing a ‘good first approximation’ of inflation dynamics in the Euro area.
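The ‘hybrid’ specification being tested is the standard one,

\pi_t = \gamma_f\, E_t[\pi_{t+1}] + \gamma_b\, \pi_{t-1} + \lambda\, x_t + u_t,

where \pi_t is inflation, x_t the forcing variable (for example, real marginal cost or an output gap) and u_t the disturbance that distinguishes the ‘inexact’ from the ‘exact’ formulation. Replacing E_t[\pi_{t+1}] by its projection from the VAR delivers cross-equation restrictions on the VAR coefficients, which can then be tested by likelihood methods while treating the non-stationarity of the variables explicitly.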

14.
Business cycle analyses have proved to be helpful to practitioners in assessing current economic conditions and anticipating upcoming fluctuations. In this article, we focus on the acceleration cycle in the euro area, namely the peaks and troughs of the growth rate which delimit the slowdown and acceleration phases of the economy. Our aim is twofold: first, we put forward a reference turning point chronology of this cycle on a monthly basis, based on gross domestic product and industrial production indices. We consider both euro area aggregate level and country-specific cycles for the six main countries of the zone. Second, we come up with a new turning point indicator, based on business surveys carefully watched by central banks and short-term analysts, to follow in real-time the fluctuations of the acceleration cycle.

15.
We give an appraisal of the New Keynesian Phillips curve model (NPCM) as an empirical model of European inflation. The favourable evidence for NPCMs on euro-area data reported in earlier studies is shown to depend on specific choices made about estimation methodology. The NPCM can be re-interpreted as a highly restricted equilibrium correction model. We also report the outcome of tests based on variable addition and encompassing of existing models. The results show that economists should not accept the NPCM too readily.

16.
We decompose the squared VIX index, derived from US S&P500 options prices, into the conditional variance of stock returns and the equity variance premium. We evaluate a plethora of state-of-the-art volatility forecasting models to produce an accurate measure of the conditional variance. We then examine the predictive power of the VIX and its two components for stock market returns, economic activity and financial instability. The variance premium predicts stock returns while the conditional stock market variance predicts economic activity and has a relatively higher predictive power for financial instability than does the variance premium.
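In essence (a sketch, with the horizon and annualization conventions left implicit), the decomposition is

VIX_t^2 = \underbrace{E_t\big[\mathrm{Var}(r_{t \to t+1\mathrm{m}})\big]}_{\text{conditional variance of returns}} + \underbrace{VP_t}_{\text{equity variance premium}},

so the variance premium is backed out as the squared VIX minus a physical-measure forecast of return variance over the roughly one-month option horizon; this is why the accuracy of the conditional-variance forecast matters for everything that follows.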

17.
This paper introduces a new class of multivariate volatility models which is easy to estimate using covariance targeting, even with rich dynamics. We call them rotated ARCH (RARCH) models. The basic structure is to rotate the returns and then to fit them using a BEKK-type parameterization of the time-varying covariance whose long-run covariance is the identity matrix. This yields the rotated BEKK (RBEKK) model. The extension to DCC-type parameterizations is given, introducing the rotated DCC (RDCC) model. Inference for these models is computationally attractive, and the asymptotics are standard. The techniques are illustrated using data on the DJIA stocks.
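The rotation step itself is straightforward to illustrate. The sketch below (function name and toy data are hypothetical) standardizes returns by the inverse symmetric square root of their sample covariance, so the rotated series has identity sample covariance and the subsequent BEKK/DCC recursion can target the identity matrix; the dynamic-fitting step is omitted:

```python
# Sketch of the RARCH rotation step: standardize returns by the inverse symmetric
# square root of the sample covariance, so the rotated returns have identity
# unconditional covariance (covariance targeting on the identity matrix).
import numpy as np

def rotate_returns(returns):
    """returns: (T, n) array of demeaned returns; output has identity sample covariance."""
    sigma = np.cov(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(sigma)
    sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric inverse square root
    return returns @ sigma_inv_sqrt, sigma_inv_sqrt

# Toy usage
rng = np.random.default_rng(2)
r = rng.multivariate_normal([0, 0, 0], [[1, .5, .2], [.5, 2, .3], [.2, .3, 1.5]], size=1000)
e, _ = rotate_returns(r - r.mean(axis=0))
print(np.round(np.cov(e, rowvar=False), 2))   # approximately the identity matrix
```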

18.
This paper presents necessary and sufficient conditions for the existence of common cyclical features in vector autoregressive (VAR) processes integrated of order 0, 1, 2, where the common cyclical features correspond to common serial correlation (CS), commonality in the final equations (CE) and co-dependence (CD). The results are based on local rank factorizations of the reversed AR polynomial around the poles of its inverse. All processes with CS structures are found to also present CE structures, and vice versa. By contrast, the presence of CD structures implies the presence of both CS and CE structures, but not vice versa. Characterizations of the CS, CE, CD linear combinations are given in terms of linear subspaces defined in the local rank factorizations.
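For concreteness, the simplest of the three features can be stated directly (a standard definition; the CE and CD cases are characterized in the paper through the local rank factorizations). For a VAR y_t = A_1 y_{t-1} + \dots + A_p y_{t-p} + \varepsilon_t, a nonzero vector \delta is a serial-correlation common feature (CS) if \delta' A_i = 0 for i = 1, \dots, p, so that \delta' y_t = \delta' \varepsilon_t is an innovation, unpredictable from the past; co-dependence (CD) weakens this to \delta' y_t being a finite-order moving average of the innovations.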

19.
During the past two decades, innovations protected by patents have played a key role in business strategies. This has stimulated research on the determinants of patents and on the impact of patents on innovation and competitive advantage. Sustaining competitive advantages is as important as creating them. Patents help sustain competitive advantages by increasing competitors' production costs, by signaling higher product quality and by serving as barriers to entry. If patents are rewards for innovation, more R&D should be reflected in more patent applications, but this is not the end of the story. There is empirical evidence that, over time, patents have become easier to obtain and more valuable to the firm due to increasing damage awards from infringers. These facts call into question the constant and static nature of the relationship between R&D and patents. Furthermore, innovation creates important knowledge spillovers due to its imperfect appropriability. Our paper investigates these dynamic effects using US patent data from 1979 to 2000 with alternative model specifications for patent counts. We introduce a general dynamic count panel data model with dynamic observable and unobservable spillovers, which encompasses previous models, is able to control for the endogeneity of R&D and can therefore be consistently estimated by maximum likelihood. Apart from allowing for firm-specific fixed and random effects, we introduce a common unobserved component, or secret stock of knowledge, that affects each firm's propensity to patent differently across sectors, owing to differences in absorptive capacity.

20.
We present a new approach to trend/cycle decomposition of time series that follow regime-switching processes. The proposed approach, which we label the “regime-dependent steady-state” (RDSS) decomposition, is motivated as the appropriate generalization of the Beveridge and Nelson decomposition [Beveridge, S., Nelson, C.R., 1981. A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics 7, 151–174] to the setting where the reduced-form dynamics of a given series can be captured by a regime-switching forecasting model. For processes in which the underlying trend component follows a random walk with possibly regime-switching drift, the RDSS decomposition is optimal in a minimum mean-squared-error sense and is more broadly applicable than directly employing an Unobserved Components model.
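The Beveridge-Nelson construction that the RDSS decomposition generalizes defines the trend as the long-horizon forecast net of deterministic drift (a sketch in generic notation; the regime-switching version conditions this forecast on the regime):

\tau_t = \lim_{h \to \infty}\big( E_t[y_{t+h}] - h\,\mu \big), \qquad c_t = y_t - \tau_t,

where \mu is the long-run drift. When the drift switches with the regime, the steady state toward which forecasts revert is itself regime-dependent, which is what the RDSS label refers to.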

