Similar literature
20 similar records retrieved (search time: 0 ms)
1.
In applications of structural VAR modeling, finite-sample properties may be difficult to obtain when certain identifying restrictions are imposed on lagged relationships. As a result, even though imposing some lagged restrictions makes economic sense, lagged relationships are often left unrestricted to make statistical inference more convenient. This paper develops block Monte Carlo methods to obtain both maximum likelihood estimates and exact Bayesian inference when certain types of restrictions are imposed on the lag structure. These methods are applied to two examples to illustrate the importance of imposing restrictions on lagged relationships.

2.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior weights the predictive distributions based on the models with different cointegration vectors, combining them into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which depends on only a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.
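The Gibbs-sampling idea behind such a predictive distribution can be sketched with a toy Bayesian AR(1) instead of the paper's cointegrated VAR: alternate draws of the coefficient and the error variance from their conditional posteriors, and turn each draw into a simulated future observation. The model, flat prior, and all parameter values below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy AR(1) data (illustrative; the paper works with a full cointegrated VAR)
n, phi_true = 300, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

x, yy = y[:-1], y[1:]

# Gibbs sampler for (phi, sigma^2) under a flat prior, collecting draws
# from the one-step-ahead predictive distribution of y_{n+1}
phi, sig2 = 0.0, 1.0
pred = []
for it in range(2000):
    # phi | sig2, data ~ Normal centered at the OLS estimate
    v = sig2 / (x @ x)
    m = (x @ yy) / (x @ x)
    phi = m + np.sqrt(v) * rng.standard_normal()
    # sig2 | phi, data ~ inverse-gamma (drawn via a chi-square)
    resid = yy - phi * x
    sig2 = (resid @ resid) / rng.chisquare(n - 1)
    if it >= 500:  # discard burn-in draws
        pred.append(phi * y[-1] + np.sqrt(sig2) * rng.standard_normal())
pred = np.array(pred)
print(pred.mean(), np.quantile(pred, [0.05, 0.95]))
```

Averaging the predictive draws over parameter draws is what integrates out the parameter uncertainty; the paper does the same across models with different cointegration vectors.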

3.
In structural vector autoregressive (SVAR) analysis, a Markov regime switching (MS) property can be exploited to identify shocks if the reduced-form error covariance matrix varies across regimes. Unfortunately, these shocks may not have a meaningful structural economic interpretation. We discuss how statistical and conventional identifying information can be combined. The discussion is based on a VAR model for the US containing oil prices, output, consumer prices and a short-term interest rate. The system has been used for studying the causes of the early millennium economic slowdown based on traditional identification with zero and long-run restrictions and using sign restrictions. We find that previously drawn conclusions are questionable in our framework.

4.
Many key macroeconomic and financial variables are characterized by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special cases. We show that the conventional co-integration rank statistics are potentially unreliable. In particular, their large-sample distributions depend on the integrated covariation of the underlying multivariate volatility process, which affects both the size and power of the associated co-integration tests, as we demonstrate numerically. A solution to the identified inference problem is provided by considering wild bootstrap-based implementations of the rank tests. These do not require the practitioner to specify a parametric model for volatility, or to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. The bootstrap is shown to perform very well in practice.
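The key property of the wild bootstrap is that residuals are kept in place and only flipped in sign, so any pattern of unconditional heteroskedasticity in the data carries over to the bootstrap samples. A minimal univariate sketch (an AR(1) with a one-off volatility break, rather than the paper's co-integration rank tests; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def wild_bootstrap_sample(y, phi_hat, rng):
    """One wild-bootstrap replicate of an AR(1): residuals stay in place
    and are multiplied by i.i.d. Rademacher draws, preserving the
    unconditional volatility pattern of the original data."""
    resid = y[1:] - phi_hat * y[:-1]
    w = rng.choice([-1.0, 1.0], size=resid.size)  # Rademacher weights
    y_star = np.empty_like(y)
    y_star[0] = y[0]
    for t in range(1, y.size):
        y_star[t] = phi_hat * y_star[t - 1] + w[t - 1] * resid[t - 1]
    return y_star

# Toy data with a permanent volatility break halfway through the sample
n = 400
sig = np.where(np.arange(n) < n // 2, 1.0, 3.0)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + sig[t] * rng.standard_normal()

phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])  # OLS autoregressive estimate
boot_phis = []
for _ in range(500):
    y_star = wild_bootstrap_sample(y, phi_hat, rng)
    boot_phis.append((y_star[:-1] @ y_star[1:]) / (y_star[:-1] @ y_star[:-1]))
print(np.quantile(boot_phis, [0.025, 0.975]))  # bootstrap band for phi
```

In the paper the same resampling principle is applied to the multivariate residuals underlying the co-integration rank statistics, which is why no parametric volatility model needs to be specified.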

5.
We consider how to estimate the trend and cycle of a time series, such as real gross domestic product, given a large information set. Our approach makes use of the Beveridge–Nelson decomposition based on a vector autoregression, but with two practical considerations. First, we show how to determine which conditioning variables span the relevant information by directly accounting for the Beveridge–Nelson trend and cycle in terms of contributions from different forecast errors. Second, we employ Bayesian shrinkage to avoid overfitting in finite samples when estimating models that are large enough to include many possible sources of information. An empirical application with up to 138 variables covering various aspects of the US economy reveals that the unemployment rate, inflation, and, to a lesser extent, housing starts, aggregate consumption, stock prices, real money balances, and the federal funds rate contain relevant information beyond that in output growth for estimating the output gap, with estimates largely robust to substituting some of these variables or incorporating additional variables.
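The Beveridge–Nelson cycle of a level series can be computed in closed form from an autoregression fitted to the demeaned growth rates, using the companion-form formula cycle_t = -e1' F (I - F)^{-1} X_t. A minimal sketch (scalar AR(p) rather than the paper's large Bayesian VAR; the helper name and inputs are illustrative):

```python
import numpy as np

def bn_cycle(dy_demeaned, phi):
    """Beveridge–Nelson cycle of the level series implied by an AR(p) model
    for the demeaned growth rates, via the companion-form formula
        cycle_t = -e1' F (I - F)^{-1} X_t,
    where X_t stacks (dy_t, ..., dy_{t-p+1}) and F is the companion matrix."""
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    p = phi.size
    F = np.zeros((p, p))
    F[0, :] = phi
    if p > 1:
        F[1:, :-1] = np.eye(p - 1)
    # Row vector w = e1' F (I - F)^{-1}, obtained by solving (I - F)' w' = F' e1
    w = np.linalg.solve((np.eye(p) - F).T, F.T[:, [0]]).ravel()
    dy = np.asarray(dy_demeaned, dtype=float)
    cycles = []
    for t in range(p - 1, dy.size):
        X_t = dy[t - p + 1 : t + 1][::-1]  # (dy_t, dy_{t-1}, ..., dy_{t-p+1})
        cycles.append(-w @ X_t)
    return np.array(cycles)

# AR(1) sanity check: cycle_t = -(phi / (1 - phi)) * (dy_t - mu)
c = bn_cycle([1.0], phi=0.5)
print(c)  # [-1.]
```

The paper's first contribution amounts to decomposing this same cycle into contributions from the forecast errors of each conditioning variable, which reveals which variables actually carry trend-cycle information.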

6.
Even when a variable appears in logarithms (logs) in a system of time series, forecasts of the original variable are sometimes of interest. In that case, converting the forecast for the log of the variable into a forecast of the original variable by simply applying the exponential transformation (the naïve forecast) is not theoretically optimal. A simple expression for the optimal forecast under normality assumptions is derived. However, despite its theoretical advantages, the optimal forecast is shown to be inferior to the naïve forecast once specification and estimation uncertainty are taken into account. Hence, in practice, using the exponential of the log forecast is preferable to using the optimal forecast.
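Under normality, if the log-forecast error is N(0, sigma^2), the conditional mean of the level is exp(log-forecast + sigma^2/2), which is the optimal forecast the abstract refers to. A simulation sketch with a known AR(1) in logs (illustrative parameters; note this is the idealized case with sigma known, which is exactly where the correction helps — the paper's point is that it can fail once sigma must be estimated and the model may be misspecified):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a series whose log follows an AR(1), so the level is
# conditionally log-normal (illustrative parameters, not from the paper)
phi, sigma = 0.8, 0.5
n = 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()

# One-step-ahead log forecast given x[t-1] is phi * x[t-1]; its error is
# N(0, sigma^2), so under normality:
#   naive   level forecast: exp(phi * x[t-1])
#   optimal level forecast: exp(phi * x[t-1] + sigma**2 / 2)
log_fc = phi * x[:-1]
naive = np.exp(log_fc)
optimal = np.exp(log_fc + sigma**2 / 2)
level = np.exp(x[1:])

mse_naive = np.mean((level - naive) ** 2)
mse_optimal = np.mean((level - optimal) ** 2)
print(mse_naive, mse_optimal)  # the correction lowers the MSE when sigma is known
```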

7.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
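The shrinkage at the core of such priors can be illustrated in its simplest, non-hierarchical form: an i.i.d. normal prior on every VAR coefficient yields a ridge-type posterior mean. This is a deliberately stripped-down sketch — the paper's variational Bayes machinery handles hierarchical priors (where the shrinkage intensity is itself estimated) and stochastic volatility, neither of which appears here; all names and values are illustrative.

```python
import numpy as np

def var_posterior_mean(Y, X, lam):
    """Posterior mean of the VAR coefficient matrix under an i.i.d.
    N(0, 1/lam) shrinkage prior on every coefficient (ridge form):
        B_hat = (X'X + lam * I)^{-1} X'Y.
    lam is fixed here purely for illustration; hierarchical priors
    treat the shrinkage intensity itself as unknown."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)

rng = np.random.default_rng(4)
T, k, m = 50, 120, 3            # fewer observations than regressors,
X = rng.standard_normal((T, k))  # as in a large VAR: OLS is infeasible,
Y = rng.standard_normal((T, m))  # but the shrinkage posterior mean exists
B_light = var_posterior_mean(Y, X, lam=0.1)
B_heavy = var_posterior_mean(Y, X, lam=100.0)
print(np.linalg.norm(B_light), np.linalg.norm(B_heavy))
```

Heavier shrinkage (larger `lam`) pulls the coefficient matrix toward zero, which is what makes estimation feasible when the VAR has more parameters than observations.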

8.
This paper establishes the asymptotic distributions of the impulse response functions in panel vector autoregressions with a fixed time dimension. It also proves the asymptotic validity of a bootstrap approximation to their sampling distributions. The autoregressive parameters are estimated using the GMM estimators based on the first differenced equations and the error variance is estimated using an extended analysis-of-variance type estimator. Contrary to the time series setting, we find that the GMM estimator of the autoregressive coefficients is not asymptotically independent of the error variance estimator. The asymptotic dependence calls for variance correction for the orthogonalized impulse response functions. Simulation results show that the variance correction improves the coverage accuracy of both the asymptotic confidence band and the studentized bootstrap confidence band for the orthogonalized impulse response functions.
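For reference, the orthogonalized impulse responses themselves are straightforward to compute once the autoregressive matrix and error covariance are estimated: with a Cholesky factor P of the error covariance, the horizon-h responses of a VAR(1) are A^h P. A sketch with made-up parameter values (the paper's contribution concerns the sampling distribution of these objects in panels, not their computation):

```python
import numpy as np

def orthogonalized_irf(A, Sigma, horizons=5):
    """Orthogonalized impulse responses of a VAR(1) y_t = A y_{t-1} + u_t
    with cov(u_t) = Sigma, identified by the Cholesky factor P (Sigma = P P').
    Theta_h = A^h P collects the responses at horizon h."""
    P = np.linalg.cholesky(Sigma)
    irfs = []
    Ah = np.eye(A.shape[0])
    for _ in range(horizons + 1):
        irfs.append(Ah @ P)   # responses at the current horizon
        Ah = Ah @ A           # advance one horizon
    return np.stack(irfs)     # shape (horizons + 1, k, k)

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
theta = orthogonalized_irf(A, Sigma, horizons=8)
print(theta[0])  # impact responses = Cholesky factor of Sigma
```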

9.
10.
We study the effects of monetary policy on economic activity separately identifying the effects of a conventional change in the fed funds rate from the policy of forward guidance. We use a structural VAR identified using external instruments from futures market data. The response of output to a fed funds rate shock is found to be consistent with typical monetary VAR analyses. However, the effect of a forward guidance shock that increases long-term interest rates has an expansionary effect on output. This counterintuitive response is shown to be tied to the asymmetric information between the Federal Reserve and the public.

11.
12.
We compare four different estimation methods for the coefficients of a linear structural equation with instrumental variables. As classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator; as semi-parametric methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for a range of parameter values wide enough to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in micro-econometric models with "many instruments" in the terminology of the recent econometric literature.

13.
This paper empirically examines the acquisition of a technology from a source outside the firm and its incorporation into a new or existing operational process. We refer to this key activity in process innovation as external technology integration. This paper develops a conceptual framework of external technology integration based on organizational information processing theory and technology management literature. The primary hypothesis underlying the conceptual framework is that external technology integration will be most successful when the level of interaction between the source of the technology and recipient of the technology is appropriately matched, or fit, to the characteristics of the technology to be integrated. The conceptual framework also develops other hypotheses relating to contextual factors that may also influence the success of external technology integration. A cross-sectional survey methodology is employed to test the four hypotheses of the conceptual framework, with the results indicating strong support for the fit hypothesis and general support for the contextual hypotheses. The paper closes with a discussion of the implications of this study for both theory and practice.

14.
We study identification in Bayesian proxy VARs for instruments that consist of sparse qualitative observations indicating the signs of shocks in specific periods. We propose the Fisher discriminant regression and a non-parametric sign concordance criterion as two alternative methods for achieving correct inference in this case. The former represents a minor deviation from a standard proxy VAR, whereas the non-parametric approach builds on set identification. Our application to US macroprudential policies finds persistent declines in credit volumes and house prices together with moderate declines in GDP and inflation and a widening of corporate bond spreads after a tightening of capital requirements or mortgage underwriting standards.

15.
Quality & Quantity - This study assesses the impact of external shocks on select small open economies (SOEs) using the Bayesian variant of the global vector autoregression model with time...

16.
17.
Time series data arise in many medical and biological imaging scenarios. In such images, a time series is obtained at each of a large number of spatially dependent data units. It is interesting to organize these data into model-based clusters. A two-stage procedure is proposed. In stage 1, a mixture of autoregressions (MoAR) model is used to marginally cluster the data. The MoAR model is fitted using maximum marginal likelihood (MMaL) estimation via a minorization–maximization (MM) algorithm. In stage 2, a Markov random field (MRF) model induces a spatial structure onto the stage 1 clustering. The MRF model is fitted using maximum pseudolikelihood (MPL) estimation via an MM algorithm. Both the MMaL and MPL estimators are proved to be consistent. Numerical properties are established for both MM algorithms. A simulation study demonstrates the performance of the two-stage procedure. An application to the segmentation of a zebrafish brain calcium image is presented.

18.
This paper gives a test of overidentifying restrictions that is robust to many instruments and heteroskedasticity. It is based on a jackknife version of the overidentifying test statistic. Correct asymptotic critical values are derived for this statistic when the number of instruments grows large, at a rate up to the sample size. It is also shown that the test is valid when the number of instruments is fixed and there is homoskedasticity. This test improves on recently proposed tests by allowing for heteroskedasticity and by avoiding assumptions on the instrument projection matrix. This paper finds in Monte Carlo studies that the test is more accurate and less sensitive to the number of instruments than the Hausman–Sargan or GMM tests of overidentifying restrictions.
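As a baseline for what the jackknife version improves on, the classical Sargan statistic can be sketched in a few lines: n times the R-squared of the 2SLS residuals regressed on the instruments, compared with a chi-square with (number of instruments minus number of regressors) degrees of freedom. This is the standard homoskedastic test, not the paper's jackknife statistic, which additionally removes own-observation terms; the simulated data and all values are illustrative.

```python
import numpy as np

def sargan_statistic(y, X, Z):
    """Classical Sargan statistic for overidentifying restrictions under
    homoskedasticity: n * R^2 of the 2SLS residuals on the instruments.
    Asymptotically chi2(#instruments - #regressors) under valid instruments."""
    n = y.size
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)               # instrument projection
    beta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)   # 2SLS estimate
    e = y - X @ beta
    return n * (e @ Pz @ e) / (e @ e)

rng = np.random.default_rng(2)
n = 2000
Z = rng.standard_normal((n, 3))              # 3 instruments, 1 regressor
v = rng.standard_normal(n)
u = 0.5 * v + rng.standard_normal(n)         # endogeneity via the shared v
x = Z @ np.array([1.0, 1.0, 1.0]) + v
y = 2.0 * x + u                              # instruments are valid here
J = sargan_statistic(y, x[:, None], Z)
print(J)  # compare with the chi2(2) 5% critical value, about 5.99
```

With many instruments or heteroskedastic errors this statistic over-rejects, which is the failure the paper's jackknife version and its corrected critical values address.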

19.
We establish explicit socially optimal rules for an irreversible investment decision with time-to-build and uncertainty. Assuming a price sensitive demand function with a random intercept, we provide comparative statics and economic interpretations for three models of demand (arithmetic Brownian, geometric Brownian, and the Cox–Ingersoll–Ross). Committed capacity, that is, the installed capacity plus the investment in the pipeline, must never drop below the best predictor of future demand, minus two biases. The discounting bias takes into account the fact that investment is paid upfront for future use; the precautionary bias multiplies a type of risk aversion index by the local volatility. Relying on the analytical forms, we discuss in detail the economic effects. For example, the impact of volatility on the optimal investment is negligible in some cases. It vanishes in the CIR model for long delays, and in the GBM model for high discount rates.

20.
We assess whether the effects of fiscal policy depend on the extent of uncertainty in the economy. Focusing on tax shocks, identified by the narrative series by Romer and Romer (American Economic Review, 2010, 100(3), 763-801), and various measures of uncertainty, we use a Threshold VAR model to allow for dependence of the effects of the tax shocks on both the level of uncertainty and the sign of the shock. We find that the economy responds more positively to tax cuts during periods of low uncertainty, while, in response to tax increases, the response of main aggregates is more negative in more uncertain times. We argue that controlling for monetary policy in fiscal VARs is important to avoid omitted variable bias. We interpret our empirical evidence in light of existing theoretical contributions.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)