Similar articles
20 similar articles found (search time: 15 ms)
1.
The popular Nelson–Siegel [Nelson, C.R., Siegel, A.F., 1987. Parsimonious modeling of yield curves. Journal of Business 60, 473–489] yield curve is routinely fit to cross sections of intra-country bond yields, and Diebold–Li [Diebold, F.X., Li, C., 2006. Forecasting the term structure of government bond yields. Journal of Econometrics 130, 337–364] have recently proposed a dynamized version. In this paper we extend Diebold–Li to a global context, modeling a potentially large set of country yield curves in a framework that allows for both global and country-specific factors. In an empirical analysis of term structures of government bond yields for Germany, Japan, the UK and the US, we find that global yield factors do indeed exist and are economically important, generally explaining significant fractions of country yield curve dynamics, with interesting differences across countries.
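As a point of reference, the Nelson–Siegel curve expresses the yield at maturity τ as a linear combination of level, slope and curvature loadings, and Diebold–Li fix the decay parameter λ so the three factors can be estimated cross-section by cross-section with OLS. Below is a minimal Python sketch of that step; the maturities, the yields and the value λ = 0.0609 (the Diebold–Li choice for maturities in months) are illustrative and not taken from this paper's data.

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Level, slope and curvature loadings at maturities tau (in months),
    using the Diebold-Li parameterization of the Nelson-Siegel curve."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curvature])

def fit_cross_section(yields, tau, lam=0.0609):
    """OLS estimates of the three factors from a single cross-section of yields."""
    X = nelson_siegel_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta  # (level, slope, curvature)

# made-up maturities (months) and yields, purely for illustration
tau = np.array([3.0, 12.0, 36.0, 60.0, 120.0])
yields = np.array([4.2, 4.5, 4.9, 5.1, 5.3])
print(fit_cross_section(yields, tau))
```

Repeating this fit for every date gives the factor time series that Diebold–Li, and the global extension above, then model dynamically.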

2.
It has been documented that the random walk outperforms most economic structural and time series models in out-of-sample forecasts of the conditional mean dynamics of exchange rates. In this paper, we study whether the random walk has similar dominance in out-of-sample forecasts of the conditional probability density of exchange rates, given that probability density forecasts are often needed in many applications in economics and finance. We first develop a nonparametric portmanteau test for optimal density forecasts of univariate time series models in an out-of-sample setting and provide simulation evidence on its finite sample performance. Then we conduct a comprehensive empirical analysis of the out-of-sample performances of a wide variety of nonlinear time series models in forecasting the intraday probability densities of two major exchange rates: Euro/Dollar and Yen/Dollar. It is found that some sophisticated time series models that capture time-varying higher order conditional moments, such as Markov regime-switching models, have better density forecasts for exchange rates than the random walk or a modified random walk with GARCH and Student-t innovations. This finding differs dramatically from that on mean forecasts and suggests that sophisticated time series models could be useful in out-of-sample applications involving the probability density.
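The paper's nonparametric portmanteau test is not reproduced here, but a standard building block for evaluating density forecasts is the probability integral transform (PIT): under a correct predictive density the PITs are i.i.d. Uniform(0,1). The sketch below, run on simulated PITs, only illustrates that idea and is not the authors' test.

```python
import numpy as np
from scipy import stats

def pit_diagnostics(pit, max_lag=5):
    """Check uniformity (Kolmogorov-Smirnov) and serial dependence (sample
    autocorrelations) of probability integral transforms."""
    ks_stat, ks_pval = stats.kstest(pit, "uniform")
    centered = pit - pit.mean()
    denom = float((centered ** 2).sum())
    acf = [float((centered[k:] * centered[:-k]).sum() / denom)
           for k in range(1, max_lag + 1)]
    return {"ks_stat": ks_stat, "ks_pval": ks_pval, "acf": acf}

# toy PITs from a hypothetical, correctly specified density forecast
rng = np.random.default_rng(0)
print(pit_diagnostics(rng.uniform(size=500)))
```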

3.
We present a new specification for the multinomial multiperiod probit model with autocorrelated errors. In sharp contrast with commonly used specifications, ours is invariant with respect to the choice of a baseline alternative for utility differencing. It also nests these standard models as special cases, allowing for data-based selection of the baseline alternatives for the latter. Likelihood evaluation is achieved under an Efficient Importance Sampling (EIS) version of the standard GHK algorithm. Several simulation experiments highlight identification, estimation and pretesting within the new class of multinomial multiperiod probit models.
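For readers unfamiliar with the GHK algorithm mentioned above, the sketch below estimates a multivariate normal orthant probability (the kind of quantity needed in multinomial probit likelihoods) with the standard GHK recursion; it is deliberately simplified and does not implement the EIS refinement proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def ghk_orthant_probability(mu, Sigma, n_draws=2000, seed=0):
    """Standard GHK estimate of P(X_1 < 0, ..., X_d < 0) for X ~ N(mu, Sigma):
    draw sequentially from truncated normals along the Cholesky factor."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    d = len(mu)
    prob = np.ones(n_draws)
    eta = np.zeros((n_draws, d))
    for j in range(d):
        # truncation point implied by X_j < 0 given the draws for eta_1..eta_{j-1}
        upper = (-mu[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        p_j = norm.cdf(upper)
        prob *= p_j
        u = rng.uniform(size=n_draws)
        eta[:, j] = norm.ppf(u * p_j)  # draw from the truncated standard normal
    return prob.mean()

# illustrative 3-dimensional example with made-up parameters
mu = np.array([0.2, -0.1, 0.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
print(ghk_orthant_probability(mu, Sigma))
```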

4.
This paper develops a new estimator for the impulse response functions in structural factor models with a fixed number of over-identifying restrictions. The proposed identification scheme nests the conventional just-identified recursive scheme as a special case. We establish the asymptotic distributions of the new estimator and develop test statistics for the over-identifying restrictions. Simulation results show that adding a few more over-identifying restrictions can lead to a substantial improvement in estimation accuracy for impulse response functions at both zero and nonzero horizons. We estimate the effects of a monetary policy shock using a U.S. data set. The results show that our over-identified scheme can help to detect incorrect specifications that lead to spurious impulse responses.

5.
We propose a model of dynamic correlations with a short- and long-run component specification, extending the idea of component models for volatility. We call this class of models DCC-MIDAS. The key ingredients are the Engle (2002) DCC model, the Engle and Lee (1999) component GARCH model, which replaces the original DCC dynamics with a component specification, and the Engle et al. (2006) GARCH-MIDAS specification, which allows us to extract a long-run correlation component via mixed data sampling. We provide a comprehensive econometric analysis of the new class of models and present extensive empirical evidence that supports the model's specification.
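Schematically, and in our own notation rather than the paper's, the DCC-MIDAS correlation dynamics combine a short-run quasi-correlation recursion with a MIDAS-filtered long-run component:

```latex
\begin{aligned}
q_{ij,t} &= \bar{\rho}_{ij,t}\,(1 - a - b) + a\,\varepsilon_{i,t-1}\varepsilon_{j,t-1} + b\,q_{ij,t-1}, \\
\rho_{ij,t} &= \frac{q_{ij,t}}{\sqrt{q_{ii,t}\,q_{jj,t}}}, \qquad
\bar{\rho}_{ij,t} = \sum_{k=1}^{K} \varphi_k(\omega)\, c_{ij,t-k},
\end{aligned}
```

where the ε's are volatility-standardized returns, the c's are lagged sample correlations and the φ's are MIDAS lag weights, so that the short-run correlation mean-reverts to the slowly moving long-run component extracted via mixed data sampling.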

6.
In this paper I propose an alternative to calibration of linearized singular dynamic stochastic general equilibrium models. Given an a-theoretical econometric model as a representative of the data generating process, I construct an information measure which compares the conditional distribution of the econometric model variables with the corresponding singular conditional distribution of the theoretical model variables. The singularity problem is solved by using convolutions of both distributions with a non-singular distribution. This information measure is then maximized with respect to the deep parameters of the theoretical model, which links these parameters to the parameters of the econometric model and provides an alternative to calibration. The approach is illustrated by an application to a linearized version of the stochastic growth model of King, Plosser and Rebelo.
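One way to formalize the construction described above (a schematic reading, not necessarily the paper's exact notation): let p_e denote the conditional density implied by the a-theoretical econometric model, p_m(·; θ) the singular conditional density of the theoretical model with deep parameters θ, and φ a non-singular density used for the convolution. Maximizing the information measure then amounts to minimizing the Kullback–Leibler divergence between the two convolved conditional densities:

```latex
\hat{\theta} \;=\; \arg\min_{\theta}\;
\int (p_e * \varphi)(y \mid \mathcal{I}_{t-1})\,
\ln\!\frac{(p_e * \varphi)(y \mid \mathcal{I}_{t-1})}
         {\big(p_m(\cdot\,;\theta) * \varphi\big)(y \mid \mathcal{I}_{t-1})}\; dy .
```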

7.
Testing for structural breaks in dynamic factor models
In this paper we investigate the consequences of structural breaks in the factor loadings for the specification and estimation of factor models based on principal components and suggest procedures for testing for structural breaks. It is shown that structural breaks severely inflate the number of factors identified by the usual information criteria. The hypothesis of a structural break is tested by using LR, LM and Wald statistics. The LM test (which performs best in our Monte Carlo simulations) is generalized to test for structural breaks in factor models where the break date is unknown and the common factors and idiosyncratic components are serially correlated. The proposed test procedures are applied to datasets from the US and the euro area.

8.
General dynamic factor models have demonstrated their capacity to circumvent the curse of dimensionality in the analysis of high-dimensional time series and have been successfully considered in many economic and financial applications. As second-order models, however, they are sensitive to the presence of outliers, an issue that has not been analyzed so far in the general case of dynamic factors with possibly infinite-dimensional factor spaces (Forni et al. 2000, 2015, 2017). In this paper, we consider this robustness issue and study the impact of additive outliers on the identification, estimation, and forecasting performance of general dynamic factor models. Based on our findings, we propose robust versions of the identification, estimation, and forecasting procedures. The finite-sample performance of our methods is evaluated via Monte Carlo experiments, and the methods are successfully applied to a classical data set of 115 US macroeconomic and financial time series.

9.
Factor modelling of a large time series panel has proven widely useful for reducing its cross-sectional dimensionality. This is done by explaining common co-movements in the panel through the existence of a small number of common components, up to some idiosyncratic behaviour of each individual series. To capture serial correlation in the common components, a dynamic structure is used as in traditional (uni- or multivariate) time series analysis of second-order structure, i.e. allowing for infinite-length filtering of the factors via dynamic loadings. In this paper, motivated by economic data observed over long time periods that show smooth transitions in their covariance structure, we allow the dynamic structure of the factor model to be non-stationary over time by proposing a deterministic time variation of its loadings. In this respect we generalize recent work on static factor models with time-varying loadings as well as the classical, i.e. stationary, dynamic approximate factor model. Motivated by the stationary case, we estimate the common components of our dynamic factor model by the eigenvectors of a consistent estimator of the now time-varying spectral density matrix of the underlying data-generating process. This can be seen as a time-varying principal components approach in the frequency domain. We derive consistency of this estimator in a “double-asymptotic” framework in which both the cross-section and time dimensions tend to infinity. The performance of the estimators is illustrated by a simulation study and an application to a macroeconomic data set.

10.
A new empirical reduced-form model for credit rating transitions is introduced. It is a parametric intensity-based duration model with multiple states, driven by exogenous covariates and latent dynamic factors. The model has a generalized semi-Markov structure designed to accommodate many of the stylized facts of credit rating migrations. Parameter estimation is based on Monte Carlo maximum likelihood methods, for which the details are discussed in this paper. A simulation experiment is carried out to show the effectiveness of the estimation procedure. An empirical application is presented for transitions in a 7-grade rating system. The model includes a common dynamic component that can be interpreted as the credit cycle. Asymmetric effects of this cycle across rating grades and additional semi-Markov dynamics are found to be statistically significant. Finally, we investigate whether the common factor model suffices to capture systematic risk in rating transition data by introducing multiple factors in the model.

11.
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We thus propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multi-objective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model. We chose to use a CGE model since it has the dual advantage of being consistent with standard economic theory while allowing one to measure the effects of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that Spanish policy in 1995 displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independent of the importance given to growth and inflation by policy makers (we label this set general policy recommendations). A second set depends on which policy objective is seen as more important by policy makers, increasing growth or controlling inflation (we label this one objective-specific recommendations).

12.
High dimensional factor models can involve thousands of parameters. The Jacobian matrix for identification is of a large dimension. It can be difficult and numerically inaccurate to evaluate the rank of such a Jacobian matrix. We reduce the identification problem to a small rank problem, which is easy to check. The identification conditions allow both linear and nonlinear restrictions. Under reasonable assumptions for high dimensional factor models, the small rank conditions are shown to be necessary and sufficient for local identification.
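The brute-force check that the paper improves upon is to evaluate the rank of the full Jacobian of the restrictions with respect to the parameters, which becomes numerically fragile in high dimensions. A minimal sketch of that naive check, on a toy Jacobian with hypothetical numbers:

```python
import numpy as np

def local_identification_rank(jacobian, tol=1e-8):
    """Numerical rank of the Jacobian of the restrictions with respect to the
    parameters; full column rank suggests local identification."""
    s = np.linalg.svd(jacobian, compute_uv=False)
    rank = int((s > tol * s[0]).sum())
    return rank, rank == jacobian.shape[1]

# toy 6x3 Jacobian (hypothetical numbers, purely for illustration)
J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.5],
              [0.3, 0.0, 1.0],
              [0.1, 0.1, 0.1],
              [0.0, 0.0, 0.0],
              [0.2, 0.4, 0.6]])
print(local_identification_rank(J))
```

The paper's contribution is precisely to replace this potentially huge rank computation with an equivalent small-rank condition that is easy to check.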

13.
We use a bivariate generalized autoregressive conditionally heteroskedastic (GARCH) model of inflation and output growth to examine the causality relationship among nominal uncertainty, real uncertainty and macroeconomic performance as measured by the inflation and output growth rates. The application of the constant conditional correlation GARCH(1,1) model leads to a number of interesting conclusions. First, inflation does cause negative welfare effects, both directly and indirectly, i.e. via the inflation uncertainty channel. Second, in some countries, more inflation uncertainty provides an incentive to central banks to surprise the public by raising inflation unexpectedly. Third, in contrast to the assumptions of some macroeconomic models, business cycle variability and the rate of economic growth are related: more variability in the business cycle leads to more output growth.
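To fix ideas, a constant-conditional-correlation bivariate GARCH(1,1) keeps the correlation fixed while each conditional variance follows its own GARCH recursion. The sketch below filters conditional variances and the implied covariance from given residuals; the parameter values and the simulated residuals are ours, purely for illustration.

```python
import numpy as np

def ccc_garch_filter(eps, omega, alpha, beta, rho):
    """Bivariate constant-conditional-correlation GARCH(1,1) filter:
    h_{i,t} = omega_i + alpha_i*eps_{i,t-1}^2 + beta_i*h_{i,t-1},
    cov_t   = rho * sqrt(h_{1,t} * h_{2,t})."""
    T = eps.shape[0]
    h = np.zeros((T, 2))
    h[0] = eps.var(axis=0)  # initialize at the sample variances
    for t in range(1, T):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    cov = rho * np.sqrt(h[:, 0] * h[:, 1])
    return h, cov

rng = np.random.default_rng(1)
eps = rng.standard_normal((200, 2))  # stand-in for inflation/growth residuals
h, cov = ccc_garch_filter(eps,
                          omega=np.array([0.05, 0.05]),
                          alpha=np.array([0.10, 0.10]),
                          beta=np.array([0.85, 0.85]),
                          rho=0.2)
print(h[-1], cov[-1])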

14.
The concept of causality introduced by Wiener [Wiener, N., 1956. The theory of prediction, In: E.F. Beckenback, ed., The Theory of Prediction, McGraw-Hill, New York (Chapter 8)] and Granger [Granger, C.W.J., 1969. Investigating causal relations by econometric models and cross-spectral methods, Econometrica 37, 424–459] is defined in terms of predictability one period ahead. This concept can be generalized by considering causality at any given horizon h, as well as tests for the corresponding non-causality [Dufour, J.-M., Renault, E., 1998. Short-run and long-run causality in time series: Theory. Econometrica 66, 1099–1125; Dufour, J.-M., Pelletier, D., Renault, É., 2006. Short run and long run causality in time series: Inference, Journal of Econometrics 132 (2), 337–362]. Instead of tests for non-causality at a given horizon, we study the problem of measuring causality between two vector processes. Existing causality measures have been defined only for horizon 1, and they fail to capture indirect causality. We propose generalizations to any horizon h of the measures introduced by Geweke [Geweke, J., 1982. Measurement of linear dependence and feedback between multiple time series. Journal of the American Statistical Association 77, 304–313]. Nonparametric and parametric measures of unidirectional causality and instantaneous effects are considered. Noting that the causality measures typically involve complex functions of model parameters in VAR and VARMA models, we propose a simple simulation-based method to evaluate these measures for any VARMA model. We also describe asymptotically valid nonparametric confidence intervals based on a bootstrap technique. Finally, the proposed measures are applied to study causality relations at different horizons between macroeconomic, monetary and financial variables in the US.
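Schematically, Geweke's horizon-1 measure of causality from X to Y compares forecast-error covariance matrices with and without the history of X, and the horizon-h generalization discussed above replaces the one-step-ahead forecast by the h-step-ahead one (our notation; the paper's definitions may differ in details):

```latex
C\!\left(X \rightarrow_{h} Y\right)
\;=\;
\ln \frac{\det \Sigma\!\left[\,y_{t+h} - \mathrm{P}\!\left(y_{t+h} \mid \mathcal{I}_{t}^{\,Y,Z}\right)\right]}
         {\det \Sigma\!\left[\,y_{t+h} - \mathrm{P}\!\left(y_{t+h} \mid \mathcal{I}_{t}^{\,X,Y,Z}\right)\right]},
```

where P(· | I) denotes the best linear forecast based on the information set I (possibly including auxiliary variables Z). The measure is zero exactly when the history of X does not improve the h-step-ahead forecast of Y.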

15.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test obtained from the estimation of the system of Euler equations of the LRE model through the generalized method of moments is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence the non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of auxiliary parameters that appear under indeterminacy and are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.

16.
We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (Econometrica 66, (1998) 1099–1125). We study in detail the case of VAR models and propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the US economy.
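A simple way to picture the "linear methods based on running vector autoregressions at different horizons" is a direct regression of y at t+h on current and past values of y and x, followed by a Wald test that the x coefficients are jointly zero. The sketch below is our illustration, with simulated series and our own lag and horizon choices, not the paper's exact procedure; HAC standard errors are used because h-step-ahead errors overlap.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def horizon_noncausality_test(y, x, h=1, p=4):
    """Regress y_{t+h} on p lags of y and of x, then Wald-test the x coefficients."""
    df = pd.DataFrame({"y": y, "x": x})
    cols = {"y_lead": df["y"].shift(-h)}
    for k in range(1, p + 1):
        cols[f"y_lag{k}"] = df["y"].shift(k - 1)  # y_t, y_{t-1}, ...
        cols[f"x_lag{k}"] = df["x"].shift(k - 1)  # x_t, x_{t-1}, ...
    data = pd.concat(cols, axis=1).dropna()
    X = sm.add_constant(data.drop(columns="y_lead"))
    res = sm.OLS(data["y_lead"], X).fit(cov_type="HAC", cov_kwds={"maxlags": h})
    constraints = ", ".join(f"x_lag{k} = 0" for k in range(1, p + 1))
    return res.wald_test(constraints, use_f=True)

# toy data in which x leads y by one period, so causality should show up at h = 1
rng = np.random.default_rng(2)
x = rng.standard_normal(300)
y = np.empty(300)
y[0] = rng.standard_normal()
y[1:] = 0.5 * x[:-1] + rng.standard_normal(299)
print(horizon_noncausality_test(y, x, h=1, p=4))
```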

17.
18.
We develop an omnibus specification test for multivariate continuous-time models using the conditional characteristic function, which often has a convenient closed form or can be accurately approximated for many multivariate continuous-time models in finance and economics. The proposed test fully exploits the information in the joint conditional distribution of the underlying economic processes and hence is expected to have good power in a multivariate context. A class of easy-to-interpret diagnostic procedures is supplemented to gauge possible sources of model misspecification. Our tests are also applicable to discrete-time distribution models. Simulation studies show that the tests provide reliable inference in finite samples.
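In schematic terms (our notation), the conditional characteristic function of the state process X under a parametric model is φ_t(u; θ) = E_θ[exp(i u′X_{t+Δ}) | I_t], and correct specification implies a martingale-difference property for the complex-valued "generalized residual":

```latex
Z_{t+\Delta}(u) \;=\; e^{\mathrm{i}\,u^{\top} X_{t+\Delta}} - \phi_t(u;\theta_0),
\qquad
\mathbb{E}\!\left[\, Z_{t+\Delta}(u) \mid \mathcal{I}_t \,\right] = 0
\;\;\text{for all } u,
```

which is the kind of moment condition that characteristic-function-based omnibus specification tests exploit.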

19.
We consider the problem of derivative pricing when the stochastic discount factors are exponential-affine functions of the underlying state variables. In particular, we discuss the conditionally Gaussian framework and introduce semi-parametric pricing methods for models with path-dependent drift and volatility. This approach is also applied to more complicated frameworks, such as pricing of a derivative written on an index when the interest rate is stochastic.
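For concreteness (our notation, standard in this literature): with an exponential-affine stochastic discount factor, the date-t price of a payoff g(X_{t+1}) is

```latex
M_{t,t+1} = \exp\!\left(\alpha_t + \beta_t^{\top} X_{t+1}\right),
\qquad
P_t = \mathbb{E}\!\left[\, M_{t,t+1}\, g(X_{t+1}) \mid \mathcal{I}_t \,\right],
```

and in a conditionally Gaussian framework, X_{t+1} | I_t ~ N(μ_t, Σ_t), the Gaussian Laplace transform E[exp(b′X_{t+1}) | I_t] = exp(b′μ_t + ½ b′Σ_t b) delivers closed-form prices for exponential payoffs such as zero-coupon bonds.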

20.
During the past two decades, innovations protected by patents have played a key role in business strategies. This fact has stimulated studies of the determinants of patents and of the impact of patents on innovation and competitive advantage. Sustaining competitive advantages is as important as creating them. Patents help sustain competitive advantages by increasing the production cost of competitors, by signaling a better quality of products and by serving as barriers to entry. If patents are rewards for innovation, more R&D should be reflected in more patent applications, but this is not the end of the story. There is empirical evidence showing that patents have become easier to obtain over time and more valuable to the firm due to increasing damage awards from infringers. These facts call into question the constant and static nature of the relationship between R&D and patents. Furthermore, innovation creates important knowledge spillovers due to its imperfect appropriability. Our paper investigates these dynamic effects using US patent data from 1979 to 2000 with alternative model specifications for patent counts. We introduce a general dynamic count panel data model with dynamic observable and unobservable spillovers, which encompasses previous models, is able to control for the endogeneity of R&D and therefore can be consistently estimated by maximum likelihood. Apart from allowing for firm-specific fixed and random effects, we introduce a common unobserved component, or secret stock of knowledge, that affects each firm's propensity to patent differently across sectors due to their different absorptive capacities.
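As a stripped-down illustration of a count-data specification for patents (not the authors' model, which adds dynamic spillovers, firm effects and a common latent component), the sketch below fits a pooled Poisson regression of simulated patent counts on log R&D and lagged patents; all variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500                                   # simulated firm-year observations
log_rd = rng.normal(2.0, 0.5, size=n)     # hypothetical log R&D expenditure
lag_patents = rng.poisson(3, size=n)      # hypothetical lagged patent count
patents = rng.poisson(np.exp(0.2 + 0.6 * log_rd + 0.05 * lag_patents))

# pooled Poisson regression: E[patents | x] = exp(x'beta)
X = sm.add_constant(pd.DataFrame({"log_rd": log_rd, "lag_patents": lag_patents}))
res = sm.Poisson(patents, X).fit(disp=0)
print(res.params)
```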
