Similar Literature
20 similar documents found.
1.
In order to perform real-time business cycle inferences and forecasts of GDP growth rates in the euro area, we use an extension of the Markov-switching dynamic factor models that accounts for the features of the day-to-day monitoring of economic developments, such as ragged edges, mixed frequencies and data revisions. We provide examples that show the nonlinear nature of the relationships between data revisions, point forecasts and forecast uncertainty. Based on our empirical results, we think that the real-time probabilities of recession inferred from the model are an appropriate statistic for capturing what the press call 'green shoots', and for monitoring double-dip recessions.
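The regime-inference step at the heart of such models is the Hamilton filter. Below is a minimal sketch, assuming a two-state Gaussian switching mean with made-up parameters (the transition matrix `P`, the state means and the simulated growth series are all hypothetical); the model in the paper additionally handles dynamic factors, mixed frequencies, ragged edges and data revisions.

```python
# A minimal sketch of the Hamilton filter behind Markov-switching models,
# with assumed parameters; the paper's model adds dynamic factors,
# mixed frequencies, ragged edges and data revisions.
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, P, mu, sigma):
    """Filtered regime probabilities for a Gaussian switching-mean model."""
    n_states = len(mu)
    # start from the ergodic (stationary) distribution of the chain
    A = np.vstack([np.eye(n_states) - P.T, np.ones(n_states)])
    b = np.append(np.zeros(n_states), 1.0)
    prob = np.linalg.lstsq(A, b, rcond=None)[0]
    filtered = np.zeros((len(y), n_states))
    for t, y_t in enumerate(y):
        pred = P.T @ prob                        # one-step-ahead state probabilities
        joint = pred * norm.pdf(y_t, mu, sigma)  # times state-conditional densities
        prob = joint / joint.sum()               # Bayes update
        filtered[t] = prob
    return filtered

# hypothetical quarterly GDP growth: expansion (state 0), recession (state 1)
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.5, 0.3, 30), rng.normal(-0.8, 0.4, 6)])
P = np.array([[0.95, 0.05],                      # assumed transition matrix
              [0.20, 0.80]])
probs = hamilton_filter(y, P, mu=np.array([0.5, -0.8]), sigma=np.array([0.3, 0.4]))
print("recession probability at the end of the sample:", probs[-1, 1])
```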

2.
The proposed panel Markov-switching VAR model accommodates changes in low and high data frequencies and incorporates endogenous time-varying transition matrices of country-specific Markov chains, allowing for interconnections. An efficient multi-move sampling algorithm draws the time-varying Markov-switching chains. Using industrial production growth and credit spread data, we document several important features. Three regimes appear, with slow growth becoming persistent in the eurozone. Turning point analysis indicates that the USA leads the eurozone cycle. Amplification effects influence recession probabilities for eurozone countries. A credit shock results in temporary negative industrial production growth in Germany, Spain and the USA. A core-periphery structure emerges within the eurozone.

3.
This paper examines global recessions as a cascade phenomenon: how recessions arising within one or more countries might percolate across a network of connected economies. An agent-based model is set up in which the agents are Western economies. A country has a probability of entering recession in any given year and one of emerging from it the next. In addition, the agents have a threshold propensity, which varies across time, to import a recession from the agents most closely connected to them. The agents are connected on a network, and an agent's neighbours at any time are either in (state 1) or out (state 0) of recession. If the weighted sum of neighbours in recession exceeds the threshold, the agent also goes into recession. Annual real GDP growth for 17 Western countries over 1871–2006 is used as the data set. The model is able to replicate three key features of the statistical distribution of recessions: the distribution of the number of countries in recession in any given year, the duration of recessions within the individual countries, and the distribution of 'wait time' between recessions, i.e., the number of years between them. The network structure is important for the interacting agents to replicate these stylised facts; the country-specific probabilities of entering and emerging from recession by themselves give results that are by no means as well matched to the actual data.
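The threshold-contagion mechanism described above can be sketched in a few lines. The following toy version uses assumed parameters (`p_enter`, `p_exit`, the random weight matrix `W` and the uniform thresholds are illustrative choices, not the paper's calibration to the 17-country data set).

```python
# A toy version of the recession-cascade model: each economy can enter a
# recession on its own or import one from its network neighbours whenever the
# weighted share of neighbours in recession exceeds a time-varying threshold.
import numpy as np

rng = np.random.default_rng(1)
n, years = 17, 200
p_enter, p_exit = 0.2, 0.5               # assumed own-dynamics probabilities
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)        # row-normalised connection weights

state = np.zeros(n, dtype=int)           # 1 = in recession
history = []
for _ in range(years):
    own = np.where(state == 1,
                   rng.random(n) > p_exit,   # stay in recession
                   rng.random(n) < p_enter)  # enter on own account
    theta = rng.random(n)                    # time-varying import threshold
    imported = W @ state > theta             # contagion from neighbours
    state = (own | imported).astype(int)
    history.append(state.sum())
print("average number of countries in recession:", np.mean(history))
```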

4.
This paper incorporates vintage differences and forecasts into the Markov switching models described by Hamilton (1994). The vintage differences and forecasts induce parameter breaks close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. A supplementary procedure estimates the statistical properties of the end-of-sample observations that behave differently from the rest, allowing inferred probabilities to reflect the breaks. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession.  相似文献   

5.
A Bayesian hierarchical mixed model is developed for multiple comparisons under a simple order restriction. The model facilitates inferences on the successive differences of the population means, for which we choose independent prior distributions that are mixtures of an exponential distribution and a discrete distribution with its entire mass at zero. We employ Markov Chain Monte Carlo (MCMC) techniques to obtain parameter estimates and estimates of the posterior probabilities that any two of the means are equal. The latter estimates allow one both to determine if any two means are significantly different and to test the homogeneity of all of the means. We investigate the performance of the model-based inferences with simulated data sets, focusing on parameter estimation and successive-mean comparisons using posterior probabilities. We then illustrate the utility of the model in an application based on data from a study designed to reduce lead blood concentrations in children with elevated levels. Our results show that the proposed hierarchical model can effectively unify parameter estimation, tests of hypotheses and multiple comparisons in one setting.
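The distinctive ingredient is the mixture prior on successive mean differences: a point mass at zero mixed with an exponential. A minimal sketch of drawing from that prior, with assumed values for the mixing weight `pi0` and exponential `rate` (both hypothetical), shows how it produces ordered means and a positive probability that adjacent means coincide.

```python
# Drawing ordered population means from this style of prior: successive
# differences are a mixture of a point mass at zero (prob. pi0) and an
# exponential, so adjacent means are equal with positive prior probability.
import numpy as np

rng = np.random.default_rng(2)
k, pi0, rate = 5, 0.4, 2.0               # assumed: 5 means, P(diff = 0) = 0.4

def draw_means(mu1=0.0):
    zero = rng.random(k - 1) < pi0
    diffs = np.where(zero, 0.0, rng.exponential(1.0 / rate, k - 1))
    return mu1 + np.concatenate([[0.0], np.cumsum(diffs)])

samples = np.array([draw_means() for _ in range(10_000)])
# Monte Carlo estimate of the prior probability that means 2 and 3 coincide
print(np.mean(samples[:, 1] == samples[:, 2]))   # close to pi0 = 0.4
```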

6.
This paper presents a model of choice with limited attention. The decision-maker forms a consideration set, from which she chooses her most preferred alternative. Both preferences and consideration sets are stochastic. While we present axiomatisations for this model, our focus is on the following identification question: to what extent can an observer retrieve probabilities of preferences and consideration sets from observed choices? Our first conclusion is a negative one: if the observed data are choice probabilities, then the probabilities of preferences and consideration sets cannot be retrieved from them. We solve the identification problem by assuming that an "enriched" dataset is observed, which includes choice probabilities under two frames. Given this dataset, the model is "fully identified", in the sense that we can recover from observed choices (i) the probabilities of preferences (to the same extent as in models with full attention) and (ii) the probabilities of consideration sets. While a number of recent papers have developed models of limited attention that are, in a similar sense, "fully identified", they obtain this result not by using an enriched dataset but by making a restrictive assumption about the default option, which our paper avoids.

7.
We estimate a Markov-switching dynamic factor model with three states based on six leading business cycle indicators for Germany, preselected from a broader set using the elastic net soft-thresholding rule. The three states represent expansions, normal recessions and severe recessions. We show that a two-state model is not sensitive enough to detect relatively mild recessions reliably when the Great Recession of 2008/2009 is included in the sample. Adding a third state helps to distinguish normal and severe recessions clearly, so that the model identifies all business cycle turning points in our sample reliably. In a real-time exercise, the model detects recessions in a timely manner. Combining the estimated factor and the recession probabilities with a simple GDP forecasting model yields an accurate nowcast for the steepest decline in GDP in 2009Q1, and a correct prediction of the timing of the Great Recession and its recovery one quarter in advance.
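The preselection step can be illustrated with an off-the-shelf elastic net. The sketch below uses scikit-learn on simulated data (the 30 candidate indicators and their coefficients are invented, not the German series used in the paper).

```python
# Elastic net preselection of leading indicators on simulated data: only a few
# of the 30 candidates truly matter, and the soft-thresholding rule keeps them.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))                   # 30 candidate indicators
beta = np.zeros(30)
beta[:6] = [0.8, -0.6, 0.5, 0.4, -0.3, 0.3]      # hypothetical true effects
y = X @ beta + rng.normal(scale=0.5, size=200)   # stand-in for GDP growth

enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
print("indicators surviving the soft threshold:", np.flatnonzero(enet.coef_))
```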

8.
In this study we focus attention on model selection in the presence of panel data. Our approach is eclectic in that it combines both classical and Bayesian techniques. It is also novel in that we address not only model selection, but also model occurrence, i.e., the process by which 'nature' chooses a statistical framework in which to generate the data of interest. For a given data subset, there exist competing models, each of which has an ex ante positive probability of being the correct model, but for any one generated sample, ex post exactly one such model is the basis for the observed data set. Attention focuses on how the underlying model occurrence probabilities of the competing models depend on characteristics of the environments in which the data subsets are generated. Classical, Bayesian, and mixed estimation approaches are developed. Bayesian approaches are shown to be especially attractive whenever the models are nested.

9.
A financial conditions index (FCI) aggregates information from multiple financial variables and therefore reflects the overall state of financial markets more comprehensively and intuitively than any single indicator. Using a dynamic factor model that works directly with mixed-frequency data, we compute a real-time FCI for China, removing the traditional requirement that all inputs share the same frequency. This greatly improves the timeliness of the FCI and allows it to function as a liquidity indicator for financial markets. Empirical results show that the improved FCI tracks the stance of China's monetary policy in recent years, leads year-on-year monthly CPI by seven months, and forecasts inflation better than any of its constituent variables.

10.
We consider different methods for combining probability forecasts. In empirical exercises, the data generating process of the forecasts and the event being forecast is not known, and therefore the optimal form of combination will also be unknown. We consider the properties of various combination schemes for a number of plausible data generating processes, and indicate which types of combinations are likely to be useful. We also show that whether forecast encompassing is found to hold between two rival sets of forecasts or not may depend on the type of combination adopted. The relative performances of the different combination methods are illustrated, with an application to predicting recession probabilities using leading indicators.
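Two of the plausible schemes, a linear opinion pool and a log-odds (logit-space) pool, can be compared on simulated forecasts. This sketch only illustrates the mechanics under an assumed data generating process; the hypothetical forecasters `f1` and `f2` observe the true probability with noise.

```python
# Comparing a linear opinion pool with a log-odds pool on simulated
# probability forecasts, scored by the Brier score (lower is better).
import numpy as np

rng = np.random.default_rng(4)
true_p = rng.random(500)
y = (rng.random(500) < true_p).astype(float)     # realised binary outcomes
noisy = lambda: np.clip(true_p + rng.normal(0, 0.15, 500), 0.01, 0.99)
f1, f2 = noisy(), noisy()                        # two hypothetical forecasters

logit = lambda p: np.log(p / (1 - p))
linear_pool = 0.5 * (f1 + f2)
log_odds_pool = 1.0 / (1.0 + np.exp(-0.5 * (logit(f1) + logit(f2))))

brier = lambda f: np.mean((f - y) ** 2)
for name, f in [("f1", f1), ("f2", f2),
                ("linear", linear_pool), ("log-odds", log_odds_pool)]:
    print(f"{name:9s} Brier score: {brier(f):.4f}")
```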

11.
In this paper we consider the issue of unit root testing in cross-sectionally dependent panels. We consider panels that may be characterized by various forms of cross-sectional dependence including (but not exclusive to) the popular common factor framework. We consider block bootstrap versions of the group-mean (Im et al., 2003) and the pooled (Levin et al., 2002) unit root coefficient DF tests for panel data, originally proposed for a setting of no cross-sectional dependence beyond a common time effect. The tests, suited for testing for unit roots in the observed data, can be easily implemented as no specification or estimation of the dependence structure is required. Asymptotic properties of the tests are derived for T going to infinity and N finite. Asymptotic validity of the bootstrap tests is established in very general settings, including the presence of common factors and cointegration across units. Properties under the alternative hypothesis are also considered. In a Monte Carlo simulation, the bootstrap tests are found to have rejection frequencies that are much closer to nominal size than the rejection frequencies for the corresponding asymptotic tests. The power properties of the bootstrap tests appear to be similar to those of the asymptotic tests.
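The core resampling idea can be sketched as a moving-block bootstrap applied jointly across units, so that cross-sectional dependence is preserved without being modelled. The block length and simulated panel below are assumptions, and the sketch omits the test statistics themselves.

```python
# Moving-block bootstrap of a panel: whole time blocks are resampled jointly
# across the N units, so cross-sectional dependence is preserved without
# having to specify or estimate it.
import numpy as np

def block_bootstrap(panel, block_len, rng):
    """panel: T x N array; returns a resampled T x N array."""
    T = panel.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    rows = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
    return panel[rows]                    # cross-sections stay glued together

rng = np.random.default_rng(5)
levels = np.cumsum(rng.normal(size=(200, 10)), axis=0)  # 10 unit-root series
diffs = np.diff(levels, axis=0)
resampled = block_bootstrap(diffs, block_len=12, rng=rng)
print(resampled.shape)                    # (199, 10), ready to re-cumulate
```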

12.
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and that it is able to detect parameter instability when the model set includes the true model that has generated subsamples of data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty is reduced when the model set is complete; and learning reduces this uncertainty. For the macro series we find that incompleteness of the models is relatively large in the 1970s, at the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession align closely with the NBER business cycle dating; and the model weights have substantial uncertainty attached. With respect to returns of the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model at the beginning of the 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution, and not just on some of its moments, turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.

13.
This paper empirically studies a predictive model of business failure using a sample of listed companies that went bankrupt during 1997–1998, when the deep recession driven by the IMF crisis began in Korea. The logit maximum likelihood estimator is employed as the statistical technique. The model demonstrates decent prediction accuracy and robustness: the Type I accuracy is 80.4 per cent and the Type II accuracy is 73.9 per cent, and the accuracy remains almost at the same level when the model is applied to an independent holdout sample. In addition to building a bankruptcy prediction model, this paper finds that most of the firms that went bankrupt during the Korean economic crisis of 1997–1998 had shown signs of financial distress long before the crisis: bankruptcy probabilities of the sample are consistently high during the period from 1991 to 1996. The evidence of this paper can be seen as complementary to the perspective that traces the Asian economic crisis to the vulnerabilities of corporate governance in Asian countries.
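A logit failure model of this kind is straightforward to set up. The sketch below uses simulated financial ratios (the predictors `leverage` and `profit` and all coefficients are hypothetical stand-ins for the paper's Korean accounting data) and reports Type I and Type II accuracy in the paper's sense.

```python
# A logit bankruptcy-prediction sketch on simulated financial ratios; the
# predictors and coefficients are hypothetical, not the paper's Korean data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
leverage = rng.normal(0.5, 0.2, n)               # hypothetical debt ratio
profit = rng.normal(0.05, 0.10, n)               # hypothetical return on assets
latent = -2.0 + 4.0 * leverage - 8.0 * profit
bankrupt = (rng.random(n) < 1.0 / (1.0 + np.exp(-latent))).astype(float)

X = sm.add_constant(np.column_stack([leverage, profit]))
fit = sm.Logit(bankrupt, X).fit(disp=0)
flagged = fit.predict(X) > 0.5
# Type I accuracy: share of bankrupt firms flagged; Type II: survivors cleared
print("Type I:", flagged[bankrupt == 1].mean(),
      "Type II:", (~flagged[bankrupt == 0]).mean())
```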

14.
Panel and life-course data are ideally suited to unravelling labour market dynamics, but their designs differ, with potential consequences for the estimated relationships. To gauge the extent to which these two data designs produce dissimilar transition rates, and why, we use the German Life History Study and the German Socio-Economic Panel. Life-course data in particular suffer from recall effects due to memory bias, causing understated transition probabilities. Panel data suffer from seam effects due to spurious transitions between statuses recalled in activity calendars, which generate heaps at particular time points and cause overstated transition probabilities. We combine the two datasets and estimate multilevel (multistate) discrete-time models for event history data to model transitions between labour market states, taking these factors into account. Though we find much lower transition rates in the life-course study, confirming the results of Solga (Qual Quant 35:291–309, 2001) in this journal for East Germany, part of the difference can be explained by recall bias for short spells. The models of exit, re-entry and job mobility estimated on the combined datasets do indeed show a negative retrospective design effect. Another specification that includes the length of the recall period shows no significant decrease in the transition probabilities with increasing length, suggesting that the negative design effect is due to other design differences.

15.
A common problem in data analysis occurs when one has many models to compare to a single data set, or just a few. For example, a researcher may conduct an experiment in which subjects respond by choosing one category from a small set of categories; the data set then consists of the frequencies with which the categories occur. Many substantive models may yield predictions of these frequencies, so the researcher is faced with the problem of comparing the data to many a priori equally attractive theoretical predictions. This paper proposes a method for the simultaneous study of the predictions and data. The method improves on the standard approach to judging goodness-of-fit by treating the predictions as rows in a two-way (or higher-way) contingency table. Log-linear models for the probabilities that subjects respond in specific ways are used to determine how the predictions compare to the data and to rank the predictions in terms of their accuracy.

16.
We demonstrate the use of a Naïve Bayes model as a recession forecasting tool. The approach is closely connected with Markov-switching models and logistic regression, but also has important differences. In contrast to Markov-switching models, our Naïve Bayes model treats National Bureau of Economic Research business cycle turning points as data, rather than as hidden states to be inferred by the model. Although Naïve Bayes and logistic regression are asymptotically equivalent under certain distributional assumptions, the assumptions do not hold for business cycle data. As a result, Naïve Bayes has a larger asymptotic error rate, but converges to the error rate more quickly than logistic regression, resulting in more accurate recession forecasts with limited data. We show that Naïve Bayes outperforms competing models and the Survey of Professional Forecasters consistently for real-time recession forecasting up to 12 months in advance. These results hold under standard error measures, and also under a novel measure that varies the penalty on false signals, depending on when they occur within a cycle; for example, a false signal in the middle of an expansion is penalized more heavily than one that occurs close to a turning point.
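Because the NBER turning points are treated as observed labels, the approach reduces to standard supervised Naïve Bayes. A minimal sketch on simulated indicators (the two predictors and their class-conditional distributions are assumptions, not the paper's real-time data set):

```python
# Naïve Bayes recession forecasting with NBER labels treated as observed data;
# the two indicators and their class-conditional means are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)
labels = (rng.random(300) < 0.15).astype(int)           # 1 = NBER recession month
X = np.column_stack([
    rng.normal(np.where(labels == 1, -2.0, 1.5), 1.0),  # e.g. payroll growth
    rng.normal(np.where(labels == 1, 1.0, -0.5), 1.0),  # e.g. spread signal
])
model = GaussianNB().fit(X[:250], labels[:250])         # train on early sample
print(model.predict_proba(X[250:])[:5, 1])              # recession probabilities
```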

17.
This paper studies the role of the Federal Reserve's policy in the recent boom and bust of the housing market, and in the ensuing recession. By estimating a structural dynamic factor model on a panel of 109 US quarterly variables from 1982 to 2010, we find that, although the Federal Reserve's policy between 2002 and 2004 was slightly expansionary, its contribution to the recent housing cycle was negligible. We also show that a more restrictive policy would have smoothed the cycle but not prevented the recession. We thus find no role for the Federal Reserve in causing the recession.

18.
Several empirical studies have documented that the signs of excess stock returns are, to some extent, predictable. In this paper, we consider the predictive ability of the binary dependent dynamic probit model in predicting the direction of monthly excess stock returns. The recession forecast obtained from the model for a binary recession indicator appears to be the most useful predictive variable, and once it is employed, the sign of the excess return is predictable in-sample. The new dynamic "error correction" probit model proposed in the paper yields better out-of-sample sign forecasts, with the resulting average trading returns being higher than those of either the buy-and-hold strategy or trading rules based on ARMAX models.
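A simplified stand-in for this setup is a probit with a lagged recession indicator as predictor; the sketch below uses statsmodels' static Probit rather than the paper's dynamic "error correction" specification, and the recession indicator and returns are simulated.

```python
# A probit for the sign of next month's excess return using a recession
# indicator as the predictive variable (a static stand-in for the dynamic
# "error correction" probit); all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 400
recession = (rng.random(T) < 0.2).astype(float)          # hypothetical indicator
# sign of the excess return in t+1 depends on the recession state in t
sign_pos = (0.3 - 1.2 * recession[:-1] + rng.normal(size=T - 1) > 0).astype(float)

X = sm.add_constant(recession[:-1])
fit = sm.Probit(sign_pos, X).fit(disp=0)
print(fit.params)                 # negative slope: recessions predict down-moves
print(fit.predict(X)[:5])         # probability of a positive excess return
```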

19.
Markov chain Monte Carlo methods are frequently used in the analysis of genetic data on pedigrees to estimate probabilities and likelihoods which cannot be calculated by existing exact methods. In the case of discrete data, the underlying Markov chain may be reducible, and care must be taken to ensure that reliable estimates are obtained. Potential reducibility thus has implications for the analysis of the mixed inheritance model, for example, where genetic variation is assumed to be due to one single locus of large effect and many loci each with a small effect. Similarly, reducibility arises in the detection of quantitative trait loci from incomplete discrete marker data. This paper describes the estimation problem in terms of simple discrete genetic models and the single-site Gibbs sampler. Reducibility of the Gibbs sampler is discussed and some current methods for circumventing the problem are outlined.
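Reducibility is easy to exhibit on a toy target: if all probability mass sits on the configurations (0, 0) and (1, 1), a single-site Gibbs sampler must pass through a zero-probability state to move between them, so it never leaves its starting configuration. A minimal sketch (the target distribution is invented for illustration):

```python
# The target puts all mass on (0, 0) and (1, 1): a single-site Gibbs sampler
# that updates one coordinate at a time can never reach (1, 1) from (0, 0),
# because every intermediate state has probability zero.
import numpy as np

prob = {(0, 0): 0.5, (1, 1): 0.5}           # all other states have mass zero
rng = np.random.default_rng(9)
state = [0, 0]
visited = set()
for _ in range(1000):
    i = int(rng.integers(2))                 # pick a site to update
    weights = np.array([prob.get(tuple(state[:i] + [v] + state[i + 1:]), 0.0)
                        for v in (0, 1)])
    state[i] = int(rng.choice([0, 1], p=weights / weights.sum()))
    visited.add(tuple(state))
print(visited)                               # only {(0, 0)}: the chain is stuck
```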

20.
For a large heterogeneous group of patients, we analyse probabilities of hospital admission and distributional properties of lengths of hospital stay conditional on individual determinants. Bayesian structured additive regression models for zero-inflated and overdispersed count data are employed. In addition, the framework is extended towards hurdle specifications, providing an alternative approach to cover particularly large frequencies of zeros in count data. As a specific merit, the model class considered embeds linear and nonlinear effects of covariates on all distribution parameters. Linear effects indicate that the quantity and severity of prior illness are positively correlated with the risk of hospital admission, while medical prevention (in the form of general practice visits) and rehabilitation reduce the expected length of future hospital stays. Flexible nonlinear response patterns are diagnosed for age and an indicator of a patient's socioeconomic status. We find that social deprivation exhibits a positive impact on the risk of admission and a negative effect on the expected length of future hospital stays of admitted patients.
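A minimal frequentist sketch of the zero-inflated count part is possible with statsmodels; the covariate, the share of structural zeros and the dispersion below are assumptions, and the paper's Bayesian structured additive framework additionally allows nonlinear covariate effects.

```python
# A zero-inflated negative binomial model for hospital-stay counts on
# simulated data; covariate effects, zero-inflation share and dispersion
# are all assumptions made for the sketch.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(10)
n = 1000
age = rng.uniform(20, 90, n)
x = sm.add_constant((age - age.mean()) / age.std())  # standardised age
never_admitted = rng.random(n) < 0.6                 # structural zeros
stays = np.where(never_admitted, 0, rng.negative_binomial(2, 0.4, n))

model = ZeroInflatedNegativeBinomialP(stays, x, exog_infl=x)
fit = model.fit(disp=0, maxiter=500)
print(fit.params)    # inflation coefficients, count coefficients, alpha
```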
