Similar Documents
20 similar documents found.
1.
Recently De Luca and Carfora (Statistica e Applicazioni 8:123–134, 2010) have proposed a novel model for binary time series, the Binomial Heterogeneous Autoregressive (BHAR) model, successfully applied to the analysis of the quarterly binary time series of U.S. recessions. In this work we assess the out-of-sample forecast accuracy of the BHAR model relative to the probit models of Kauppi and Saikkonen (Rev Econ Stat 90:777–791, 2008). Given the substantially equivalent predictive accuracy of the BHAR and probit models, we analyze a combination of forecasts using the method proposed by Bates and Granger (Oper Res Q 20:451–468, 1969) for probability forecasts. We show that the forecasts obtained by combining the BHAR model with each of the probit models are superior to the forecasts obtained from each single model.
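
As an illustration of the Bates–Granger step, the sketch below combines two probability forecast series with weights inversely proportional to each model's historical Brier (squared-error) loss; the arrays p_bhar, p_probit and y are invented placeholders, not results from the paper.

```python
import numpy as np

def bates_granger_combine(p1, p2, y):
    """Combine two probability forecasts with weights inversely
    proportional to each model's historical squared-error (Brier) loss."""
    mse1 = np.mean((p1 - y) ** 2)
    mse2 = np.mean((p2 - y) ** 2)
    w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)   # Bates-Granger weight on model 1
    return w1 * p1 + (1 - w1) * p2

# Hypothetical out-of-sample recession probabilities and realizations
p_bhar   = np.array([0.10, 0.35, 0.70, 0.80, 0.20])
p_probit = np.array([0.15, 0.30, 0.60, 0.90, 0.25])
y        = np.array([0,    0,    1,    1,    0])

p_comb = bates_granger_combine(p_bhar, p_probit, y)
print("combined Brier score:", np.mean((p_comb - y) ** 2))
```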

2.
This article uses a small set of variables – real GDP, the inflation rate and the short-term interest rate – and a rich set of models – atheoretical (time series) and theoretical (structural), linear and nonlinear, as well as classical and Bayesian models – to consider whether we could have predicted the recent downturn of the US real GDP. Comparing the performance of the models to the benchmark random-walk model by root mean-square errors, the two structural (theoretical) models, especially the nonlinear model, perform well on average across all forecast horizons in our ex post, out-of-sample forecasts, although at specific forecast horizons certain nonlinear atheoretical models perform the best. The nonlinear theoretical model also dominates in our ex ante, out-of-sample forecast of the Great Recession, suggesting that developing forward-looking, microfounded, nonlinear, dynamic stochastic general equilibrium models of the economy may prove crucial in forecasting turning points.

3.
We employ a 10-variable dynamic stochastic general equilibrium (DSGE) model to forecast the US real house price index as well as its downturn in 2006:Q2. We also examine various Bayesian and classical time-series models in our forecasting exercise to compare to the DSGE model, estimated using Bayesian methods. In addition to standard vector autoregressive and Bayesian vector autoregressive models, we also include the information content of either 10 or 120 quarterly series in some models to capture the influence of fundamentals. We consider two approaches for including information from large data sets — extracting common factors (principal components) in factor-augmented vector autoregressive or Bayesian factor-augmented vector autoregressive models, as well as Bayesian shrinkage in a large-scale Bayesian vector autoregressive model. We compare the out-of-sample forecast performance of the alternative models, using the average root mean squared error (RMSE) of the forecasts. We find that the small-scale Bayesian-shrinkage model (10 variables) outperforms the other models, including the large-scale Bayesian-shrinkage model (120 variables). In addition, when we use simple average forecast combinations, the combination forecast using the 10 best atheoretical models produces the minimum RMSEs compared to each of the individual models, followed closely by the combination forecast using the 10 atheoretical models and the DSGE model. Finally, we use each model to forecast the downturn point in 2006:Q2, using the estimated model through 2005:Q2. Only the DSGE model actually forecasts a downturn with any accuracy, suggesting that forward-looking, microfounded DSGE models of the housing market may prove crucial in forecasting turning points.
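
To illustrate the factor-extraction step, the sketch below pulls principal components out of a standardized panel via the singular value decomposition; the panel dimensions and data are invented, and the paper's estimation details (Bayesian shrinkage, FAVAR dynamics) are not reproduced.

```python
import numpy as np

def principal_components(X, n_factors):
    """Extract principal-component factors from a T x N standardized panel."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each series
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    return U[:, :n_factors] * S[:n_factors]          # T x n_factors common factors

# Hypothetical panel: 120 quarterly observations on 120 series
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 120))
F = principal_components(X, n_factors=3)
print(F.shape)   # (120, 3) -> factors that would augment a small VAR (FAVAR)
```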

4.
This paper extends probit recession forecasting models by incorporating various recession risk factors and using advanced dynamic probit modeling approaches. The proposed risk factors include financial market expectations of a gloomy economic outlook, credit or liquidity risks in the general economy, the risks of negative wealth effects resulting from the bursting of asset price bubbles, and signs of deteriorating macroeconomic fundamentals. The model specifications include three different dynamic probit models and the standard static model. The out-of-sample analysis suggests that the four probit models with the proposed risk factors generate more accurate forecasts of the duration of recessions than the conventional static model with only the yield spread and an equity price index as predictors. Among the four probit models, the dynamic and dynamic autoregressive probit models outperform the static and autoregressive models in predicting the duration of recessions. With respect to forecasting business cycle turning points, the static probit model is as good as the dynamic probit models, being able to flag an early warning signal of a recession.
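
A minimal sketch of the static versus dynamic probit specifications, assuming a hypothetical recession indicator and yield-spread predictor; the dynamic version simply adds the lagged indicator to the probit index, which is one common way such models are written, and the paper's full set of risk factors is omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 200
spread = rng.standard_normal(T)                               # toy yield spread
y = (spread + rng.standard_normal(T) < -0.5).astype(int)      # toy recession indicator

# Static probit: P(y_t = 1) = Phi(b0 + b1 * spread_t)
static = sm.Probit(y[1:], sm.add_constant(spread[1:])).fit(disp=0)

# Dynamic probit: add the lagged recession indicator to the index
X_dyn = sm.add_constant(np.column_stack([spread[1:], y[:-1]]))
dynamic = sm.Probit(y[1:], X_dyn).fit(disp=0)

print(static.params, dynamic.params)
```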

5.
This paper examines the ability of various financial and macroeconomic variables to forecast Canadian recessions. It evaluates four model specifications, including the advanced dynamic, autoregressive and dynamic autoregressive probit models as well as the conventional static probit model. The empirical results highlight several significant recession predictors, notably the government bond yield spread, the growth rates of housing starts, the real money supply and the composite index of leading indicators. Both the in-sample and out-of-sample results suggest that the forecasting performance of the four probit models is mixed. The dynamic and dynamic autoregressive probit models are better at predicting the duration of recessions, while the static and autoregressive probit models are better at forecasting the peaks of business cycles. Hence, the advanced dynamic models and the conventional static probit model can complement one another to provide more accurate forecasts of the duration and turning points of business cycles.

6.
This paper provides a methodology for combining forecasts based on several discrete choice models. This is achieved primarily by combining one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination. Log scores, quadratic scores and Epstein scores are used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. This methodology is applied to forecast US Federal Open Market Committee (FOMC) decisions regarding changes in the federal funds target rate. Several of the economic fundamentals influencing the FOMC’s decisions are integrated, or I(1), and are modeled in a similar fashion to Hu and Phillips (J Appl Econom 19(7):851–867, 2004). The empirical results show that combining forecasted probabilities using scores generally outperforms both equal-weight combination and forecasts based on multivariate models.
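
A rough sketch of score-based combination for probability forecasts, with invented one-step-ahead probabilities from two models; the log and quadratic (Brier-type) scores follow the usual textbook definitions, while the softmax-style mapping from scores to weights is only an assumption — the paper's exact weighting scheme and the Epstein score are not reproduced.

```python
import numpy as np

def log_score(p, y):
    """Average log predictive score for binary outcomes (higher is better)."""
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def quadratic_score(p, y):
    """Negative Brier score (higher is better)."""
    return -np.mean((p - y) ** 2)

def score_weights(scores):
    """Map model scores to combination weights (softmax-type, an assumption)."""
    s = np.exp(scores - np.max(scores))
    return s / s.sum()

# Hypothetical one-step-ahead probabilities of an FOMC rate change
p_model1 = np.array([0.2, 0.6, 0.8, 0.1])
p_model2 = np.array([0.3, 0.5, 0.7, 0.2])
y        = np.array([0,   1,   1,   0])

w = score_weights(np.array([log_score(p_model1, y), log_score(p_model2, y)]))
p_combined = w[0] * p_model1 + w[1] * p_model2
print(w, quadratic_score(p_combined, y))
```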

7.
This paper assesses the R&D performance of nascent and established technology-based small firms that receive a Phase II R&D award from the U.S. Small Business Innovation Research (SBIR) program. Our empirical analysis is based on a two-stage selection probit model, which is used to estimate the probability of commercialization conditional on the Phase II project having not failed. Our model predicts, and our analysis confirms, that nascent firms are more likely to fail in their SBIR-supported R&D endeavors. Further, we find that nascent firms that do not fail have a higher probability of commercializing their developed technology.

8.
In this paper we examine which macroeconomic and financial variables have the most predictive ability for the federal funds target rate decisions made by the Federal Open Market Committee (FOMC). We conduct the analysis for the 157 FOMC decisions during the period January 1990–June 2008, using dynamic ordered probit models with a Bayesian endogenous variable selection methodology and real-time data for a set of 33 candidate predictor variables. We find that indicators of economic activity and forward-looking term structure variables, as well as survey measures, are most informative from a forecasting perspective. For the full sample period, in-sample probability forecasts achieve a hit rate of 90%. Based on out-of-sample forecasts for the period January 2001–June 2008, 82% of the FOMC decisions are predicted correctly.

9.
We study the directional predictability of monthly excess stock market returns in the U.S. and ten other markets using univariate and bivariate binary response models. We introduce a new bivariate (two-equation) probit model that allows us to examine the benefits of predicting the signs of returns jointly, focusing on the predictive power originating from the U.S. to foreign markets. Our in-sample and out-of-sample forecasting results indicate superior predictive performance of the new model over competing univariate binary response models and conventional predictive regressions, in terms of both statistical measures and market timing performance. This highlights the importance of predictive information flowing from the U.S. to the other markets, which also provides practical improvements in investors' market timing decisions.

10.
We propose to produce accurate point and interval forecasts of exchange rates by combining a number of well-known fundamentals-based panel models. The combination of models uses a set of weights computed within a linear mixture-of-experts framework, where the weights are determined by log scores assigned to each model's predictive performance. As well as model uncertainty, we take potential structural breaks in the parameters of the models into consideration. In our application to quarterly data for ten currencies (including the Euro) over the period 1990Q1–2008Q4, we show that the ensemble models produce mean and interval forecasts that outperform equal-weight and, to a lesser extent, random-walk benchmark models. The gain from combining forecasts is particularly pronounced for central forecasts at longer horizons, but much less so for interval forecasts. Calculations of the probability of the exchange rate rising or falling using the combined or ensemble model show a good correspondence with known events and potentially provide a useful measure of the uncertainty over whether the exchange rate is likely to rise or fall.

11.
This paper investigates the forecasting performance of the diffusion index approach for the Australian economy, and considers the forecasting performance of the diffusion index approach relative to composite forecasts. Weighted and unweighted factor forecasts are benchmarked against composite forecasts and forecasts derived from individual forecasting models. The results suggest that diffusion index forecasts tend to improve on the benchmark AR forecasts. We also observe that weighted factors tend to produce better forecasts than their unweighted counterparts. We find, however, that the size of the forecasting improvement is less marked than in previous research, with the diffusion index forecasts typically producing mean square errors of a similar magnitude to the VAR and BVAR approaches.

12.
This article evaluates the predictive power of various models for the U.S. inflation rate using a simulated out-of-sample forecasting framework. The starting point is the traditional unemployment Phillips curve. We show that a factor Phillips curve model is superior to the traditional Phillips curve, and its performance is comparable to other factor models. We find that a factor AR model is superior to the factor Phillips curve model, and is the best bivariate or factor model at longer horizons. Finally, we investigate a New Keynesian Phillips curve model, and find that its forecasting performance dominates all other models at the longer horizons.
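
The simulated out-of-sample framework can be illustrated with a recursive (expanding-window) forecast loop; below is a minimal sketch comparing an AR(1) benchmark with a bivariate Phillips-curve-style regression on invented inflation and unemployment series, evaluated by RMSE.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 160
unemp = rng.standard_normal(T).cumsum() * 0.1 + 5.0        # toy unemployment rate
infl = 2.0 - 0.3 * unemp + rng.standard_normal(T) * 0.5    # toy inflation rate

def ols_forecast(y, X, x_new):
    """Fit y = X b by least squares and return the forecast at x_new."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return x_new @ b

err_ar, err_pc = [], []
for t in range(100, T - 1):                  # expanding estimation window
    ones = np.ones(t)
    # AR(1) benchmark for inflation
    f_ar = ols_forecast(infl[1:t + 1], np.column_stack([ones, infl[:t]]),
                        np.array([1.0, infl[t]]))
    # Phillips-curve-style model: lagged inflation and lagged unemployment
    f_pc = ols_forecast(infl[1:t + 1],
                        np.column_stack([ones, infl[:t], unemp[:t]]),
                        np.array([1.0, infl[t], unemp[t]]))
    err_ar.append(infl[t + 1] - f_ar)
    err_pc.append(infl[t + 1] - f_pc)

print("RMSE AR(1):   ", np.sqrt(np.mean(np.square(err_ar))))
print("RMSE Phillips:", np.sqrt(np.mean(np.square(err_pc))))
```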

13.
This study evaluates the effects of the North American Free Trade Agreement (NAFTA) on bilateral trade between the United States and Canada and between the United States and Mexico. Trade flow estimates are from a vector autoregression (VAR) model. The VAR methodology allows modeling bilateral trade in a flexible manner that incorporates both the interaction between different variables and the dynamics of trade, output, prices, and the exchange rate. After testing the out-of-sample forecasting ability of the models, the study produces dynamic forecasts of bilateral trade. It then compares forecasts incorporating the effects of NAFTA with baseline forecasts. The results suggest expanded trade for all three countries and an improvement in the U.S. trade position with both Canada and Mexico.
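
A minimal sketch of fitting a VAR and producing dynamic multi-step forecasts with statsmodels; the variable names, lag order and simulated data are placeholders rather than the study's actual specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
T = 120
# Hypothetical quarterly series entering the bilateral trade VAR
data = pd.DataFrame({
    "exports":   rng.standard_normal(T).cumsum(),
    "output":    rng.standard_normal(T).cumsum(),
    "rel_price": rng.standard_normal(T).cumsum(),
    "exch_rate": rng.standard_normal(T).cumsum(),
})

model = VAR(data)
res = model.fit(2)                                         # VAR(2) for illustration
fcast = res.forecast(data.values[-res.k_ar:], steps=8)     # 8-quarter dynamic forecast
print(fcast.shape)                                          # (8, 4)
```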

14.
This study determines whether the global vector autoregressive (GVAR) approach provides better forecasts of key South African variables than a vector error correction model (VECM) and a Bayesian vector autoregressive (BVAR) model augmented with foreign variables. The article considers both a small GVAR model and a large GVAR model in determining the most appropriate model for forecasting South African variables. We compare the recursive out-of-sample forecasts for South African GDP and inflation from six types of models: a general 33-country (large) GVAR, a customized small GVAR for South Africa, a VECM for South Africa with weakly exogenous foreign variables, a BVAR model, autoregressive (AR) models and random walk models. The results show that the forecast performance of the large GVAR is generally superior to the performance of the customized small GVAR for South Africa. The forecasts of both GVAR models tend to be better than the forecasts of the augmented VECM, especially at longer forecast horizons. Importantly, however, on average, the BVAR model performs the best when it comes to forecasting output, while the AR(1) model outperforms all the other models in predicting inflation. We also conduct ex ante forecasts from the BVAR and AR(1) models over 2010:Q1–2013:Q4 to highlight their ability to track turning points in output and inflation, respectively.

15.
It has been well documented that the consensus forecast from surveys of professional forecasters shows a bias that varies over time. In this paper, we examine whether this bias may be due to forecasters having an asymmetric loss function. In contrast to previous research, we account for the time variation in the bias by making the loss function depend on the state of the economy. The asymmetry parameter in the loss function is specified to depend on a set of state variables which may cause forecasters to intentionally bias their forecasts. We consider both the Lin–Ex and asymmetric power loss functions. For the commonly used Lin–Ex and Lin–Lin loss functions, we show that the model can be easily estimated by least squares. We apply our methodology to the consensus forecast of real U.S. GDP growth from the Survey of Professional Forecasters. We find that forecast uncertainty has an asymmetric effect on the asymmetry parameter in the loss function, depending on whether the economy is in expansion or contraction. When the economy is in expansion, forecaster uncertainty is related to an overprediction in the median forecast of real GDP growth. In contrast, when the economy is in contraction, forecaster uncertainty is related to an underprediction in the median forecast of real GDP growth. Our results are robust to the particular loss function employed in the analysis.
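
To make the loss functions concrete, here is a small sketch (with invented numbers) of the Lin–Lin and Lin–Ex losses, together with the standard result that under Lin–Ex loss and normally distributed errors the optimal forecast is biased by an amount proportional to the forecast variance; the paper's state-dependent asymmetry specification is richer than this.

```python
import numpy as np

def linlin_loss(e, alpha):
    """Lin-Lin loss: weight alpha on positive errors, 1 - alpha on negative."""
    return np.where(e > 0, alpha * e, (alpha - 1) * e)

def linex_loss(e, a):
    """Lin-Ex loss: exponential on one side, roughly linear on the other."""
    return np.exp(a * e) - a * e - 1

# Under Lin-Ex loss with normal forecast errors, the optimal forecast is
# mean + (a / 2) * variance, so greater uncertainty implies a larger bias.
mu, sigma2, a = 2.5, 1.2, -0.4                  # invented values
optimal_forecast = mu + 0.5 * a * sigma2
print("optimal (biased) forecast:", optimal_forecast)   # below mu when a < 0
```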

16.
The purpose of this paper is to provide a complete evaluation of four regime-switching models by checking their performance in detecting US business cycle turning points, in replicating US business cycle features and in forecasting the US GDP growth rate. Both individual and combined forecasts are considered. The results indicate that while the Markov-switching model succeeded in replicating all the NBER peak and trough dates without detecting an extra cycle, it seems to be outperformed by the Bounce-back model in terms of the delay time to a correct alarm. Concerning the characterization of business cycle features, none of the competing models dominates over all the features. The performance of the Markov-switching and Bounce-back models in detecting turning points did not translate into an improved characterization of business cycle features, since they are outperformed by the Floor and Ceiling model. The forecast performance of the considered models varies across regimes and across forecast horizons: the model performing best in an expansion period is not necessarily the same in a recession period, and similarly for the forecast horizons. Finally, combining such individual forecasts generally leads to increased forecast accuracy, especially for h=1.
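
A minimal sketch of the Markov-switching ingredient using statsmodels' MarkovAutoregression on a toy growth series; the smoothed regime probabilities are the usual input for dating turning points. The Bounce-back and Floor and Ceiling variants are not shown, and the simulated data are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
# Toy GDP growth series: expansion regime around +0.8, short recession around -1.0
growth = pd.Series(np.concatenate([rng.normal(0.8, 0.5, 80),
                                   rng.normal(-1.0, 0.5, 10),
                                   rng.normal(0.8, 0.5, 80)]))

# Two-regime Markov-switching AR(1) with regime-specific intercepts
mod = sm.tsa.MarkovAutoregression(growth, k_regimes=2, order=1, switching_ar=False)
res = mod.fit()

# Smoothed regime probabilities; the column for the low-mean regime is the usual
# recession-dating ingredient (which column that is depends on the estimates).
print(res.smoothed_marginal_probabilities.head())
```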

17.
A Guide to U.S. Chain Aggregated NIPA Data
In 1996, the U.S. Department of Commerce began using a new method to construct all aggregate "real" series in the National Income and Product Accounts (NIPA). This method is based on the so-called "ideal chain index" pioneered by Irving Fisher. The new methodology has some extremely important implications that are unfamiliar to many practicing empirical economists; as a result, mistaken calculations with NIPA data have become very common. This paper explains the motivation for the switch to chain aggregation, and then illustrates the usage of chain-aggregated data with three topical examples, each relating to a different aspect of how information technologies are changing the U.S. economy.
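
Because the chain method trips up many calculations, here is a small numerical sketch of the Fisher ideal quantity index (the geometric mean of the Laspeyres and Paasche indexes) chained across periods, using invented prices and quantities for two goods; note that chained component indexes built this way are not additive to the chained aggregate.

```python
import numpy as np

def fisher_quantity_index(p0, q0, p1, q1):
    """Fisher ideal quantity index between adjacent periods:
    geometric mean of the Laspeyres and Paasche quantity indexes."""
    laspeyres = np.dot(p0, q1) / np.dot(p0, q0)
    paasche = np.dot(p1, q1) / np.dot(p1, q0)
    return np.sqrt(laspeyres * paasche)

# Invented prices and quantities for two goods over three periods
p = np.array([[1.0, 10.0], [0.5, 11.0], [0.25, 12.0]])       # IT price falling fast
q = np.array([[100.0, 50.0], [180.0, 52.0], [300.0, 54.0]])

# Chain the period-to-period Fisher links into a chained quantity index
index = [100.0]
for t in range(1, len(p)):
    index.append(index[-1] * fisher_quantity_index(p[t - 1], q[t - 1], p[t], q[t]))
print(index)   # chained "real" aggregate; chained components are not additive
```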

18.
In the aftermath of the recent financial crisis, a variety of structural vector autoregression (VAR) models have been proposed to identify credit supply shocks. Using a Monte Carlo experiment, we show that the performance of these models can vary substantially, with some identification schemes producing particularly misleading results. When applied to U.S. data, the estimates from the best performing VAR models indicate, on average, that credit supply shocks that raise spreads by 10 basis points reduce GDP growth and inflation by 1% after one year. These shocks were important during the Great Recession, accounting for about half the decline in GDP growth.

19.
This paper proposes a simple HAR-RV-based model to predict return jumps through a conditional density of jump size with time-varying moments. We model jump occurrences based on a version of the autoregressive conditional hazard model that relies on past continuous realized volatilities. Applying our methodology to seven equity indices on the U.S. and Chinese stock markets, we reach the following key findings: (i) jump occurrence and size are dependent on past realized volatility, (ii) the proposed model yields superior in- and out-of-sample jump size density forecasts compared to an ARMA(1,1)-GARCH(1,1) model, and (iii) the occurrence and sign of return jumps are predictable to some extent.
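
A rough sketch of the HAR-RV ingredient: realized volatility regressed on its lagged daily value and its weekly (5-day) and monthly (22-day) averages, estimated by least squares on an invented series; the jump-occurrence hazard and jump-size density layers of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
rv = np.abs(rng.standard_normal(1000)) * 0.01 + 0.0001   # toy realized volatility

def har_design(rv):
    """Build HAR-RV regressors: lagged daily, weekly and monthly average RV."""
    y, X = [], []
    for t in range(22, len(rv)):
        daily = rv[t - 1]
        weekly = rv[t - 5:t].mean()
        monthly = rv[t - 22:t].mean()
        X.append([1.0, daily, weekly, monthly])
        y.append(rv[t])
    return np.array(y), np.array(X)

y, X = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # HAR-RV coefficients
rv_forecast = X[-1] @ beta                    # one-step-ahead RV forecast
print(beta, rv_forecast)
```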

20.
Using a unique facility-level dataset from Michigan, we examine the effect of environmental auditing on manufacturing facilities’ long-term compliance with U.S. hazardous waste regulations. We also investigate the factors that affect facilities’ decisions to conduct environmental audits and whether auditing in turn affects the probability of regulatory inspections. We account for the potential endogeneity of our audit measure and the censoring of our compliance measure using a censored trivariate probit, which we estimate using simulated maximum likelihood. We find that larger facilities and those subject to more stringent regulations are more likely to audit; facilities with poor compliance records are less likely to audit. However, we find no significant long-run impact of auditing on the probability of a regulatory inspection or compliance among these Michigan manufacturing facilities.
