Similar Literature
20 similar documents retrieved.
1.
We consider univariate low-frequency filters applicable in real time as a macroeconomic forecasting method. This amounts to targeting only the low-frequency fluctuations of the time series of interest. We show through simulations that such an approach is warranted and, using US data, we confirm empirically that consistent gains in forecast accuracy can be obtained in comparison with a variety of other methods. There is an inherent arbitrariness in the choice of the cut-off defining low and high frequencies, which calls for a careful characterization of the implied optimal (for forecasting) degree of smoothing of the key macroeconomic indicators we analyse. Interesting patterns emerge: for most variables the optimal choice amounts to disregarding fluctuations well below the standard business cycle cut-off of 32 quarters, and it generally increases with the forecast horizon; for inflation and housing-related variables the cut-off lies around 32 quarters for all horizons, below the optimal level for federal government spending.
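As a rough illustration of targeting low frequencies, the sketch below applies an ideal FFT low-pass filter with a 32-quarter cut-off to a simulated quarterly series. The filter is two-sided and therefore not the real-time filter the paper studies; all data and parameter values are hypothetical.

```python
import numpy as np

def lowpass_smooth(y, cutoff_quarters=32):
    """Ideal FFT low-pass filter: keep only fluctuations with period
    >= cutoff_quarters. Two-sided, hence illustrative only; the paper's
    real-time filters are one-sided."""
    n = len(y)
    freqs = np.fft.rfftfreq(n, d=1.0)            # cycles per quarter
    coefs = np.fft.rfft(y - y.mean())
    coefs[freqs > 1.0 / cutoff_quarters] = 0.0   # zero out high frequencies
    return np.fft.irfft(coefs, n) + y.mean()

rng = np.random.default_rng(0)
t = np.arange(200)                               # hypothetical quarterly series
y = 0.02 * t + np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(200)
smooth = lowpass_smooth(y)
```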

2.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro-area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the fundamentals relevant for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt-GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts, because of their parsimonious specification.
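The mean square and mean absolute error ranking, plus equal-weight pooling, can be illustrated with a minimal sketch; the model names and numbers below are hypothetical, not the paper's results.

```python
import numpy as np

def rank_models(actual, forecasts):
    """Rank competing forecasts by mean squared error (MSE) and mean
    absolute error (MAE). Model names and numbers are hypothetical."""
    rows = []
    for name, f in forecasts.items():
        e = np.asarray(actual) - np.asarray(f)
        rows.append((name, float(np.mean(e ** 2)), float(np.mean(np.abs(e)))))
    return sorted(rows, key=lambda r: r[1])          # best MSE first

actual = np.array([1.0, 1.2, 0.8, 1.1])
forecasts = {"ARMA": np.array([0.9, 1.1, 1.0, 1.0]),
             "VAR":  np.array([1.2, 1.3, 0.7, 1.2])}
forecasts["pool"] = np.mean(list(forecasts.values()), axis=0)  # equal-weight pooling
print(rank_models(actual, forecasts))
```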

3.
Using euro-area data, we re-examine the empirical success of New-Keynesian Phillips curves (NKPCs). We re-estimate with a suitably specified optimizing supply side (which attempts to treat non-stationarity in factor income shares and mark-ups), allowing us to derive estimates of technology parameters, marginal costs and 'price gaps'. Our resulting estimates of the euro-area NKPCs are robust, provide reasonable estimates of fixed-price durations and discount rates, and embody plausible dynamic properties. Our method for identifying the underlying determinants of NKPCs is generally applicable to a wide set of countries and is also useful for sectoral studies.

4.
This paper compares alternative models of time-varying volatility on the basis of the accuracy of real-time point and density forecasts of key macroeconomic time series for the USA. We consider Bayesian autoregressive and vector autoregressive models that incorporate some form of time-varying volatility, namely random walk stochastic volatility, stochastic volatility following a stationary AR process, stochastic volatility coupled with fat tails, GARCH, and mixture-of-innovation models. The results show that the AR and VAR specifications with conventional stochastic volatility dominate other volatility specifications, in terms of point forecasting to some degree and density forecasting to a greater degree.
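A minimal sketch of the random-walk stochastic-volatility specification referred to above, assuming the common AR(1)-plus-log-volatility form; parameter values are illustrative and the Bayesian (MCMC) estimation step is not shown.

```python
import numpy as np

def simulate_ar_sv(n=400, c=0.2, b=0.6, sig_eta=0.1, seed=0):
    """Simulate an AR(1) with random-walk log-volatility:
        y_t = c + b*y_{t-1} + exp(h_t / 2) * eps_t,   h_t = h_{t-1} + sig_eta*eta_t,
    with eps_t, eta_t i.i.d. standard normal. Values are illustrative."""
    rng = np.random.default_rng(seed)
    y, h = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        h[t] = h[t - 1] + sig_eta * rng.standard_normal()
        y[t] = c + b * y[t - 1] + np.exp(h[t] / 2) * rng.standard_normal()
    return y, h

y, h = simulate_ar_sv()
```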

5.
Combined density nowcasts for quarterly Euro-area GDP growth are produced based on the real-time performance of component models. Components are distinguished by their use of 'hard' and 'soft', aggregate and disaggregate, indicators. We consider the accuracy of the density nowcasts as within-quarter indicator data accumulate. We find that the relative utility of 'soft' indicators surged during the recession. But as this instability was hard to detect in real time, it helps, when producing density nowcasts before any within-quarter 'hard' data are available, to weight the different indicators equally. Once 'hard' data for the second month of the quarter are received, better-calibrated densities are obtained by giving 'hard' indicators a higher weight in the combination.
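Density combinations of this kind are often implemented as a linear opinion pool; the sketch below evaluates an equally weighted pool of two Gaussian component densities. All inputs are hypothetical, and the paper's recursive, performance-based weights are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def opinion_pool(mus, sigmas, weights, x):
    """Linear opinion pool of Gaussian component nowcast densities,
    evaluated at x. Components and weights are hypothetical."""
    pdfs = np.array([norm.pdf(x, m, s) for m, s in zip(mus, sigmas)])
    return float(np.dot(weights, pdfs))

# two components (say, 'hard'- and 'soft'-indicator models), equal weights
print(opinion_pool([0.3, 0.5], [0.4, 0.6], [0.5, 0.5], x=0.4))
```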

6.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro-area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro-area-wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss using the factors estimated from a dynamic factor model for all the indicators as composite indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used: future values of the regressors are forecast, and the choice of indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors, and methods of pooling the ex ante single-indicator or factor-based forecasts. Some sensitivity analyses across forecast horizons and forecast weighting schemes are also undertaken to assess the robustness of the results.

7.
8.
In this article, we merge two strands of the recent econometric literature. The first is factor models based on large sets of macroeconomic variables, which have generally proven useful for forecasting, although there is some disagreement in the literature as to the appropriate estimation method. The second is forecast methods based on mixed-frequency data sampling (MIDAS). This regression technique can handle the unbalanced datasets that emerge from the publication lags of high- and low-frequency indicators, a problem practitioners have to cope with in real time. We introduce Factor MIDAS, an approach for nowcasting and forecasting low-frequency variables such as gross domestic product (GDP) that exploits the information in a large set of higher-frequency indicators. We consider three alternative MIDAS approaches (basic, smoothed and unrestricted) that provide harmonized projection methods and allow the alternative factor estimation methods to be compared with respect to nowcasting and forecasting. Common to all the factor estimation methods employed here is that they can handle unbalanced datasets, as typically faced in real-time forecast applications owing to publication lags. In particular, we focus on variants of static and dynamic principal components as well as Kalman filter estimates in state-space factor models. As an empirical illustration, we use a large monthly dataset for the German economy to nowcast and forecast quarterly GDP growth. We find that the factor estimation methods do not differ substantially, whereas the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the Factor MIDAS models, which confirms the usefulness of mixed-frequency techniques that can exploit timely information from business cycle indicators.
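Of the three MIDAS variants, the unrestricted one (U-MIDAS) is the simplest to sketch: quarterly GDP growth is regressed by OLS on the most recent monthly observations of an indicator. The toy below assumes a balanced sample with three months per quarter and no ragged edge, unlike real-time data, and all numbers are hypothetical.

```python
import numpy as np

def umidas_fit(y_q, x_m, n_lags=3):
    """Unrestricted MIDAS (U-MIDAS): OLS of quarterly y on the n_lags most
    recent monthly observations of x in each quarter. Assumes a balanced
    sample (3 months per quarter, no ragged edge)."""
    n_q = len(y_q)
    last_month = 2 + 3 * np.arange(n_q)            # index of each quarter's 3rd month
    X = np.column_stack([np.ones(n_q)] +
                        [x_m[last_month - lag] for lag in range(n_lags)])
    beta, *_ = np.linalg.lstsq(X, y_q, rcond=None)
    return beta

rng = np.random.default_rng(1)
x_m = rng.standard_normal(120)                     # 120 months = 40 quarters (toy data)
y_q = 0.5 * x_m[2::3] + 0.1 * rng.standard_normal(40)
print(umidas_fit(y_q, x_m))
```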

9.
Using a large panel of mainly unquoted euro-area firms over the period 2003-2011, this paper examines the impact of financial pressure on firms' employment. The analysis finds evidence that financial pressure negatively affects firms' employment decisions. This effect is stronger during the euro-area crisis (2010-2011), especially for firms in the periphery compared with their counterparts in non-periphery European economies. When we introduce firm-level heterogeneity, we show that financial pressure appears to be both statistically and quantitatively more important for bank-dependent, small and privately held firms operating in periphery economies during the crisis.

10.
The performance of six classes of models in forecasting different types of economic series is evaluated in an extensive pseudo out-of-sample exercise. One of these forecasting models, regularized data-rich model averaging (RDRMA), is new in the literature. The findings can be summarized in four points. First, RDRMA is difficult to beat in general and generates the best forecasts for real variables. This performance is attributed to the combination of regularization and model averaging, and it confirms that a smart handling of large data sets can lead to substantial improvements over univariate approaches. Second, the ARMA(1,1) model emerges as the best to forecast inflation changes in the short run, while RDRMA dominates at longer horizons. Third, the returns on the S&P 500 index are predictable by RDRMA at short horizons. Finally, the forecast accuracy and the optimal structure of the forecasting equations are quite unstable over time.

11.
Small area estimation typically requires model-based methods that depend on isolating the contribution to overall population heterogeneity associated with group (i.e. small area) membership. One way of doing this is via random effects models with latent group effects. Alternatively, one can use an M-quantile ensemble model that assigns indices to sampled individuals characterising their contribution to overall sample heterogeneity. These indices are then aggregated to form group effects. The aim of this article is to contrast these two approaches to characterising group effects and to illustrate them in the context of small area estimation. In doing so, we consider a range of different data types, including continuous data, count data and binary response data.

12.
Logistics Volume Forecasting for Logistics Parks Based on a Grey-Markov Model
Logistics volume forecasting is an important part of logistics park planning and management. This paper combines grey system theory with Markov chains, presenting a grey-model/Markov-chain method for forecasting the logistics volume of a logistics park, and applies it to forecast one park's logistics volume over the next three years. The results show the method to be more accurate and reliable than forecasting with the grey model alone.
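A minimal sketch of the GM(1,1) grey model that underlies the grey-Markov method; the Markov-chain correction of the residual states described in the paper is not shown, and the input series is hypothetical.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """GM(1,1) grey forecast: fit the whitened equation dx1/dt + a*x1 = b
    to the accumulated series x1, then difference the fitted cumulative
    path. x0: positive observed series (hypothetical in the example)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least squares for (a, b)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted cumulative path
    return np.diff(x1_hat)[-steps:]                    # out-of-sample forecasts

print(gm11_forecast([820.0, 860.0, 900.0, 955.0], steps=3))
```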

13.
Two measures of an error-ridden variable make it possible to solve the classical errors-in-variables problem by using one measure as an instrument for the other. It is well known that a second IV estimate can be obtained by reversing the roles of the two measures. We explore the optimal linear combination of these two estimates. In a Monte Carlo study, we show that the gain in precision is significant. The proposed estimator also compares well with full-information maximum likelihood under normality. We illustrate the method by estimating the capital elasticity in the Norwegian ICT industry.
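A simulation sketch of the two reverse IV estimates is below; the equal combination weight is a placeholder, whereas the paper derives the variance-minimizing weight. All parameter values are hypothetical.

```python
import numpy as np

# Simulated errors-in-variables setup: two noisy measures x1, x2 of the
# unobserved regressor x_star.
rng = np.random.default_rng(2)
n, beta = 5000, 1.5
x_star = rng.standard_normal(n)
x1 = x_star + 0.5 * rng.standard_normal(n)   # first measure
x2 = x_star + 0.5 * rng.standard_normal(n)   # second measure
y = beta * x_star + rng.standard_normal(n)

b1 = (x2 @ y) / (x2 @ x1)   # IV estimate: x1 as regressor, x2 as instrument
b2 = (x1 @ y) / (x1 @ x2)   # roles of the two measures reversed
lam = 0.5                   # placeholder weight; the paper derives the optimal one
print(b1, b2, lam * b1 + (1 - lam) * b2)
```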

14.
In many surveys, imputation procedures are used to account for non-response bias induced by either unit non-response or item non-response. Such procedures are optimised (in terms of reducing non-response bias) when the models include covariates that are highly predictive of both response and outcome variables. To achieve this, we propose a method for selecting the sets of covariates used in regression imputation models, or used to determine imputation cells, for one or more outcome variables, with the fraction of missing information (FMI) obtained via a proxy pattern-mixture (PPM) model as the key metric. In our variable selection approach, we use the PPM model to obtain a maximum likelihood estimate of the FMI for separate sets of candidate imputation models and look for the point at which changes in the FMI level off and further auxiliary variables no longer improve the imputation model. We illustrate the proposed approach using empirical data from the Ohio Medicaid Assessment Survey and from the Service Annual Survey.

15.
《价值工程》2017,(5):35-37
To forecast the construction duration of air-membrane thin-shell reinforced-concrete dome silos objectively and reasonably, a forecasting method based on PSO-SVR is proposed. Particle swarm optimization (PSO) is used to tune the parameters of a support vector regression (SVR) model, and the optimized SVR is then used to forecast the construction duration of such silos. A case study shows that the PSO-SVR model outperforms both a genetic-algorithm-tuned SVR (GA-SVR) and a serial grey neural network (SGNN).
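A toy version of PSO-tuned SVR, searching over (log10 C, log10 gamma) with cross-validated MSE as the fitness; the search bounds, swarm constants and data are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def pso_svr(X, y, n_particles=10, n_iter=20, seed=0):
    """Toy PSO search over (log10 C, log10 gamma) of an RBF-kernel SVR,
    scored by 3-fold cross-validated MSE. Constants are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # search box
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        m = SVR(kernel="rbf", C=10 ** p[0], gamma=10 ** p[1])
        return -cross_val_score(m, X, y, cv=3,
                                scoring="neg_mean_squared_error").mean()

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return SVR(kernel="rbf", C=10 ** gbest[0], gamma=10 ** gbest[1]).fit(X, y)

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 4))                            # hypothetical features
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(60)
model = pso_svr(X, y)
```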

16.
This study uses factor models to explain stock market returns in the Eastern European (EE) countries that joined the European Union (EU) in 2004. In line with other studies, we find that the market value of equity component in the Fama French (1993) three-factor model performs poorly when applied to our emerging-markets dataset. We propose a significant amendment to the standard three-factor model by replacing the market value of equity factor with a term that proxies for accounting manipulation. We show that our three-factor model explains returns in the EE EU nations significantly better than the Fama French (1993) three-factor model, thereby offering an alternative model for the numerous markets in which previous studies have found little correlation between market value of equity and equity returns.
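Models of this type are estimated by a time-series regression of excess returns on the factors; a minimal OLS sketch is below. The factor construction (including the accounting-manipulation proxy) is not reproduced, and the data are hypothetical.

```python
import numpy as np

def factor_regression(excess_ret, factors):
    """Time-series OLS of a portfolio's excess returns on three factors,
    returning [alpha, beta_1, beta_2, beta_3]. Data are hypothetical."""
    X = np.column_stack([np.ones(len(excess_ret)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef

rng = np.random.default_rng(4)
F = rng.standard_normal((120, 3))                  # three monthly factor series
r = 0.001 + F @ np.array([1.0, 0.4, -0.2]) + 0.01 * rng.standard_normal(120)
print(factor_regression(r, F))
```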

17.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
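For reference, O'Hagan's (1995) fractional Bayes factor for comparing models $M_0$ and $M_1$ with priors $\pi_i$, likelihoods $L_i(\theta_i)$, and a fraction $b$ of the likelihood used to "train" the prior is

```latex
B^{F}_{01}(b) = \frac{q_0(b)}{q_1(b)},
\qquad
q_i(b) = \frac{\int \pi_i(\theta_i)\, L_i(\theta_i)\, d\theta_i}
              {\int \pi_i(\theta_i)\, L_i(\theta_i)^{\,b}\, d\theta_i},
\qquad 0 < b < 1 .
```

A common choice is $b = m/n$, with $m$ the size of a minimal training sample and $n$ the sample size: raising the likelihood to the power $b$ plays the role of a training sample, so the arbitrary constants of improper priors cancel in the ratio.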

18.
How to measure and model volatility is an important issue in finance. Recent research uses high-frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model-averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log-volatility.
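The heterogeneous autoregressive (HAR) specification mentioned above is an OLS regression of log realized volatility on daily, weekly and monthly averages of its own past; a minimal sketch, omitting the jump, bipower-variation and power-variation terms the paper averages over, with hypothetical data:

```python
import numpy as np

def har_fit(log_rv):
    """OLS fit of the HAR model of log realized volatility:
        log RV_t = b0 + bd*log RV_{t-1} + bw*mean(log RV_{t-5..t-1})
                      + bm*mean(log RV_{t-22..t-1}) + e_t."""
    T = len(log_rv)
    y = log_rv[22:]
    daily = log_rv[21:-1]
    weekly = np.array([log_rv[t - 5:t].mean() for t in range(22, T)])
    monthly = np.array([log_rv[t - 22:t].mean() for t in range(22, T)])
    X = np.column_stack([np.ones_like(y), daily, weekly, monthly])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(3)
log_rv = -1.0 + 0.2 * rng.standard_normal(500)     # hypothetical daily series
print(har_fit(log_rv))
```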

19.
This paper extends the knowledge-based view of the firm by using relative measures of two fundamental classifications of knowledge as factors of production. It relates differences in the relative quantities of these classifications of knowledge to the probability that a given stage of production is outsourced or de-integrated. The probability of de-integration of adjacent stages of production is found to increase with greater reliance on tacit knowledge and decreasing reliance on encapsulated knowledge. The research was motivated by the belief that the cost and value of knowledge, as a factor of production, influence economic efficiency.

20.
The emphasis on renewable energy and concerns about the environment have led to large-scale wind energy penetration worldwide. However, there are also significant challenges associated with the use of wind energy due to the intermittent and unstable nature of wind. High-quality short-term wind speed forecasting is critical to reliable and secure power system operations. This article begins with an overview of the current status of worldwide wind power developments and future trends. It then reviews some statistical short-term wind speed forecasting models, including traditional time series approaches and more advanced space-time statistical models. It also discusses the evaluation of forecast accuracy, in particular, the need for realistic loss functions. New challenges in wind speed forecasting regarding ramp events and offshore wind farms are also presented.

