Similar literature (20 records)
1.
In this study, we consider residual‐based bootstrap methods to construct the confidence interval for structural impulse response functions in factor‐augmented vector autoregressions. In particular, we compare the bootstrap with factor estimation (Procedure A) with the bootstrap without factor estimation (Procedure B). Both procedures are asymptotically valid under the condition √T/N → 0, where N and T are the cross‐sectional dimension and the time dimension, respectively. However, Procedure A is also valid even when √T/N → c with 0 ≤ c < ∞, because it accounts for the effect of the factor estimation errors on the impulse response function estimator. Our simulation results suggest that Procedure A achieves more accurate coverage rates than Procedure B, especially when N is much smaller than T. In the monetary policy analysis of Bernanke et al. (Quarterly Journal of Economics, 2005, 120(1), 387–422), the proposed methods can produce statistically different results.
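For intuition, here is a heavily simplified sketch of a Procedure B-style residual bootstrap (no factor re-estimation inside the bootstrap loop) for an impulse response in a two-variable VAR built on a PCA-estimated factor; Procedure A would additionally regenerate the N-dimensional panel and re-estimate the factor in each replication. All data, dimensions, and parameter values are illustrative, and the response shown is the reduced-form one, not the identified structural response studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
T, N, h = 200, 50, 4

# simulate a factor and an observed variable, plus a panel loading on the factor
A = np.array([[0.5, 0.0], [0.3, 0.4]])
Z = np.zeros((T, 2))
for t in range(1, T):
    Z[t] = A @ Z[t - 1] + rng.standard_normal(2)
X = np.outer(Z[:, 0], rng.standard_normal(N)) + rng.standard_normal((T, N))

# PCA factor estimate, then a VAR(1) in (estimated factor, observed variable)
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
F_hat = U[:, 0] * np.sqrt(T)
W = np.column_stack([F_hat, Z[:, 1]])

def var1_irf(W, h):
    Y, Xl = W[1:], W[:-1]
    C = np.linalg.lstsq(Xl, Y, rcond=None)[0]            # Y ≈ Xl @ C
    B = C.T                                              # VAR(1) coefficient matrix
    resid = Y - Xl @ C
    return np.linalg.matrix_power(B, h)[1, 0], B, resid  # response of var 2 to a var-1 shock

irf_hat, B, resid = var1_irf(W, h)

boot = []
for _ in range(499):
    e = resid[rng.integers(0, len(resid), len(resid))]   # resample VAR residuals
    Wb = np.zeros_like(W)
    Wb[0] = W[0]
    for t in range(1, T):
        Wb[t] = B @ Wb[t - 1] + e[t - 1]                 # rebuild the bivariate series
    boot.append(var1_irf(Wb, h)[0])                      # no factor re-estimation (Procedure B style)

ci = np.percentile(boot, [5, 95])
print("IRF estimate:", round(irf_hat, 3), " 90% bootstrap CI:", np.round(ci, 3))
```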

2.
This paper provides a characterisation of the degree of cross‐sectional dependence in a two‐dimensional array, {xit, i = 1,2,...,N; t = 1,2,...,T}, in terms of the rate at which the variance of the cross‐sectional average of the observed data varies with N. Under certain conditions this is equivalent to the rate at which the largest eigenvalue of the covariance matrix of xt = (x1t, x2t,...,xNt)′ rises with N. We represent the degree of cross‐sectional dependence by α, which we refer to as the ‘exponent of cross‐sectional dependence’, and define it through the standard deviation σ(x̄t) = O(N^(α−1)), where x̄t is a simple cross‐sectional average of xit. We propose bias‐corrected estimators, derive their asymptotic properties for α > 1/2 and consider a number of extensions. We include a detailed Monte Carlo simulation study supporting the theoretical results. We also provide a number of empirical applications investigating the degree of inter‐linkages of real and financial variables in the global economy. Copyright © 2015 John Wiley & Sons, Ltd.
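As a rough illustration of the exponent α (not the authors' bias-corrected estimator), the sketch below simulates a panel in which only the first N^α units load on a common factor and reads off α from how the standard deviation of the cross-sectional average scales with N; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha_true = 500, 0.8
Ns, log_sd = [50, 100, 200, 400, 800], []

for N in Ns:
    m = int(N ** alpha_true)                    # only the first N^alpha units load on the factor
    lam = np.zeros(N)
    lam[:m] = 1.0
    f = rng.standard_normal(T)
    x = np.outer(f, lam) + rng.standard_normal((T, N))   # T x N panel
    xbar = x.mean(axis=1)                                 # simple cross-sectional average
    log_sd.append(np.log(xbar.std()))

# under sd(xbar) = O(N^(alpha-1)), the log-log slope is roughly alpha - 1
slope = np.polyfit(np.log(Ns), log_sd, 1)[0]
print("implied alpha:", round(1 + slope, 2), " true alpha:", alpha_true)
```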

3.
Mixed causal–noncausal autoregressive (MAR) models have been proposed to model time series exhibiting nonlinear dynamics. Possible exogenous regressors are typically substituted into the error term to maintain the MAR structure of the dependent variable. We introduce a representation including these covariates called MARX to study their direct impact. The asymptotic distribution of the MARX parameters is derived for a class of non-Gaussian densities. For a Student likelihood, closed-form standard errors are provided. By simulations, we evaluate the MARX model selection procedure using information criteria. We examine the influence of the exchange rate and industrial production index on commodity prices.

4.
We compare several representative sophisticated model averaging and variable selection techniques for forecasting stock returns. When the models are estimated traditionally, our results confirm that the simple combination of individual predictors is superior. However, sophisticated models improve dramatically once we combine them with the historical average and take parameter instability into account. An equal‐weighted combination of the historical average with the standard multivariate predictive regression estimated using the average windows method, for example, achieves a statistically significant monthly out‐of‐sample R2 of 1.10% and annual utility gains of 2.34%. We obtain similar gains for predicting future macroeconomic conditions.
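A minimal sketch of the equal-weighted combination idea, assuming simulated data and an ordinary expanding-window predictive regression rather than the paper's average-windows estimator; the out-of-sample R2 is computed against the historical-average benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 360
x = rng.standard_normal(T)                       # stand-in predictor
r = np.empty(T)
r[0] = rng.standard_normal()
r[1:] = 0.1 * x[:-1] + rng.standard_normal(T - 1)   # r_t depends on x_{t-1}

split = 180
f_reg, f_hist, f_comb, actual = [], [], [], []
for t in range(split, T):
    y, X = r[1:t], x[:t - 1]                     # expanding estimation sample
    b1, b0 = np.polyfit(X, y, 1)                 # slope, intercept
    pred_reg = b0 + b1 * x[t - 1]                # predictive-regression forecast
    pred_hist = r[:t].mean()                     # historical-average forecast
    f_reg.append(pred_reg)
    f_hist.append(pred_hist)
    f_comb.append(0.5 * (pred_reg + pred_hist))  # equal-weighted combination
    actual.append(r[t])

def oos_r2(f):
    f, a, bench = np.array(f), np.array(actual), np.array(f_hist)
    return 1 - np.sum((a - f) ** 2) / np.sum((a - bench) ** 2)

print("OOS R2, regression only:", round(oos_r2(f_reg), 3))
print("OOS R2, combination   :", round(oos_r2(f_comb), 3))
```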

5.
EuroMInd‐D is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom‐up approach, pooling the density estimates of 11 GDP components by output and expenditure type. The components' density estimates are obtained from a medium‐size dynamic factor model that handles mixed frequencies of observation and ragged‐edge data structures. They reflect both parameter and filtering uncertainty and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model parameters, and conditional simulation filters for simulating from the predictive distribution of GDP. Both algorithms process the data sequentially as they become available in real time. The GDP density estimates for the output and expenditure approaches are combined using alternative weighting schemes and evaluated with different tests based on the probability integral transform and by applying scoring rules. Copyright © 2016 John Wiley & Sons, Ltd.

6.
The focus of this article is modeling the magnitude and duration of monotone periods of log‐returns. For this, we propose a new bivariate law assuming that the probabilistic framework over the magnitude and duration is based on the joint distribution of (X,N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse‐Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log‐returns, respectively, and the magnitude comes from an infinite mixture of inverse‐Gaussian distributions. This new model is named the bivariate inverse‐Gaussian geometric law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that the proposed law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration with the log‐returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can be considered a powerful tool in the modeling of log‐returns and other episode analyses such as water resources management, risk assessment, and civil engineering projects.
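The (X, N) construction is straightforward to simulate. The sketch below draws N from a geometric law and X as the sum of N inverse-Gaussian (Wald) variables, with purely illustrative parameter values, and checks the implied mean magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)
p, mu, lam = 0.3, 1.0, 2.0        # illustrative geometric and inverse-Gaussian parameters
size = 10_000

N = rng.geometric(p, size=size)                                # durations (N >= 1)
X = np.array([rng.wald(mu, lam, size=n).sum() for n in N])     # magnitudes: sum of N IG draws

print("mean duration :", round(N.mean(), 3))
print("mean magnitude:", round(X.mean(), 3))
print("theory E[X] = mu/p =", mu / p)                          # sanity check: E[X] = E[N] * mu
```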

7.
This paper deals with the finite‐sample performance of a set of unit‐root tests for cross‐correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross‐sectional units, have been shown to be a promising way of increasing the power of unit‐root tests. We investigate the finite‐sample properties of recently proposed panel unit‐root tests for cross‐sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed in a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N, and model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non‐stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross‐sectional units heterogeneously.

8.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro‐area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro‐area‐wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss using as indicators the estimated factors from a dynamic factor model fitted to all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single‐indicator or factor‐based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.

9.
The presence of unobserved heterogeneity and its likely detrimental effect on inference have recently motivated the use of factor‐augmented panel regression models. The workhorse of this literature is based on first estimating the unknown factors using the cross‐section averages of the observables, and then applying ordinary least squares conditional on the first‐step factor estimates. This is the common correlated effects (CCE) approach, the existing asymptotic theory for which is based on the requirement that both the number of time series observations, T, and the number of cross‐section units, N, tend to infinity. The obvious implication of this theory for empirical work is that both N and T should be large, which means that CCE is impossible for the typical micro panel where only N is large. In the current paper, we put the existing CCE theory and its implications to a test. This is done by developing a new theory that enables T to be fixed. The results show that many of the previously derived large‐T results hold even if T is fixed. In particular, the pooled CCE estimator is still consistent and asymptotically normal, which means that CCE is more applicable than previously thought. In fact, not only do we allow T to be fixed, but the conditions placed on the time series properties of the factors and idiosyncratic errors are also much more general than those considered previously.
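A minimal sketch of the pooled CCE idea under an assumed one-factor data-generating process: the cross-sectional averages of y and x proxy the unobserved factor and are partialled out unit by unit before pooled estimation, and the result is compared with naive pooled OLS. All dimensions and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, beta = 200, 8, 1.0                        # large N, small (fixed) T
f = rng.standard_normal(T)                      # unobserved common factor
gy = 1.0 + 0.5 * rng.standard_normal(N)         # loadings with non-zero means
gx = 1.0 + 0.5 * rng.standard_normal(N)
x = gx[:, None] * f + rng.standard_normal((N, T))
y = beta * x + gy[:, None] * f + rng.standard_normal((N, T))

# cross-sectional averages proxy the factor; partial them out unit by unit
M = np.column_stack([np.ones(T), y.mean(0), x.mean(0)])
A = np.eye(T) - M @ np.linalg.pinv(M)           # annihilator of the averages
num = den = 0.0
for i in range(N):
    xi, yi = A @ x[i], A @ y[i]
    num += xi @ yi
    den += xi @ xi
beta_ccep = num / den

# naive pooled OLS (with intercept) ignores the factor structure
X1 = np.column_stack([x.reshape(-1), np.ones(N * T)])
beta_ols = np.linalg.lstsq(X1, y.reshape(-1), rcond=None)[0][0]
print("pooled OLS:", round(beta_ols, 3), " pooled CCE:", round(beta_ccep, 3), " true:", beta)
```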

10.
Statistica Neerlandica, 2018, 72(2), 126–156
In this paper, we study the application of Le Cam's one‐step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non‐linear least squares estimator, which typically requires the use of a multistep iterative algorithm and repetitive numerical integration of the ordinary differential equation system. The one‐step method starts from a preliminary √n‐consistent estimator of the parameter of interest and next turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one‐step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary √n‐consistent estimator that we use depends on non‐parametric smoothing, and we provide a data‐driven methodology for choosing its tuning parameter and support it by theory. An easy implementation scheme of the one‐step method for practical use is pointed out.
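To illustrate the one-step principle in a simpler setting than ODE models, the sketch below applies it to the Cauchy location model: start from a √n-consistent preliminary estimator (the sample median) and take a single Newton step on the log-likelihood. The model and sample size are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(4)
theta0 = 2.0
x = theta0 + rng.standard_cauchy(5000)

def score(th):            # d/dtheta of the Cauchy location log-likelihood
    u = x - th
    return np.sum(2 * u / (1 + u ** 2))

def info(th):             # observed information (negative second derivative)
    u = x - th
    return np.sum(2 * (1 - u ** 2) / (1 + u ** 2) ** 2)

prelim = np.median(x)                                  # sqrt(n)-consistent starting point
one_step = prelim + score(prelim) / info(prelim)       # single Newton step
print("median:", round(prelim, 4), " one-step estimate:", round(one_step, 4))
```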

11.
The presence of weak instruments translates into a nearly singular problem in a control function representation. A norm‐type regularization is therefore proposed to implement the 2SLS estimation and address the weak instrument problem. The regularization, with a regularized parameter of order O(n), allows us to obtain the Rothenberg (1984) type of higher‐order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularized parameter yields a regularized concentration parameter of order O(n), which is used as a standardizing factor in the higher‐order approximation. We also show that the proposed regularization consequently reduces the finite‐sample bias. A number of existing estimators that address finite‐sample bias in the presence of weak instruments, especially Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.

12.
The well-developed ETS (ExponenTial Smoothing, or Error, Trend, Seasonality) method incorporates a family of exponential smoothing models in state space representation and is widely used for automatic forecasting. The existing ETS method uses information criteria for model selection by choosing the model with the smallest information criterion among all models fitted to a given time series. The ETS method under such a model selection scheme suffers from computational complexity when applied to large-scale time series data. To tackle this issue, we propose an efficient approach to ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series. A simulation study demonstrates the model selection ability of the proposed approach. We evaluate our approach on the widely used M4 forecasting competition dataset in terms of both point forecasts and prediction intervals. To demonstrate the practical value of our method, we showcase the performance improvements from our approach on a monthly hospital dataset.
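A toy sketch of the classifier-based selection idea: simulate series from two ETS component forms (additive-error level only versus level plus trend), compute a few summary features, and train a classifier to predict the appropriate form for a new series. The simulator, feature set, and classifier choice are illustrative, not the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

def simulate(trend, n=60, alpha=0.3, beta=0.1, sigma=1.0):
    """Simulate from ETS(A,N,N) or ETS(A,A,N) depending on `trend`."""
    level, slope, y = 10.0, 0.2 if trend else 0.0, []
    for _ in range(n):
        e = sigma * rng.standard_normal()
        y.append(level + slope + e)
        level = level + slope + alpha * e
        if trend:
            slope = slope + beta * e
    return np.array(y)

def features(y):
    t = np.arange(len(y))
    slope = np.polyfit(t, y, 1)[0]                  # fitted linear-trend slope
    d = np.diff(y)
    return [slope, d.mean(), d.std(), np.corrcoef(d[:-1], d[1:])[0, 1]]

X, lab = [], []
for _ in range(500):                                # training set of simulated series
    trend = rng.random() < 0.5
    X.append(features(simulate(trend)))
    lab.append(int(trend))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, lab)
new_series = simulate(trend=True)
print("predicted component form (1 = with trend):", clf.predict([features(new_series)])[0])
```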

13.
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and the composition of the data affect the factor estimates. In this paper, we ask whether it is possible to use more series to extract the factors and yet obtain factors that are less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Weighting the data by their properties when constructing the factors also leads to improved forecasts. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have stronger loadings on some groups of series than others. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
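A minimal diffusion-index sketch of the factor-forecasting setup discussed above: extract principal-component factors from a standardized simulated panel and use them in an h-step-ahead forecasting regression. The pre-screening and weighting steps discussed in the abstract are not implemented here; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N, h = 200, 100, 1
f = np.cumsum(rng.standard_normal((T, 2)) * 0.1, axis=0)   # two slow-moving factors
lam = rng.standard_normal((N, 2))
X = f @ lam.T + rng.standard_normal((T, N))                 # T x N panel
y = f[:, 0] + 0.5 * rng.standard_normal(T)                  # target driven by factor 1

Z = (X - X.mean(0)) / X.std(0)                              # standardize the panel
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = U[:, :2] * np.sqrt(T)                               # principal-component factors

reg = np.column_stack([np.ones(T - h), F_hat[:-h]])         # regress y_{t+h} on F_t
b = np.linalg.lstsq(reg, y[h:], rcond=None)[0]
forecast = np.array([1.0, *F_hat[-1]]) @ b                  # h-step-ahead forecast
print("h-step forecast:", round(forecast, 3), " latest y:", round(y[-1], 3))
```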

14.
This article shows that spurious regression results can occur for a fixed effects model with weak time series variation in the regressor and/or strong time series variation in the regression errors when the first‐differenced and Within‐OLS estimators are used. Asymptotic properties of these estimators and the related t‐tests and model selection criteria are studied by sending the number of cross‐sectional observations to infinity. This article shows that the first‐differenced and Within‐OLS estimators diverge in probability, that the related t‐tests are inconsistent, that R2s converge to zero in probability and that AIC and BIC diverge to −∞ in probability. The results of the article warn that one should not jump to the use of fixed effects regressions without considering the degree of time series variations in the data.

15.
We establish the consistency of the selection procedures embodied in PcGets, and compare their performance with other model selection criteria in linear regressions. The significance levels embedded in the PcGets Liberal and Conservative algorithms coincide in very large samples with those implicit in the Hannan–Quinn (HQ) and Schwarz information criteria (SIC), respectively. Thus, both PcGets rules are consistent under the same conditions as HQ and SIC. However, PcGets has a rather different finite‐sample behaviour. Pre‐selecting to remove many of the candidate variables is confirmed as enhancing the performance of SIC.

16.
This article examines volatility models for modeling and forecasting the Standard & Poor's 500 (S&P 500) daily stock index returns, including the autoregressive moving average, the Taylor and Schwert generalized autoregressive conditional heteroscedasticity (GARCH), the Glosten, Jagannathan and Runkle GARCH and the asymmetric power ARCH (APARCH), with the following conditional distributions: normal, Student's t and skewed Student's t‐distributions. In addition, we undertake unit root (augmented Dickey–Fuller and Phillips–Perron) tests, a co‐integration test and an error correction model. We study the stationary APARCH(p) model and prove uniform convergence, strong consistency and asymptotic normality under a simple ordered restriction. In fitting these models to S&P 500 daily stock index return data over the period 1 January 2002 to 31 December 2012, we found that the APARCH model using a skewed Student's t‐distribution is the most effective and successful for modeling and forecasting the daily stock index returns series. The results of this study would be of great value to policy makers and investors in managing risk in stock markets trading.
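A hedged sketch of fitting an APARCH-style model with skewed Student's t innovations using the `arch` Python package: setting power=1.0 with an asymmetry term (o=1) gives a TARCH/APARCH-type volatility equation. The file name and column are placeholders for the user's S&P 500 price data, and this is not the exact specification search performed in the article.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# placeholder input: a CSV of daily prices with a "close" column
prices = pd.read_csv("sp500.csv", index_col=0, parse_dates=True)["close"]
rets = 100 * np.log(prices).diff().dropna()          # daily log-returns in percent

# asymmetric power-type volatility with skewed Student's t innovations
am = arch_model(rets, mean="Constant", vol="GARCH",
                p=1, o=1, q=1, power=1.0, dist="skewt")
res = am.fit(disp="off")
print(res.summary())

forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])                    # 1- to 5-step-ahead variance forecasts
```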

17.
Ordinary least squares estimation of an impulse‐indicator coefficient is inconsistent, but its variance can be consistently estimated. Although the ratio of the inconsistent estimator to its standard error has a t‐distribution, that test is inconsistent: one solution is to form an index of indicators. We provide Monte Carlo evidence that including a plethora of indicators need not distort model selection, permitting the use of many dummies in a general‐to‐specific framework. Although White's (1980) heteroskedasticity test is incorrectly sized in that context, we suggest an easy alteration. Finally, a possible modification to impulse ‘intercept corrections’ is considered.

18.
This paper discusses pooling versus model selection for nowcasting with large datasets in the presence of model uncertainty. In practice, nowcasting a low‐frequency variable with a large number of high‐frequency indicators should account for at least two data irregularities: (i) unbalanced data with missing observations at the end of the sample due to publication delays; and (ii) different sampling frequencies of the data. Two model classes suited in this context are factor models based on large datasets and mixed‐data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst other things, the factor estimation method and the number of factors, lag length and indicator selection. Thus there are many sources of misspecification when selecting a particular model, and an alternative would be pooling over a large set of different model specifications. We evaluate the relative performance of pooling and model selection for nowcasting quarterly GDP for six large industrialized countries. We find that the nowcast performance of single models varies considerably over time, in line with the forecasting literature. Model selection based on sequential application of information criteria can outperform benchmarks. However, the results highly depend on the selection method chosen. In contrast, pooling of nowcast models provides an overall very stable nowcast performance over time. Copyright © 2012 John Wiley & Sons, Ltd.

19.
We perform a fully real‐time nowcasting (forecasting) exercise of US GDP growth using Giannone et al.'s (2008) factor model framework. To this end, we have constructed a real‐time database of vintages from 1997 to 2010 for a panel of variables, enabling us to reproduce, for any given day in that range, the exact information that was available to a real‐time forecaster. We track the daily evolution of the model performance along the real‐time data flow and find that the precision of the nowcasts increases with information releases and the model fares well relative to the Survey of Professional Forecasters (SPF).

20.
We review some first‐order and higher‐order asymptotic techniques for M‐estimators, and we study their stability in the presence of data contaminations. We show that the estimating function (ψ) and its derivative with respect to the parameter play a central role. We discuss in detail the first‐order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail area approximation via the Lugannani–Rice formula and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a bounded ψ (in the Euclidean norm) and a bounded derivative of ψ with respect to the parameter (e.g. in the Frobenius norm) yield stable inference in the presence of data contamination. We motivate and illustrate our findings by theoretical and numerical examples based on the benchmark case of the one‐dimensional location model.
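The boundedness point can be seen concretely in the one-dimensional location model: the Huber estimating function below is bounded, and so is its derivative, unlike the least-squares choice ψ(u) = u. The contamination pattern and tuning constant are illustrative.

```python
import numpy as np

def psi_huber(u, c=1.345):
    """Bounded Huber estimating function: |psi(u)| <= c."""
    return np.clip(u, -c, c)

def dpsi_huber(u, c=1.345):
    """Derivative of psi, bounded by 1."""
    return (np.abs(u) <= c).astype(float)

rng = np.random.default_rng(8)
x = rng.standard_normal(200)
x[:5] += 50.0                                   # gross outliers (data contamination)

# M-estimate of location: solve sum psi(x - m) = 0 by Newton iterations
m = np.median(x)
for _ in range(50):
    step = psi_huber(x - m).sum() / max(dpsi_huber(x - m).sum(), 1.0)
    m += step

print("sample mean      :", round(x.mean(), 3))   # badly distorted by the outliers
print("Huber M-estimate :", round(m, 3))          # stable under contamination
```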

