Similar Documents
20 similar documents found (search time: 572 ms)
1.
In this study, we consider residual‐based bootstrap methods to construct confidence intervals for structural impulse response functions in factor‐augmented vector autoregressions. In particular, we compare the bootstrap with factor estimation (Procedure A) with the bootstrap without factor estimation (Procedure B). Both procedures are asymptotically valid under the condition √T/N → 0, where N and T are the cross‐sectional dimension and the time dimension, respectively. However, Procedure A remains valid even when √T/N → c with 0 ≤ c < ∞, because it accounts for the effect of the factor estimation errors on the impulse response function estimator. Our simulation results suggest that Procedure A achieves more accurate coverage rates than Procedure B, especially when N is much smaller than T. In the monetary policy analysis of Bernanke et al. (Quarterly Journal of Economics, 2005, 120(1), 387–422), the proposed methods can produce statistically different results.

2.
This paper provides a characterisation of the degree of cross‐sectional dependence in a two‐dimensional array, {xit, i = 1,2,...,N; t = 1,2,...,T}, in terms of the rate at which the variance of the cross‐sectional average of the observed data varies with N. Under certain conditions this is equivalent to the rate at which the largest eigenvalue of the covariance matrix of xt = (x1t, x2t, ..., xNt)′ rises with N. We represent the degree of cross‐sectional dependence by α, which we refer to as the ‘exponent of cross‐sectional dependence’, and define it through the standard deviation of the cross‐sectional average, Std(x̄t) = O(N^(α−1)), where x̄t is a simple cross‐sectional average of xit. We propose bias‐corrected estimators, derive their asymptotic properties for α > 1/2 and consider a number of extensions. We include a detailed Monte Carlo simulation study supporting the theoretical results. We also provide a number of empirical applications investigating the degree of inter‐linkages of real and financial variables in the global economy. Copyright © 2015 John Wiley & Sons, Ltd.
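As a reading aid, the defining scaling relation can be turned into a naive plug-in calculation. The sketch below only illustrates the relation Std(x̄t) = O(N^(α−1)); it is not the bias-corrected estimator proposed in the paper, and the function name and the toy one-factor example are mine.

```python
# Naive plug-in illustration of the exponent of cross-sectional dependence:
# Var(xbar_t) should scale like N^(2*alpha - 2), so alpha can be read off from
# the variance of the cross-sectional averages. Not the paper's bias-corrected estimator.
import numpy as np

def naive_alpha(x):
    """x: (T, N) array of observations x_it."""
    N = x.shape[1]
    xbar = x.mean(axis=1)                  # cross-sectional averages, one per period
    sigma2 = xbar.var(ddof=1)              # should behave like N^(2*alpha - 2)
    return 1.0 + 0.5 * np.log(sigma2) / np.log(N)

# Toy example: one strong factor loading on every unit implies alpha close to 1.
rng = np.random.default_rng(0)
T, N = 200, 500
factor = rng.standard_normal(T)
x = np.outer(factor, np.ones(N)) + rng.standard_normal((T, N))
print(naive_alpha(x))                      # approximately 1 for strongly dependent data
```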

3.
We review some first‐order and higher‐order asymptotic techniques for M‐estimators, and we study their stability in the presence of data contaminations. We show that the estimating function (ψ) and its derivative with respect to the parameter (∂ψ/∂θ) play a central role. We discuss in detail the first‐order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail area approximation via the Lugannani–Rice formula and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a bounded ψ (in the Euclidean norm) and a bounded ∂ψ/∂θ (e.g. in the Frobenius norm) yield stable inference in the presence of data contamination. We motivate and illustrate our findings by theoretical and numerical examples about the benchmark case of the one‐dimensional location model.
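To make the role of a bounded estimating function concrete, here is a small illustration with the Huber ψ in the one-dimensional location model, the benchmark case mentioned above; the tuning constant and function names are illustrative choices of mine, not taken from the paper.

```python
# A bounded estimating function in the location model: the Huber psi is clipped,
# so a single gross outlier has limited influence on the M-estimate.
import numpy as np
from scipy.optimize import brentq

def huber_psi(r, c=1.345):
    return np.clip(r, -c, c)                     # bounded estimating function

def huber_location(x, c=1.345):
    """Solve sum_i psi(x_i - mu) = 0 for mu."""
    score = lambda mu: huber_psi(x - mu, c).sum()
    return brentq(score, x.min(), x.max())       # the score is monotone in mu

x = np.concatenate([np.random.default_rng(1).standard_normal(100), [50.0]])
print(np.mean(x), huber_location(x))             # the mean is dragged by the outlier, the M-estimate barely moves
```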

4.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi‐dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on ℝ with one or two, or possibly even three, shape parameters, controlling skewness and/or tailweight, in addition to their all‐important location and scale parameters. One important and useful family comprises the ‘skew‐symmetric’ distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call ‘transformation of scale’—including two‐piece distributions—and by probability integral transformation of non‐uniform random variables. I also treat briefly the issues of multivariate extension, of distributions on subsets of ℝ and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute

5.
This paper proposes a test for the null that, in a cointegrated panel, the long‐run correlation between the regressors and the error term is different from zero. As is well known, in such a case the OLS estimator is T‐consistent, whereas it is √N T‐consistent when there is no endogeneity. Other estimators can be employed, such as the FM‐OLS, that are √N T‐consistent irrespective of whether exogeneity is present or not. Using the difference between the former and the latter estimator, we construct a test statistic which diverges under the null of endogeneity, whilst it is bounded under the alternative of exogeneity, and employ a randomization approach to carry out the test. Monte Carlo evidence shows that the test has the correct size and good power.

6.
The presence of weak instruments translates into a nearly singular problem in a control function representation. Therefore, an ℓ2‐norm type of regularization is proposed to implement the 2SLS estimation for addressing the weak instrument problem. The ℓ2‐norm regularization with a regularized parameter of order O(n) allows us to obtain the Rothenberg (1984) type of higher‐order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularized parameter yields a regularized concentration parameter of order O(n), which is used as a standardized factor in the higher‐order approximation. We also show that the proposed ℓ2‐norm regularization consequently reduces the finite sample bias. A number of existing estimators that address finite sample bias in the presence of weak instruments, especially Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.
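Reading the missing norm as the ℓ2 (ridge) norm, which fits the "nearly singular" motivation, the regularized first stage can be sketched as follows. This is a generic ridge-regularized 2SLS illustration under that assumption, not the authors' exact estimator; the choice lam = n simply mirrors the O(n) regularized parameter mentioned in the abstract.

```python
# Sketch of a ridge (l2-norm) regularized first stage followed by 2SLS. The
# regularization stabilizes the nearly singular first-stage problem caused by
# weak instruments; lam of order n stands in for the abstract's O(n) parameter.
import numpy as np

def ridge_2sls(y, x, Z, lam):
    """y: (n,) outcome, x: (n,) endogenous regressor, Z: (n, k) instruments."""
    k = Z.shape[1]
    pi_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ x)   # regularized first stage
    x_hat = Z @ pi_hat
    return (x_hat @ y) / (x_hat @ x)                               # second-stage slope

rng = np.random.default_rng(0)
n, k = 500, 5
Z = rng.standard_normal((n, k))
u = rng.standard_normal(n)
x = Z @ (0.05 * np.ones(k)) + u                 # weak instruments: tiny first-stage coefficients
y = 1.0 * x + 0.8 * u + rng.standard_normal(n)  # true coefficient is 1, x is endogenous
print(ridge_2sls(y, x, Z, lam=float(n)))
```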

7.
This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor‐augmented model approximations, but no formal model selection procedures have been developed. The main difference from previous factor‐augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our main contribution is to show the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor‐augmented model selection criteria are inconsistent in circumstances where N is of larger order than √T, where N and T are the cross‐section and time series dimensions of the dataset respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long‐horizon exchange rate forecasting using a recently proposed model with country‐specific idiosyncratic components from a panel of global exchange rates.

8.
Single‐index models are popular regression models that are more flexible than linear models and still maintain more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different “non‐smooth” estimators that avoid the difficult smoothing parameter selection. For about 30 years it has been conjectured that the profile least squares estimator is a √n‐consistent estimator of the regression parameter, but the only non‐smooth argmin/argmax estimators that are actually known to achieve this √n‐rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach results in √n‐consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.

9.
《Statistica Neerlandica》2018,72(2):126-156
In this paper, we study the application of Le Cam's one‐step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non‐linear least squares estimator, which typically requires the use of a multistep iterative algorithm and repetitive numerical integration of the ordinary differential equation system. The one‐step method starts from a preliminary √n‐consistent estimator of the parameter of interest and next turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one‐step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary √n‐consistent estimator that we use depends on non‐parametric smoothing, and we provide a data‐driven methodology for choosing its tuning parameter and support it by theory. An easy implementation scheme of the one‐step method for practical use is pointed out.
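A minimal generic sketch of the one-step idea described above: start from a pilot estimate and apply a single Gauss–Newton type update toward the least squares estimator. The ODE-specific details (numerical integration of the system, the data-driven tuning of the preliminary smoother) are omitted, and all names are illustrative.

```python
import numpy as np

def one_step(theta_prelim, residual_fn, jacobian_fn):
    """Single Gauss-Newton style update of a preliminary estimate.

    residual_fn(theta)  -> (n,) residuals y_i - f(t_i, theta)
    jacobian_fn(theta)  -> (n, p) derivatives of f(t_i, theta) w.r.t. theta
    """
    r = residual_fn(theta_prelim)
    J = jacobian_fn(theta_prelim)
    # theta_one = theta_prelim + (J'J)^{-1} J'r : one Newton-type step that turns a
    # root-n consistent pilot estimate into an asymptotic equivalent of the
    # (nonlinear) least squares estimator.
    return theta_prelim + np.linalg.solve(J.T @ J, J.T @ r)

# Toy example: y = exp(-theta * t) + noise with true theta = 1, pilot estimate 0.8.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
y = np.exp(-t) + 0.05 * rng.standard_normal(t.size)
res = lambda th: y - np.exp(-th[0] * t)
jac = lambda th: (-t * np.exp(-th[0] * t)).reshape(-1, 1)
print(one_step(np.array([0.8]), res, jac))   # moves close to the true value 1.0
```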

10.
Mixed causal–noncausal autoregressive (MAR) models have been proposed to model time series exhibiting nonlinear dynamics. Possible exogenous regressors are typically substituted into the error term to maintain the MAR structure of the dependent variable. We introduce a representation including these covariates, called MARX, to study their direct impact. The asymptotic distribution of the MARX parameters is derived for a class of non-Gaussian densities. For a Student's t likelihood, closed-form standard errors are provided. By simulations, we evaluate the MARX model selection procedure using information criteria. We examine the influence of the exchange rate and the industrial production index on commodity prices.

11.
The focus of this article is modeling the magnitude and duration of monotone periods of log‐returns. For this, we propose a new bivariate law assuming that the probabilistic framework over the magnitude and duration is based on the joint distribution of (X,N), where N is geometric distributed and X is the sum of an identically distributed sequence of inverse‐Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log‐returns, respectively, and the magnitude comes from an infinite mixture of inverse‐Gaussian distributions. This new model is named the bivariate inverse‐Gaussian geometric (BIG, in short) law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that the BIG law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration to the log‐returns of Tyco International stock demonstrates the superior performance of the BIG law compared to an existing model. We expect that the proposed law can be considered a powerful tool in the modeling of log‐returns and other episode analyses such as water resources management, risk assessment, and civil engineering projects.
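A small simulation sketch of the (X, N) structure described above, with N geometric and X the sum of N independent inverse-Gaussian (Wald) variables. The parameter names p, mu and lam are my own, not the paper's notation, and this is only the sampling mechanism, not the BIG likelihood machinery.

```python
# Simulate (X, N) pairs: N is geometric on {1, 2, ...} (duration) and X is the
# sum of N i.i.d. inverse-Gaussian draws (magnitude), as described in the abstract.
import numpy as np

def simulate_big(size, p=0.3, mu=1.0, lam=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.geometric(p, size)                              # durations
    x = np.array([rng.wald(mu, lam, ni).sum() for ni in n]) # magnitudes
    return x, n

x, n = simulate_big(5)
print(list(zip(np.round(x, 2), n)))
```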

12.
In manufacturing industries, it is often seen that the bilateral specification limits corresponding to a particular quality characteristic are not symmetric with respect to the stipulated target. A unified superstructure of univariate process capability indices was specially designed for processes with asymmetric specification limits. However, since in most practical situations a process involves a number of inter‐related quality characteristics, a multivariate analogue of this superstructure, called CM(u,v), was subsequently developed. In the present paper, we study some properties of CM(u,v), such as its threshold value and its compatibility with the asymmetry in the loss function. We also discuss estimation procedures for plug‐in estimators of some of the member indices of CM(u,v). Finally, the superstructure is applied to a numerical example to supplement the theory developed in this article.

13.
We compare several representative sophisticated model averaging and variable selection techniques for forecasting stock returns. Our results confirm that, when the models are estimated traditionally, the simple combination of individual predictors is superior. However, sophisticated models improve dramatically once we combine them with the historical average and take parameter instability into account. An equal‐weighted combination of the historical average with the standard multivariate predictive regression estimated using the average windows method, for example, achieves a statistically significant monthly out-of-sample R² of 1.10% and annual utility gains of 2.34%. We obtain similar gains for predicting future macroeconomic conditions.
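A compact sketch of the two evaluation ingredients mentioned above: the equal-weighted combination with the historical average, and an out-of-sample R² measured against the historical-average benchmark. Function names and the toy data are mine, not the paper's code.

```python
# Equal-weighted combination with the historical average, evaluated with a
# Campbell-Thompson style out-of-sample R^2 against the historical-average benchmark.
import numpy as np

def historical_average(returns, t):
    """Forecast for period t using only data up to t-1."""
    return returns[:t].mean()

def oos_r2(realized, model_fc, benchmark_fc):
    """1 - MSE(model) / MSE(historical-average benchmark)."""
    num = np.sum((realized - model_fc) ** 2)
    den = np.sum((realized - benchmark_fc) ** 2)
    return 1.0 - num / den

def combine(model_fc, benchmark_fc):
    return 0.5 * model_fc + 0.5 * benchmark_fc   # equal-weighted combination

rng = np.random.default_rng(0)
r = 0.005 + 0.04 * rng.standard_normal(240)                      # toy monthly returns
bench = np.array([historical_average(r, t) for t in range(120, 240)])
model = bench + 0.01 * rng.standard_normal(120)                  # a noisy "predictive" model
print(oos_r2(r[120:240], combine(model, bench), bench))
```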

14.
Consumption‐based equivalence scales are estimated by applying the extended partially linear model (EPLM) to the 1998 German Income and Consumption Survey (EVS). In this model the equivalence scales are identified from nonlinearities in household demand. The econometric framework should therefore not impose strong restrictions on the functional forms of household expenditure shares. The chosen semi‐parametric specification meets this requirement: it is flexible, it yields √n‐consistent parameter estimates and it is consistent with consumer theory. Estimated equivalence scales are below or in the range of the expert equivalence scales of the German social benefits system. Copyright © 2006 John Wiley & Sons, Ltd.

15.
We investigate the prevalence and sources of reporting errors in 30,993 hypothesis tests from 370 articles in three top economics journals. We define reporting errors as inconsistencies between the significance levels reported by means of eye‐catchers and calculated p‐values based on reported statistical values, such as coefficients and standard errors. While 35.8% of the articles contain at least one reporting error, only 1.3% of the investigated hypothesis tests are afflicted by reporting errors. For strong reporting errors, for which either the eye‐catcher or the calculated p‐value signals statistical significance but the other does not, the error rate is 0.5% of the investigated hypothesis tests, corresponding to 21.6% of the articles having at least one strong reporting error. Our analysis suggests a bias in favor of errors for which eye‐catchers signal statistical significance but calculated p‐values do not. Survey responses from the respective authors, replications, and exploratory regression analyses indicate some solutions to mitigate the prevalence of reporting errors in future research.
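The consistency check described above can be sketched as follows: recompute a two-sided p-value from a reported coefficient and standard error and compare it with the significance level signalled by the eye-catcher (stars). The normal approximation and the star thresholds are my assumptions, not necessarily the authors' exact protocol.

```python
# Flag a reporting error when the number of stars conflicts with the p-value
# implied by the reported coefficient and standard error.
from scipy import stats

def reporting_error(coef, se, stars, thresholds=(0.10, 0.05, 0.01)):
    p = 2 * stats.norm.sf(abs(coef / se))        # two-sided p-value, normal approximation
    implied_stars = sum(p < t for t in thresholds)
    return implied_stars != stars

print(reporting_error(coef=0.12, se=0.05, stars=3))   # p is about 0.016, so '***' would be an error
```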

16.
Univariate continuous distributions have three possible types of support, exemplified by the whole real line (−∞,∞), the semi‐finite interval (0,∞) and the bounded interval (0,1). This paper is about connecting distributions on these supports via ‘natural’ simple transformations in such a way that tail properties are preserved. In particular, this work is focussed on the case where the tails (at ±∞) of densities are heavy, decreasing as a (negative) power of their argument; connections are then especially elegant. At boundaries (0 and 1), densities behave conformably with a directly related dependence on power of argument. The transformation from (0,1) to (0,∞) is the standard odds transformation. The transformation from (0,∞) to (−∞,∞) is a novel identity‐minus‐reciprocal transformation. The main points of contact with existing distributions are with the transformations involved in the Birnbaum–Saunders distribution and, especially, the Johnson family of distributions. Relationships between various other existing and newly proposed distributions are explored.
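A minimal sketch of the two connecting transformations named above: the odds map from (0,1) to (0,∞) and the identity-minus-reciprocal map from (0,∞) to the whole real line. Composing them carries a variable on (0,1) all the way to (−∞,∞).

```python
import numpy as np

def odds(u):
    """(0,1) -> (0,inf): the standard odds transformation."""
    return u / (1.0 - u)

def identity_minus_reciprocal(y):
    """(0,inf) -> (-inf,inf)."""
    return y - 1.0 / y

u = np.array([0.1, 0.5, 0.9])
print(odds(u))                               # [0.111..., 1.0, 9.0]
print(identity_minus_reciprocal(odds(u)))    # carried through to the real line, 0.5 maps to 0
```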

17.
Gumbel’s Identity equates the Bonferroni sum with the k‐th binomial moment of the number of events Mn which occur out of n arbitrary events. We provide a unified treatment of familiar probability bounds on a union of events by Bonferroni, Galambos–Rényi, Dawson–Sankoff, and Chung–Erdös, as well as less familiar bounds by Fréchet and Gumbel, all of which are expressed in terms of Bonferroni sums, by showing that all these arise as bounds in a more general setting in terms of binomial moments of a general non‐negative integer‐valued random variable. Use of Gumbel’s Identity then gives the inequalities in familiar Bonferroni sum form. This approach simplifies existing proofs. It also allows generalization of the results of Fréchet and Gumbel to give bounds on the probability that at least t of n events occur, for any 1 ≤ t ≤ n. A further consequence of the approach is an improvement of a recent bound of Petrov which itself generalizes the Chung–Erdös bound.
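For readers unfamiliar with the terminology, Gumbel's Identity referred to above can be written out as follows; the symbols S_k and A_i are my own notation for the k-th Bonferroni sum and the underlying events, not necessarily the paper's.

```latex
S_k \;=\; \sum_{1 \le i_1 < \cdots < i_k \le n} P\bigl(A_{i_1} \cap \cdots \cap A_{i_k}\bigr)
    \;=\; \mathbb{E}\binom{M_n}{k}, \qquad k = 1, \dots, n,
```

where M_n is the number of the events A_1, ..., A_n that occur, so the right-hand side is the k-th binomial moment of M_n.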

18.
Junius and Oosterhaven (2003) developed the GRAS algorithm that minimizes the information gain when updating input–output tables with both positive and negative signs. Jackson and Murray (2004), however, claim that minimizing squared differences in coefficients produces a smaller information gain, which is theoretically impossible. In this comment, calculation errors are sorted out from differences in measures, and it is shown that the information gain needs to be taken in absolute terms when increasing and decreasing cell values occur together. The numerical results show that GRAS outperforms both sign-preserving alternatives in all but one comparison of lesser economic importance. Moreover, as opposed to the result of Jackson and Murray, they show that minimizing absolute differences consistently outperforms minimizing squared differences, which overweights large errors in small coefficients.

19.
We examine the long‐run GDP impacts of changes in total government expenditure and in the shares of different spending categories for a sample of OECD countries since the 1970s, taking account of methods of financing expenditure changes and possible endogenous relationships. We provide more systematic empirical evidence than available hitherto for OECD countries, obtaining strong evidence that reallocating total spending towards infrastructure and education is positive for long‐run output levels. Reallocating spending towards social welfare (and away from all other expenditure categories pro‐rata) may be associated with modest negative effects on output in the long run.

20.
In dynamic panel regression, when the variance ratio of individual effects to disturbance is large, the system‐GMM estimator will have large asymptotic variance and poor finite sample performance. To deal with this variance ratio problem, we propose a residual‐based instrumental variables (RIV) estimator, which uses as the instrument for the level equation the residual from an auxiliary regression of Δyi,t−1. The RIV estimator is consistent and asymptotically normal under general assumptions. More importantly, its asymptotic variance is almost unaffected by the variance ratio of individual effects to disturbance. Monte Carlo simulations show that the RIV estimator has better finite sample performance than alternative estimators. The RIV estimator generates less finite sample bias than the difference‐GMM, system‐GMM, collapsing‐GMM and Level‐IV estimators in most cases. Under RIV estimation, the variance ratio problem is well controlled, and the empirical distribution of its t‐statistic is similar to the standard normal distribution for moderate sample sizes.
