Similar Documents
20 similar documents found.
1.
The presence of weak instruments is translated into a nearly singular problem in a control function representation. Therefore, the ‐norm type of regularization is proposed to implement the 2SLS estimation for addressing the weak instrument problem. The ‐norm regularization with a regularization parameter O(n) allows us to obtain the Rothenberg (1984) type of higher-order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularization parameter yields the regularized concentration parameter O(n), which is used as a standardized factor in the higher-order approximation. We also show that the proposed ‐norm regularization consequently reduces the finite sample bias. A number of existing estimators that address finite sample bias in the presence of weak instruments, especially Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.

2.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi-dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on ℝ with one or two, or possibly even three, shape parameters, controlling skewness and/or tailweight, in addition to their all-important location and scale parameters. One important and useful family comprises the ‘skew-symmetric’ distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call ‘transformation of scale’—including two-piece distributions—and by probability integral transformation of non-uniform random variables. I also treat briefly the issues of multivariate extension, of distributions on subsets of ℝ and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal.
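As a concrete illustration of the two-piece construction mentioned above, one common parameterisation (an assumption for illustration; the article surveys several variants) takes a symmetric unimodal base density f_0 and gives it different scales on either side of the mode:

f(x | \mu, \sigma_1, \sigma_2) = \frac{2}{\sigma_1 + \sigma_2}\left[ f_0\!\left(\frac{x-\mu}{\sigma_1}\right)\mathbf{1}\{x < \mu\} + f_0\!\left(\frac{x-\mu}{\sigma_2}\right)\mathbf{1}\{x \ge \mu\} \right],

so that sigma_1 ≠ sigma_2 induces skewness while the choice of f_0 controls tailweight.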

3.
This paper proposes a test for the null that, in a cointegrated panel, the long-run correlation between the regressors and the error term is different from zero. As is well known, in such a case the OLS estimator is T-consistent, whereas it is ‐consistent when there is no endogeneity. Other estimators can be employed, such as FM-OLS, which are ‐consistent irrespective of whether exogeneity is present or not. Using the difference between the former and the latter estimator, we construct a test statistic that diverges at a rate under the null of endogeneity, whilst it is bounded under the alternative of exogeneity, and we employ a randomization approach to carry out the test. Monte Carlo evidence shows that the test has the correct size and good power.

4.
In manufacturing industries, it is often seen that the bilateral specification limits corresponding to a particular quality characteristic are not symmetric with respect to the stipulated target. A unified superstructure of univariate process capability indices was specially designed for processes with asymmetric specification limits. However, since in most practical situations a process involves a number of inter-related quality characteristics, a multivariate analogue of this superstructure, called CM(u,v), was subsequently developed. In the present paper, we study some properties of CM(u,v), such as its threshold value and its compatibility with the asymmetry in the loss function. We also discuss estimation procedures for plug-in estimators of some of the member indices of CM(u,v). Finally, the superstructure is applied to a numerical example to supplement the theory developed in this article.

5.
Statistica Neerlandica (2018), 72(2), 126–156
In this paper, we study the application of Le Cam's one-step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non-linear least squares estimator, which typically requires the use of a multistep iterative algorithm and repetitive numerical integration of the ordinary differential equation system. The one-step method starts from a preliminary ‐consistent estimator of the parameter of interest and then turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one-step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary ‐consistent estimator that we use depends on non-parametric smoothing, and we provide a data-driven methodology for choosing its tuning parameter and support it by theory. An easy implementation scheme of the one-step method for practical use is pointed out.
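For orientation, a hedged sketch of the generic one-step device, written in maximum-likelihood notation (the paper's least-squares version uses the analogous Gauss–Newton quantities): starting from a preliminary root-n-consistent estimator \tilde{\theta}_n, a single Newton-type update

\hat{\theta}_n^{\mathrm{OS}} = \tilde{\theta}_n + \hat{I}_n(\tilde{\theta}_n)^{-1} \, \frac{1}{n}\sum_{i=1}^{n} \dot{\ell}(\tilde{\theta}_n; X_i),

where \dot{\ell} is the score and \hat{I}_n an estimate of the information matrix, is already asymptotically equivalent to the fully iterated estimator, which is what makes the procedure numerically cheap.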

6.
This paper provides a characterisation of the degree of cross-sectional dependence in a two-dimensional array, {xit, i = 1, 2, ..., N; t = 1, 2, ..., T}, in terms of the rate at which the variance of the cross-sectional average of the observed data varies with N. Under certain conditions this is equivalent to the rate at which the largest eigenvalue of the covariance matrix of xt = (x1t, x2t, ..., xNt)′ rises with N. We represent the degree of cross-sectional dependence by α, which we refer to as the ‘exponent of cross-sectional dependence’, and define it by the standard deviation, , where is a simple cross-sectional average of xit. We propose bias-corrected estimators, derive their asymptotic properties for α > 1/2 and consider a number of extensions. We include a detailed Monte Carlo simulation study supporting the theoretical results. We also provide a number of empirical applications investigating the degree of inter-linkages of real and financial variables in the global economy.
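A hedged reading of the definition sketched above (the exact normalisation is an assumption based on standard usage in this literature, not a quotation from the paper): with the simple cross-sectional average x̄_t = N^{-1} Σ_i x_it, the exponent α is defined so that

\mathrm{Std}(\bar{x}_t) = O\!\left(N^{\alpha - 1}\right), \qquad \text{equivalently} \qquad \mathrm{Var}(\bar{x}_t) = O\!\left(N^{2\alpha - 2}\right),

so α = 1 corresponds to strong cross-sectional dependence, while values of α at or below 1/2 correspond to weak dependence under which Var(x̄_t) vanishes at the usual 1/N rate or faster.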

7.
Employee ownership has been an area of significant practitioner and academic interest for the past four decades. Yet, empirical results on the relationship between employee ownership and firm performance remain mixed. To aggregate findings and provide potential direction for future theoretical development, we conducted a meta-analysis of 102 samples representing 56,984 firms. Employee ownership has a small, but positive and statistically significant, relation to firm performance ( = 0.04). The effect is generally positive for studies with different sampling designs (samples assessing change in performance before versus after the adoption of employee ownership, or samples on firms with employee ownership), different performance operationalisations (efficiency or growth) and different firm types (publicly held or privately held). Suggesting benefits of employee ownership in a variety of contexts, we found no differences in effects on performance in publicly held versus privately held firms, stock or stock option-based ownership plans, or differences in effects across different firm sizes (i.e. number of employees). We do find that the effect of employee ownership on performance has increased in studies over time and that studies with samples from outside the USA report stronger effects than those within. We also find little to no evidence of publication bias.

8.
EuroMInd‐ is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom-up approach, pooling the density estimates of 11 GDP components, by output and expenditure type. The components' density estimates are obtained from a medium-size dynamic factor model handling mixed frequencies of observation and ragged-edge data structures. They reflect both parameter and filtering uncertainty and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model parameters, and conditional simulation filters for simulating from the predictive distribution of GDP. Both algorithms process the data sequentially as they become available in real time. The GDP density estimates for the output and expenditure approaches are combined using alternative weighting schemes and evaluated with different tests based on the probability integral transform and by applying scoring rules.

9.
This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor-augmented model approximations, but no formal model selection procedures have been developed. The main difference from previous factor-augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our main contribution is to show the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor-augmented model selection criteria are inconsistent in circumstances where N is of larger order than , where N and T are the cross-section and time series dimensions of the dataset respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long-horizon exchange rate forecasting using a recently proposed model with country-specific idiosyncratic components from a panel of global exchange rates.

10.
This study extends the rate of convergence theorem for M-estimators presented by van der Vaart and Wellner (Weak Convergence and Empirical Processes: With Applications to Statistics, Springer-Verlag, New York, 1996), who gave a result of the form r , to a result of the form sup_n E | r , for any p ≥ 1. This result is useful for deriving the moment convergence of the rescaled residual. An application to maximum likelihood estimators is discussed.

11.
The focus of this article is modeling the magnitude and duration of monotone periods of log-returns. For this, we propose a new bivariate law assuming that the probabilistic framework over the magnitude and duration is based on the joint distribution of (X,N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse-Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log-returns, respectively, and the magnitude comes from an infinite mixture of inverse-Gaussian distributions. This new model is named the bivariate inverse-Gaussian geometric ( in short) law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that the law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration with the log-returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can be considered as a powerful tool in the modeling of log-returns and other episode analyses such as water resources management, risk assessment, and civil engineering projects.
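A minimal simulation sketch of the construction described above, drawing N from a geometric law and X as the sum of N i.i.d. inverse-Gaussian variables; the parameter names p, mu, lam and the scipy parameterisation are illustrative assumptions, not the paper's notation:

```python
import numpy as np
from scipy import stats

def simulate_magnitude_duration(n_draws, p=0.3, mu=1.0, lam=2.0, seed=None):
    """Draw (X, N) pairs: N ~ Geometric(p) on {1, 2, ...}; given N = n,
    X is the sum of n i.i.d. inverse-Gaussian(mean=mu, shape=lam) variables,
    drawn independently of N."""
    rng = np.random.default_rng(seed)
    N = rng.geometric(p, size=n_draws)              # durations
    # scipy's invgauss with shape mu/lam and scale lam has mean mu and shape lam
    X = np.array([stats.invgauss.rvs(mu / lam, scale=lam, size=n,
                                     random_state=rng).sum() for n in N])
    return X, N

X, N = simulate_magnitude_duration(10_000, seed=1)
print(X.mean(), N.mean())   # roughly mu / p and 1 / p, since E[X] = E[N] * mu
```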

12.
Single-index models are popular regression models that are more flexible than linear models and still maintain more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different “non-smooth” estimators that avoid the difficult smoothing parameter selection. For about 30 years, it has been conjectured that the profile least squares estimator is an ‐consistent estimator of the regression parameter, but the only non-smooth argmin/argmax estimators that are actually known to achieve this ‐rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach results in ‐consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.

13.
We propose new summary statistics for intensity-reweighted moment stationary point processes, that is, point processes with translation-invariant n-point correlation functions for all , which generalise the well-known J-, empty space, and spherical Palm contact distribution functions. We represent these statistics in terms of generating functionals and relate the inhomogeneous J-function to the inhomogeneous reduced second moment function. Extensions to space–time and marked point processes are briefly discussed.

14.
In this study, we consider residual-based bootstrap methods to construct the confidence interval for structural impulse response functions in factor-augmented vector autoregressions. In particular, we compare the bootstrap with factor estimation (Procedure A) with the bootstrap without factor estimation (Procedure B). Both procedures are asymptotically valid under the condition , where N and T are the cross-sectional dimension and the time dimension, respectively. However, Procedure A is also valid even when with 0 ≤ c < because it accounts for the effect of the factor estimation errors on the impulse response function estimator. Our simulation results suggest that Procedure A achieves more accurate coverage rates than those of Procedure B, especially when N is much smaller than T. In the monetary policy analysis of Bernanke et al. (Quarterly Journal of Economics, 2005, 120(1), 387–422), the proposed methods can produce statistically different results.

15.
Gumbel’s Identity equates the Bonferroni sum with the k-th binomial moment of the number of events Mn which occur, out of n arbitrary events. We provide a unified treatment of familiar probability bounds on a union of events by Bonferroni, Galambos–Rényi, Dawson–Sankoff, and Chung–Erdős, as well as less familiar bounds by Fréchet and Gumbel, all of which are expressed in terms of Bonferroni sums, by showing that all these arise as bounds in a more general setting in terms of binomial moments of a general non-negative integer-valued random variable. Use of Gumbel’s Identity then gives the inequalities in familiar Bonferroni sum form. This approach simplifies existing proofs. It also allows generalization of the results of Fréchet and Gumbel to give bounds on the probability that at least t of n events occur, for any t. A further consequence of the approach is an improvement of a recent bound of Petrov, which itself generalizes the Chung–Erdős bound.
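For the reader's convenience, the identity in question written out in standard notation (this is the textbook form, not a quotation from the paper): with M_n = Σ_{i=1}^n 1{A_i} the number of events that occur,

S_k \;=\; \sum_{1 \le i_1 < \cdots < i_k \le n} P\!\left(A_{i_1}\cap\cdots\cap A_{i_k}\right) \;=\; E\binom{M_n}{k}, \qquad k = 1,\dots,n,

so that classical inequalities such as the first-order Bonferroni bounds S_1 − S_2 ≤ P(M_n ≥ 1) ≤ S_1 can be read as statements about the binomial moments of M_n.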

16.
We consider the estimation of nonlinear models with mismeasured explanatory variables, when information on the marginal distribution of the true values of these variables is available. We derive a semi-parametric MLE that is shown to be $\sqrt{n}$-consistent and asymptotically normally distributed. In a simulation experiment we find that the finite sample distribution of the estimator is close to the asymptotic approximation. The semi-parametric MLE is applied to a duration model for AFDC welfare spells with misreported welfare benefits. The marginal distribution of the correctly measured welfare benefits is obtained from an administrative source.

17.
We investigate the prevalence and sources of reporting errors in 30,993 hypothesis tests from 370 articles in three top economics journals. We define reporting errors as inconsistencies between significance levels reported by means of eye-catchers and calculated p-values based on reported statistical values, such as coefficients and standard errors. While 35.8% of the articles contain at least one reporting error, only 1.3% of the investigated hypothesis tests are afflicted by reporting errors. For strong reporting errors, for which either the eye-catcher or the calculated p-value signals statistical significance but the respective other one does not, the error rate is 0.5% for the investigated hypothesis tests, corresponding to 21.6% of the articles having at least one strong reporting error. Our analysis suggests a bias in favor of errors for which eye-catchers signal statistical significance but calculated p-values do not. Survey responses from the respective authors, replications, and exploratory regression analyses indicate some solutions to mitigate the prevalence of reporting errors in future research.
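A hedged sketch of the kind of consistency check described above: recompute a p-value from a reported coefficient and standard error and compare it with the significance claimed by the eye-catcher. The star thresholds, the two-sided normal (z) approximation and the 10% cut-off are illustrative assumptions, not necessarily the paper's exact protocol:

```python
from scipy import stats

STAR_ALPHA = {"***": 0.01, "**": 0.05, "*": 0.10, "": 1.00}   # assumed star convention

def check_reported_test(coef, se, stars):
    """Return (recalculated p-value, strong-error flag) for one reported test."""
    z = coef / se
    p = 2 * stats.norm.sf(abs(z))                 # two-sided p-value, z approximation
    claimed_significant = STAR_ALPHA[stars] <= 0.10
    calculated_significant = p <= 0.10            # loosest conventional level
    # "strong" error: eye-catcher and recalculated p-value disagree on significance
    return p, claimed_significant != calculated_significant

print(check_reported_test(0.20, 0.15, "**"))      # p ≈ 0.18 yet two stars reported -> flagged
```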

18.
Finding a suitable representation of multivariate data is fundamental in many scientific disciplines. Projection pursuit (PP) aims to extract interesting ‘non-Gaussian’ features from multivariate data, and tends to be computationally intensive even when applied to data of low dimension. In high-dimensional settings, a recent work (Bickel et al., 2018) on PP addresses the asymptotic characterization of, and conjectures about, the feasible projections as the dimension grows with the sample size. To gain practical utility from PP and to learn theoretical insights into it in an integrated way, data analytic tools for evaluating the behaviour of PP in high dimensions become increasingly desirable but are less explored in the literature. This paper focuses on developing computationally fast and effective approaches central to finite sample studies for (i) visualizing the feasibility of PP in extracting features from high-dimensional data, as compared with alternative methods like PCA and ICA, and (ii) assessing the plausibility of PP in cases where asymptotic studies are lacking or unavailable, with the goal of better understanding the practicality, limitations and challenges of PP in the analysis of large data sets.
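To make the idea concrete, here is a minimal, purely illustrative one-dimensional projection pursuit sketch that searches for a unit-norm direction maximising a simple non-Gaussianity index (absolute excess kurtosis); the index, the optimiser and the toy data are assumptions for illustration, not the tools developed in the paper:

```python
import numpy as np
from scipy import optimize, stats

def pp_direction(X, n_restarts=20, seed=0):
    """Crude projection pursuit: maximise |excess kurtosis| of X @ w over unit-norm w."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    best_w, best_val = None, -np.inf
    for _ in range(n_restarts):                       # random restarts to escape local optima
        w0 = rng.standard_normal(X.shape[1])
        res = optimize.minimize(
            lambda w: -abs(stats.kurtosis(Xc @ (w / np.linalg.norm(w)))),
            w0, method="Nelder-Mead")
        if -res.fun > best_val:
            best_val = -res.fun
            best_w = res.x / np.linalg.norm(res.x)
    return best_w, best_val

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
X[:250, 0] += 3.0                                     # hide a two-group structure in coordinate 0
w, val = pp_direction(X)
print(np.round(w, 2), round(val, 2))                  # the direction should load mainly on coordinate 0
```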

19.
Several exact inference procedures for logistic regression require the simulation of a 0-1 dependent vector according to its conditional distribution, given the sufficient statistics for some nuisance parameters. This is viewed, in this work, as a sampling problem involving a population of n units, unequal selection probabilities and balancing constraints. The basis for this reformulation of exact inference is a proposition deriving the limit, as n goes to infinity, of the conditional distribution of the dependent vector given the logistic regression sufficient statistics. It is proposed to sample from this distribution using the cube sampling algorithm. The interest of this approach to exact inference is illustrated by tackling new problems. First, it allows exact inference to be carried out with continuous covariates. It is also useful for investigating partial correlation between several 0-1 vectors. This is illustrated with an example dealing with presence-absence data in ecology.

20.
Jean-Louis Bodin has had a distinguished career as an official statistician in the French and European statistical systems, where he has been working for 40 years. Born in 1941 in Bordeaux, the capital city of one of the most famous vineyards in the world, he became a graduate of two prestigious French schools, the Ecole Polytechnique in 1963, then the ENSAE (Ecole Nationale de la Statistique et de l'Administration Economique) in 1966. After having spent 20 years in different positions in the French Statistical System, he dedicated the last 20 years of his career to international relations in official statistics and to cooperation for strengthening the statistical capacities of transition and developing countries. In particular, he was one of the main drafters of the UN Resolution on Fundamental Principles for Official Statistics and of the African Statistical Charter. He is also known for having created AFRISTAT, the Economic and Statistical Observatory of Sub-Saharan African countries. He was also for many years one of the French representatives in the UN Statistical Commission, the UN Conference of European Statisticians and the Statistical Programme Committee of the European Union. Jean-Louis Bodin has played a very active role for 40 years in national and international statistical societies. He served in many different positions within the ISI family: executive director of the International Association of Survey Statisticians from 1981 to 1985 under the successive chairmanships of Gérard Théodore and Leslie Kish, founding member in 1985 with Vera Nyitrai of the International Association for Official Statistics and president of the Association from 1989 to 1991, secretary-general of the National Organizing Committee of the 47th ISI Session held in Paris in August 1989, Chair of the Programme Coordinating Committee of the 50th ISI Session held in Beijing in August 1995, ISI president-elect then president from 1997 to 2001, and president of the Jury of the Mahalanobis Prize from 2002 to 2005. He was also president of the Société de Statistique de Paris in 1994. He was named a Fellow of the American Statistical Association in 1996 and was a member of the ASA Committee for International Relations in Statistics from 1993 to 1997. He was made a Chevalier de la Légion d'Honneur (Knight of the Legion of Honour) in France in 2006 and a Kawaler Orderu Zasługi (Knight of the Order of Merit) in Poland in 1997. He received the Medal of Statistical Merit in Vietnam in 2003 and the African Statistical Award bestowed by the UN Economic Commission for Africa in 2012. This conversation was held in June 2013 with Gilbert Saporta in Paris.
Gilbert Saporta: Welcome to this interview, Jean-Louis. I'm delighted that you could spend some time with me to discuss your career, your achievements and your views on some aspects of official statistics. It appears that your professional career as a French official statistician and your participation in ISI activities were tightly linked.


