Similar Documents
20 similar documents found (search time: 845 ms)
1.
The presence of weak instruments translates into a nearly singular problem in a control function representation. The ‐norm type of regularization is therefore proposed to implement 2SLS estimation and address the weak instrument problem. The ‐norm regularization with a regularized parameter O(n) allows us to obtain the Rothenberg (1984) type of higher‐order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularized parameter yields the regularized concentration parameter O(n), which is used as a standardizing factor in the higher‐order approximation. We also show that the proposed ‐norm regularization reduces the finite sample bias. A number of existing estimators that address finite sample bias in the presence of weak instruments, notably Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.
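A minimal numerical sketch of ridge-regularized 2SLS may help fix ideas. Everything here (the DGP, the weak first-stage coefficients, the ridge form of the penalty with a parameter of order n) is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 10          # sample size, number of (weak) instruments
beta = 1.0              # true structural coefficient

Z = rng.normal(size=(n, k))
pi = np.full(k, 0.05)                       # weak first stage
u = rng.normal(size=n)
v = 0.8 * u + 0.6 * rng.normal(size=n)      # endogeneity via correlated errors
x = Z @ pi + v
y = beta * x + u

def ridge_2sls(y, x, Z, lam):
    """2SLS with a ridge-type penalty on the first-stage coefficients."""
    P = Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T)
    x_hat = P @ x                            # regularized first-stage fit
    return float((x_hat @ y) / (x_hat @ x))

b_2sls = ridge_2sls(y, x, Z, lam=0.0)        # ordinary 2SLS as the lam = 0 case
b_reg = ridge_2sls(y, x, Z, lam=float(n))    # penalty of order n, as in the abstract
```

With lam = 0 the projection is the usual instrument projection, so the function nests ordinary 2SLS.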

2.
We review some first‐order and higher‐order asymptotic techniques for M‐estimators, and we study their stability in the presence of data contamination. We show that the estimating function ψ and its derivative with respect to the parameter play a central role. We discuss in detail the first‐order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail area approximation via the Lugannani–Rice formula and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a bounded ψ (in the Euclidean norm) and a bounded derivative of ψ (e.g. in the Frobenius norm) yield stable inference in the presence of data contamination. We motivate and illustrate our findings by theoretical and numerical examples in the benchmark case of the one‐dimensional location model.
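The stability role of a bounded ψ can be illustrated numerically in the one-dimensional location model. Huber's clipped ψ is used as the bounded example here (an assumption; the abstract does not name a specific ψ):

```python
import numpy as np

def m_estimate(x, psi, tol=1e-8, max_iter=200):
    """Solve sum_i psi(x_i - theta) = 0 for a location M-estimate by bisection."""
    lo, hi = float(np.min(x)), float(np.max(x))
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if np.sum(psi(x - mid)) > 0:   # estimating equation is decreasing in theta
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

psi_mean = lambda r: r                              # unbounded psi: the sample mean
psi_huber = lambda r: np.clip(r, -1.345, 1.345)     # bounded psi (Huber)

rng = np.random.default_rng(1)
clean = rng.normal(size=200)
contaminated = np.concatenate([clean, [50.0] * 10])  # 5% gross outliers

# shift of each estimate caused by the contamination
shift_mean = abs(m_estimate(contaminated, psi_mean) - m_estimate(clean, psi_mean))
shift_huber = abs(m_estimate(contaminated, psi_huber) - m_estimate(clean, psi_huber))
```

The unbounded ψ (the mean) is dragged far by the outliers, while the bounded ψ barely moves, which is the stability phenomenon the abstract describes.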

3.
Univariate continuous distributions are one of the fundamental components on which statistical modelling, ancient and modern, frequentist and Bayesian, multi‐dimensional and complex, is based. In this article, I review and compare some of the main general techniques for providing families of typically unimodal distributions on the real line with one or two, or possibly even three, shape parameters, controlling skewness and/or tailweight, in addition to their all‐important location and scale parameters. One important and useful family comprises the ‘skew‐symmetric’ distributions brought to prominence by Azzalini. As these are covered in considerable detail elsewhere in the literature, I focus more on their complements and competitors. Principal among these are distributions formed by transforming random variables, by what I call ‘transformation of scale’—including two‐piece distributions—and by probability integral transformation of non‐uniform random variables. I also treat briefly the issues of multi‐variate extension, of distributions on subsets of the real line, and of distributions on the circle. The review and comparison is not comprehensive, necessarily being selective and therefore somewhat personal. © 2014 The Authors. International Statistical Review © 2014 International Statistical Institute

4.
In manufacturing industries, the bilateral specification limits corresponding to a particular quality characteristic are often not symmetric with respect to the stipulated target. A unified superstructure of univariate process capability indices was specially designed for processes with asymmetric specification limits. However, since in most practical situations a process involves a number of inter‐related quality characteristics, a multivariate analogue, called CM(u,v), was subsequently developed. In the present paper, we study some properties of CM(u,v), such as its threshold value and its compatibility with the asymmetry in the loss function. We also discuss estimation procedures for plug‐in estimators of some of the member indices of CM(u,v). Finally, the superstructure is applied to a numerical example to supplement the theory developed in this article.
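For context, a sketch of a plug-in estimator for a univariate superstructure (here Vännman's C_p(u, v), a common form of such superstructures) is shown below; the multivariate CM(u, v) of the paper and its asymmetric-limits refinements are not reproduced:

```python
import numpy as np

def c_p_uv(x, lsl, usl, target, u, v):
    """Plug-in estimate of the univariate superstructure
    C_p(u, v) = (d - u*|mu - M|) / (3 * sqrt(sigma^2 + v*(mu - T)^2)),
    where d is the half-width and M the midpoint of the specification interval."""
    d = (usl - lsl) / 2
    m = (usl + lsl) / 2
    mu, sig = x.mean(), x.std(ddof=1)
    return float((d - u * abs(mu - m)) / (3 * np.sqrt(sig**2 + v * (mu - target)**2)))

rng = np.random.default_rng(10)
x = rng.normal(10.0, 0.5, size=5000)     # process on target, sigma = 0.5 (illustrative)

cp = c_p_uv(x, lsl=8.0, usl=12.0, target=10.0, u=0, v=0)    # classic C_p = d / (3 sigma)
cpm = c_p_uv(x, lsl=8.0, usl=12.0, target=10.0, u=0, v=1)   # Taguchi-type C_pm
```

Setting (u, v) to (0, 0), (1, 0), (0, 1) and (1, 1) recovers the familiar member indices of such a superstructure.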

5.
This paper provides a characterisation of the degree of cross‐sectional dependence in a two‐dimensional array, {xit, i = 1,2,...,N; t = 1,2,...,T}, in terms of the rate at which the variance of the cross‐sectional average of the observed data varies with N. Under certain conditions this is equivalent to the rate at which the largest eigenvalue of the covariance matrix of xt = (x1t, x2t,...,xNt)′ rises with N. We represent the degree of cross‐sectional dependence by α, which we refer to as the ‘exponent of cross‐sectional dependence’, and define it in terms of the standard deviation of the simple cross‐sectional average of xit. We propose bias‐corrected estimators, derive their asymptotic properties for α > 1/2 and consider a number of extensions. We include a detailed Monte Carlo simulation study supporting the theoretical results. We also provide a number of empirical applications investigating the degree of inter‐linkages of real and financial variables in the global economy. Copyright © 2015 John Wiley & Sons, Ltd.
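The defining idea, reading α off the rate at which the variance of the cross-sectional average changes with N, can be sketched as follows. The one-factor DGP and the crude log-variance regression (slope approximately 2α − 2) are illustrative assumptions, not the paper's bias-corrected estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
Ns = [50, 100, 200, 400, 800]

def implied_alpha(loadings_fn):
    """Regress log Var(cross-sectional mean of x_it) on log N; under
    Var ~ N^(2*alpha - 2) the slope is 2*alpha - 2, so alpha = 1 + slope/2."""
    log_var = []
    for N in Ns:
        f = rng.normal(size=T)               # common factor
        lam = loadings_fn(N)                 # factor loadings
        e = rng.normal(size=(T, N))          # idiosyncratic errors
        x = np.outer(f, lam) + e
        log_var.append(np.log(x.mean(axis=1).var()))
    slope = np.polyfit(np.log(Ns), log_var, 1)[0]
    return 1.0 + slope / 2.0

alpha_strong = implied_alpha(lambda N: np.ones(N))    # every unit loads: alpha near 1
alpha_weak = implied_alpha(lambda N: np.zeros(N))     # pure idiosyncratic: alpha near 1/2
```

The two extremes bracket the interesting range: strong dependence keeps the variance of the average bounded away from zero, while independence makes it decay at rate 1/N.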

6.
EuroMInd‐ is a density estimate of monthly gross domestic product (GDP) constructed according to a bottom‐up approach, pooling the density estimates of 11 GDP components by output and expenditure type. The components' density estimates are obtained from a medium‐size dynamic factor model handling mixed frequencies of observation and ragged‐edge data structures. They reflect both parameter and filtering uncertainty, and are obtained by implementing a bootstrap algorithm for simulating from the distribution of the maximum likelihood estimators of the model parameters, and conditional simulation filters for simulating from the predictive distribution of GDP. Both algorithms process the data sequentially as they become available in real time. The GDP density estimates for the output and expenditure approaches are combined using alternative weighting schemes and evaluated with different tests based on the probability integral transform and by applying scoring rules. Copyright © 2016 John Wiley & Sons, Ltd.
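The probability-integral-transform (PIT) evaluation mentioned at the end can be sketched in a few lines; the standard-normal outcomes and forecast densities are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(9)

# Outcomes and (correctly specified) predictive densities: both standard normal here.
outcomes = rng.normal(size=300)
pit = np.array([0.5 * (1 + erf(y / sqrt(2))) for y in outcomes])  # PIT: F(y) under the forecast

# Under a well-calibrated density forecast the PITs are i.i.d. Uniform(0, 1);
# a simple KS-type distance from the uniform CDF summarizes calibration.
ks = float(np.max(np.abs(np.sort(pit) - np.arange(1, 301) / 300)))
```

A large distance (or a visibly non-flat PIT histogram) would indicate a miscalibrated density forecast.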

7.
Employee ownership has been an area of significant practitioner and academic interest for the past four decades. Yet empirical results on the relationship between employee ownership and firm performance remain mixed. To aggregate findings and provide potential direction for future theoretical development, we conducted a meta‐analysis of 102 samples representing 56,984 firms. Employee ownership has a small, but positive and statistically significant, relation to firm performance (effect size of 0.04). The effect is generally positive for studies with different sampling designs (samples assessing change in performance before and after adoption of employee ownership, or samples of firms with employee ownership), different performance operationalisations (efficiency or growth) and firm types (publicly held or privately held). Suggesting benefits of employee ownership in a variety of contexts, we found no differences in effects on performance between publicly held and privately held firms, between stock and stock option‐based ownership plans, or across different firm sizes (i.e. number of employees). We do find that the effect of employee ownership on performance has increased in studies over time, and that studies with samples from outside the USA report stronger effects than those from within it. We also find little to no evidence of publication bias.

8.
Statistica Neerlandica, 2018, 72(2): 126–156
In this paper, we study the application of Le Cam's one‐step method to parameter estimation in ordinary differential equation models. This computationally simple technique can serve as an alternative to numerical evaluation of the popular non‐linear least squares estimator, which typically requires a multistep iterative algorithm and repeated numerical integration of the ordinary differential equation system. The one‐step method starts from a preliminary √n‐consistent estimator of the parameter of interest and turns it into an asymptotic (as the sample size n → ∞) equivalent of the least squares estimator through a numerically straightforward procedure. We demonstrate the performance of the one‐step estimator via extensive simulations and real data examples. The method enables the researcher to obtain both point and interval estimates. The preliminary √n‐consistent estimator that we use depends on non‐parametric smoothing, and we provide a data‐driven methodology for choosing its tuning parameter, supported by theory. An easy implementation scheme of the one‐step method for practical use is pointed out.
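Outside the ODE setting, the one-step recipe itself is easy to demonstrate. Below is a sketch for the Cauchy location model, where the median serves as the √n-consistent pilot and a single Newton step with the expected Fisher information (1/2 per observation) is taken; the model choice is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
theta0 = 2.0
x = theta0 + rng.standard_cauchy(size=5000)

def one_step_cauchy(x):
    """Le Cam one-step: start from a sqrt(n)-consistent pilot (the median),
    then take one Newton step on the Cauchy log-likelihood."""
    t = np.median(x)                          # preliminary estimator
    r = x - t
    score = np.sum(2 * r / (1 + r**2))        # d/dtheta of the log-likelihood at t
    fisher = len(x) / 2.0                     # expected information: n * (1/2)
    return float(t + score / fisher)

theta_hat = one_step_cauchy(x)
```

One Newton step suffices to upgrade the pilot to (asymptotic) efficiency, which is exactly the appeal of the method when full optimization is expensive.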

9.
This study extends the rate-of-convergence theorem for M‐estimators presented by van der Vaart and Wellner (Weak Convergence and Empirical Processes: With Applications to Statistics, Springer‐Verlag, New York, 1996), strengthening a result of the form r_n d(θ̂_n, θ_0) = O_P(1) to a moment bound of the form sup_n E[r_n d(θ̂_n, θ_0)]^p < ∞, for any p ≥ 1. This result is useful for deriving moment convergence of the rescaled residual. An application to maximum likelihood estimators is discussed.

10.
This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor‐augmented model approximations, but no formal model selection procedures have been developed. The main difference from previous factor‐augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our main contribution is to show the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor‐augmented model selection criteria are inconsistent in circumstances where N grows sufficiently fast relative to T, where N and T are the cross‐section and time series dimensions of the dataset respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long‐horizon exchange rate forecasting using a recently proposed model with country‐specific idiosyncratic components from a panel of global exchange rates.

11.
In this article, we study a new class of semiparametric instrumental variables models, in which the structural function has a partially varying coefficient functional form. Under this specification, the model is linear in the endogenous/exogenous components, with unknown constant or functional coefficients. As a result, the ill‐posed inverse problem arising in a general non‐parametric model with continuous endogenous variables can be avoided. We propose a three‐step estimation procedure for estimating both constant and functional coefficients and establish their asymptotic properties, such as consistency and asymptotic normality. We develop consistent estimators for their error variances. We demonstrate that the constant coefficient estimators achieve the optimal √n‐convergence rate, and that the functional coefficient estimators enjoy the oracle property. In addition, the efficiency of the parameter estimation is discussed and a simple efficient estimator is proposed. The proposed procedure is illustrated via a Monte Carlo simulation and an application to returns to education.

12.
We propose new summary statistics for intensity‐reweighted moment stationary point processes, that is, point processes with translation invariant n‐point correlation functions for all n, that generalise the well‐known J‐, empty space, and spherical Palm contact distribution functions. We represent these statistics in terms of generating functionals and relate the inhomogeneous J‐function to the inhomogeneous reduced second moment function. Extensions to space–time and marked point processes are briefly discussed.

13.
Gumbel’s Identity equates the Bonferroni sum with the k‐th binomial moment of the number Mn of events which occur, out of n arbitrary events. We provide a unified treatment of familiar probability bounds on a union of events due to Bonferroni, Galambos–Rényi, Dawson–Sankoff, and Chung–Erdős, as well as the less familiar bounds of Fréchet and Gumbel, all of which are expressed in terms of Bonferroni sums, by showing that they all arise as bounds in a more general setting in terms of binomial moments of a general non‐negative integer‐valued random variable. Use of Gumbel’s Identity then gives the inequalities in familiar Bonferroni sum form. This approach simplifies existing proofs. It also allows generalization of the results of Fréchet and Gumbel to give bounds on the probability that at least t of n events occur, for any 1 ≤ t ≤ n. A further consequence of the approach is an improvement of a recent bound of Petrov, which itself generalizes the Chung–Erdős bound.
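Two of the bounds discussed can be checked numerically from the first two Bonferroni sums S1 and S2; the random events on a small finite sample space below are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n_events, n_omega = 5, 60
A = rng.random((n_events, n_omega)) < 0.3     # events as indicator rows, uniform P

p_union = float(np.mean(A.any(axis=0)))
S1 = float(sum(np.mean(A[i]) for i in range(n_events)))
S2 = float(sum(np.mean(A[i] & A[j]) for i, j in combinations(range(n_events), 2)))

# Dawson-Sankoff lower bound on P(union), with k = 1 + floor(2*S2/S1)
k = 1 + int(2 * S2 // S1)
dawson_sankoff = 2 * S1 / (k + 1) - 2 * S2 / (k * (k + 1))

# Chung-Erdos lower bound: S1^2 / (S1 + 2*S2), from Cauchy-Schwarz
chung_erdos = S1**2 / (S1 + 2 * S2)
```

Both are valid lower bounds for any probability measure, so they must not exceed the exact union probability computed on the same sample space.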

14.
We consider the estimation of the conditional mode function when the covariates take values in some abstract function space. The main goal of this paper is to establish the almost complete convergence and the asymptotic normality of the kernel estimator of the conditional mode when the process is assumed to be strongly mixing and under a concentration property on the functional regressors. Some applications are given: in time‐series analysis, the approach can be applied to prediction and to the construction of confidence bands. We illustrate our methodology using El Niño data.
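A scalar-covariate sketch of the kernel conditional mode estimator may be useful (the paper's functional-covariate setting, bandwidth theory, and El Niño data are not reproduced; the DGP below is an assumption):

```python
import numpy as np

def conditional_mode(x0, X, Y, h=0.3, n_grid=400):
    """Kernel estimate of the conditional mode: the argmax over y of a
    kernel estimate of the conditional density of Y given X = x0."""
    grid = np.linspace(Y.min(), Y.max(), n_grid)
    wx = np.exp(-0.5 * ((X - x0) / h) ** 2)          # Gaussian kernel weights in x
    dens = [float((wx * np.exp(-0.5 * ((Y - y) / h) ** 2)).sum()) for y in grid]
    return float(grid[int(np.argmax(dens))])

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=3000)
Y = np.sin(X) + 0.2 * rng.normal(size=3000)          # conditional mode of Y|X=x is sin(x)

mode_at_1 = conditional_mode(1.0, X, Y)
```

In a prediction context the same object serves as a point forecast: the most likely value of the response given the observed regressor.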

15.
We consider the estimation of nonlinear models with mismeasured explanatory variables, when information on the marginal distribution of the true values of these variables is available. We derive a semi‐parametric MLE that is shown to be $\sqrt{n}$‐consistent and asymptotically normally distributed. In a simulation experiment we find that the finite sample distribution of the estimator is close to the asymptotic approximation. The semi‐parametric MLE is applied to a duration model for AFDC welfare spells with misreported welfare benefits. The marginal distribution of the correctly measured welfare benefits is obtained from an administrative source. Copyright © 2010 John Wiley & Sons, Ltd.

16.
Univariate continuous distributions have three possible types of support, exemplified by the whole real line (−∞, ∞), the semi‐finite interval (0, ∞) and the bounded interval (0, 1). This paper is about connecting distributions on these supports via ‘natural’ simple transformations in such a way that tail properties are preserved. In particular, this work is focussed on the case where the tails (at ±∞) of densities are heavy, decreasing as a (negative) power of their argument; connections are then especially elegant. At boundaries (0 and 1), densities behave conformably, with a directly related dependence on power of argument. The transformation from (0, 1) to (0, ∞) is the standard odds transformation. The transformation from (0, ∞) to (−∞, ∞) is a novel identity‐minus‐reciprocal transformation. The main points of contact with existing distributions are with the transformations involved in the Birnbaum–Saunders distribution and, especially, the Johnson family of distributions. Relationships between various other existing and newly proposed distributions are explored.
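The two transformations are simple enough to state in code; the inverse of the identity-minus-reciprocal map, x = (y + sqrt(y^2 + 4))/2, is included as a check:

```python
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(1e-6, 1 - 1e-6, size=10000)   # a sample on (0, 1)

odds = u / (1 - u)                            # (0, 1) -> (0, inf): the odds transform
real_line = odds - 1 / odds                   # (0, inf) -> (-inf, inf): identity minus reciprocal

# the inverse map: the positive root of x^2 - y*x - 1 = 0
back = (real_line + np.sqrt(real_line**2 + 4)) / 2
```

Composing the two maps carries a distribution on (0, 1) all the way to the real line; both maps behave like the identity (up to sign) far from the boundaries, which is what preserves power-law tail behaviour.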

17.
We consider Grenander‐type estimators for a monotone function λ, obtained as the slope of a concave (convex) estimate of the primitive of λ. Our main result is a central limit theorem for the Hellinger loss, which applies to estimation of a probability density, a regression function or a failure rate. In the case of density estimation, the limiting variance of the Hellinger loss turns out to be independent of λ.
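For the density case, a sketch of the Grenander estimator itself, the left derivative of the least concave majorant of the empirical CDF, may be useful; the exponential sample is an illustrative assumption:

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a non-increasing density on [0, inf):
    slopes of the least concave majorant (LCM) of the empirical CDF."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    pts = [(0.0, 0.0)]                          # vertices of the LCM
    for i in range(n):
        pts.append((x[i], (i + 1) / n))
        # restore concavity: segment slopes must be non-increasing
        while len(pts) >= 3:
            (x0, y0), (x1, y1), (x2, y2) = pts[-3], pts[-2], pts[-1]
            if (y1 - y0) * (x2 - x1) <= (y2 - y1) * (x1 - x0):
                pts.pop(-2)                     # middle vertex lies below the chord
            else:
                break
    knots = np.array(pts)
    slopes = np.diff(knots[:, 1]) / np.diff(knots[:, 0])
    return knots[:, 0], slopes                  # density equals each slope between knots

rng = np.random.default_rng(7)
sample = rng.exponential(size=2000)
knots, dens = grenander(sample)
```

The output is a non-increasing step function whose integral is exactly one, the shape-constrained analogue of a histogram.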

18.
Single‐index models are popular regression models that are more flexible than linear models and still maintain more structure than purely nonparametric models. We consider the problem of estimating the regression parameters under a monotonicity constraint on the unknown link function. In contrast to the standard approach of using smoothing techniques, we review different “non‐smooth” estimators that avoid the difficult smoothing parameter selection. For about 30 years, it has been conjectured that the profile least squares estimator is a √n‐consistent estimator of the regression parameter, but the only non‐smooth argmin/argmax estimators actually known to achieve this √n‐rate are not based on the nonparametric least squares estimator of the link function. However, solving a score equation corresponding to the least squares approach results in √n‐consistent estimators. We illustrate the good behavior of the score approach via simulations. The connection with the binary choice and current status linear regression models is also discussed.

19.
We investigate the prevalence and sources of reporting errors in 30,993 hypothesis tests from 370 articles in three top economics journals. We define reporting errors as inconsistencies between the significance levels reported by means of eye‐catchers and p‐values calculated from reported statistical values, such as coefficients and standard errors. While 35.8% of the articles contain at least one reporting error, only 1.3% of the investigated hypothesis tests are afflicted by reporting errors. For strong reporting errors, in which either the eye‐catcher or the calculated p‐value signals statistical significance but the other does not, the error rate is 0.5% of the investigated hypothesis tests, corresponding to 21.6% of the articles having at least one strong reporting error. Our analysis suggests a bias in favor of errors for which eye‐catchers signal statistical significance but calculated p‐values do not. Survey responses from the respective authors, replications, and exploratory regression analyses indicate some solutions to mitigate the prevalence of reporting errors in future research.
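The recomputation underlying the error definition can be sketched as follows; the star-to-level convention is an assumed one (conventions vary across journals), and the figures passed in are hypothetical:

```python
from math import erf, sqrt

def two_sided_p(z):
    """p-value of a two-sided z-test."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# assumed star convention: *** for 1%, ** for 5%, * for 10%
levels = {"***": 0.01, "**": 0.05, "*": 0.10}

def strong_error(coef, se, stars):
    """True when the eye-catcher and the p-value recomputed from the reported
    coefficient and standard error disagree about statistical significance."""
    p = two_sided_p(coef / se)
    if stars:
        return p >= levels[stars]          # stars claim a significance the p-value denies
    return p < min(levels.values())        # no stars, yet the p-value is clearly significant

ok = strong_error(coef=0.50, se=0.20, stars="**")    # z = 2.5, p well below 0.05: consistent
bad = strong_error(coef=0.10, se=0.20, stars="**")   # z = 0.5, p far above 0.05: strong error
```

Applied to every reported coefficient/SE pair in an article, such a check mechanizes the consistency audit the abstract describes.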

20.
The focus of this article is modeling the magnitude and duration of monotone periods of log‐returns. To this end, we propose a new bivariate law, assuming that the probabilistic framework over the magnitude and duration is based on the joint distribution of (X,N), where N is geometrically distributed and X is the sum of an identically distributed sequence of inverse‐Gaussian random variables independent of N. In this sense, X and N represent the magnitude and duration of the log‐returns, respectively, and the magnitude comes from an infinite mixture of inverse‐Gaussian distributions. This new model is named the bivariate inverse‐Gaussian geometric law. We provide statistical properties of the model and explore stochastic representations. In particular, we show that the proposed law is infinitely divisible, and with this, an induced Lévy process is proposed and studied in some detail. Estimation of the parameters is performed via maximum likelihood, and Fisher's information matrix is obtained. An empirical illustration using the log‐returns of Tyco International stock demonstrates the superior performance of the proposed law compared to an existing model. We expect that the proposed law can be considered a powerful tool in the modeling of log‐returns and of other episode analyses, such as those arising in water resources management, risk assessment, and civil engineering projects.
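Simulating from a law of this (X, N) form is straightforward, which also makes its moments easy to check (E[X] = μ/q when N is Geometric(q) on {1, 2, ...}); the parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_big(q, mu, lam, size):
    """Draw (X, N) pairs: duration N ~ Geometric(q) on {1, 2, ...}, and magnitude
    X = sum of N i.i.d. inverse-Gaussian(mu, lam) variables, independent of N."""
    N = rng.geometric(q, size=size)
    X = np.array([rng.wald(mu, lam, size=n).sum() for n in N])
    return X, N

X, N = sample_big(q=0.3, mu=1.0, lam=2.0, size=20000)
mean_X = float(X.mean())          # should be close to E[X] = E[N] * mu = mu / q
```

In the run-length interpretation, each draw is one monotone episode: N is how many periods it lasts and X is the accumulated (absolute) log-return over it.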

