Similar documents
Found 20 similar documents (search time: 615 ms)
1.
This paper introduces some new elements to measure the skewness of a probability distribution, suggesting that a given distribution can have both positive and negative skewness, depending on the centred sub-interval of the support set being observed. A skewness function for positive reals is defined, from which a bivariate index of positive–negative skewness is obtained. Certain interesting properties of this new index are studied, and they are also obtained for some common discrete distributions. We show the advantages of their use as a complement to the information derived from traditional measures of skewness.
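The idea of skewness depending on the centred sub-interval being observed can be sketched empirically: compute the sample third standardized moment of only those observations falling within a window around a centre point, and compare narrow versus wide windows. This is an illustrative sample-based analogue only; the paper's skewness function for positive reals is defined analytically, and the function name and window parameterization here are my own.

```python
import numpy as np

def interval_skewness(x, center, half_width):
    """Sample skewness of the observations falling in the centred
    sub-interval [center - half_width, center + half_width].
    Illustrative sketch; not the paper's analytic skewness function."""
    sub = x[np.abs(x - center) <= half_width]
    if sub.size < 3:
        return float("nan")
    m, s = sub.mean(), sub.std()
    return float(np.mean(((sub - m) / s) ** 3))

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # right-skewed overall
narrow = interval_skewness(x, center=float(np.median(x)), half_width=1.0)
wide = interval_skewness(x, center=float(np.median(x)), half_width=10.0)
print(narrow, wide)
```

The wide window recovers the distribution's overall (positive) skewness, while narrow windows around different centres can show skewness of either sign, which is the phenomenon the bivariate positive–negative index is designed to summarize.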

2.
Understanding the effects of operational conditions and practices on productive efficiency can provide valuable economic and managerial insights. The conventional approach is to use a two-stage method where the efficiency estimates are regressed on contextual variables representing the operational conditions. The main problem of the two-stage approach is that it ignores the correlations between inputs and contextual variables. To address this shortcoming, we build on the recently developed regression interpretation of data envelopment analysis (DEA) to develop a new one-stage semi-nonparametric estimator that combines the nonparametric DEA-style frontier with a regression model of the contextual variables. The new method is referred to as stochastic semi-nonparametric envelopment of z variables data (StoNEZD). The StoNEZD estimator for the contextual variables is shown to be statistically consistent under less restrictive assumptions than those required by the two-stage DEA estimator. Further, the StoNEZD estimator is shown to be unbiased, asymptotically efficient, and asymptotically normally distributed, and to converge at the standard parametric rate of order n^(-1/2). Therefore, the conventional methods of statistical testing and confidence intervals apply for asymptotic inference. Finite sample performance of the proposed estimators is examined through Monte Carlo simulations.

3.
4.
This paper discusses, estimates and compares some microeconometric models for simultaneous discrete endogenous variables. The models are based on the assumption that observed endogenous variables represent the outcome of a static discrete game. I discuss models based on non-cooperative equilibrium concepts (Nash, Stackelberg), as well as models which presume Pareto optimality of observed outcomes. The models are estimated using data on the joint labor force participation decisions of husbands and wives in a sample of Dutch households.

5.
We consider estimation of panel data models with sample selection when the equation of interest contains endogenous explanatory variables as well as unobserved heterogeneity. Assuming that appropriate instruments are available, we propose several tests for selection bias and two estimation procedures that correct for selection in the presence of endogenous regressors. The tests are based on the fixed effects two-stage least squares estimator, thereby permitting arbitrary correlation between unobserved heterogeneity and explanatory variables. The first correction procedure is parametric and is valid under the assumption that the errors in the selection equation are normally distributed. The second procedure estimates the model parameters semiparametrically using series estimators. In the proposed testing and correction procedures, the error terms may be heterogeneously distributed and serially dependent in both selection and primary equations. Because these methods allow for a rather flexible structure of the error variance and do not impose any nonstandard assumptions on the conditional distributions of explanatory variables, they provide a useful alternative to the existing approaches presented in the literature.

6.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix as well as adjusting standard error estimates and the model chi-square statistic, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and the evaluation criteria, especially under small sample and skewed variable conditions, are discussed.

7.
This study investigates the prevalence of non-linear and multi-modal relationships between observed variables measuring the Growth-oriented Atmosphere. The sample (N = 726) represents employees of three vocational high schools in Finland. The first stage of the analysis showed that only 22% of all dependencies between variables were purely linear. In the second stage, two subsamples of the data were identified as linear and non-linear. Both bivariate correlations and confirmatory factor analysis (CFA) parameter estimates were found to be higher in the linear subsample. Results showed that some of the highest bivariate correlations in both subsamples were explained via a third variable in the non-linear Bayesian dependence modeling (BDM). Finally, the results of CFA and BDM led to different substantive interpretations in two out of four research questions concerning organizational growth.

8.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses the maximum likelihood method to estimate parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in independent variables. No estimator is uniquely best. Hence, in actual estimation the choice of estimator should depend on the sample size and type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the estimator used, given the type of measurement errors that characterize the data, would often help in interpreting the estimates. This paper shows how such a study can be performed.
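A minimal Monte Carlo of the kind described can be sketched in a stylized linear setting: compare an estimator that ignores measurement error in the regressor (plain OLS, which suffers attenuation bias) with an instrumental variables estimator. All design choices below (normal errors, noise scales, a single instrument) are illustrative assumptions, not the paper's labor supply specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_draw(n=500, beta=1.0):
    """One Monte Carlo replication: OLS of y on a mismeasured regressor
    versus IV using an error-free instrument. Stylized sketch only."""
    x = rng.normal(size=n)                    # true regressor
    z = x + rng.normal(scale=0.3, size=n)     # instrument: correlated with x,
                                              # independent of the measurement error
    y = beta * x + rng.normal(scale=0.5, size=n)
    x_obs = x + rng.normal(scale=0.5, size=n) # mismeasured regressor
    ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
    iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]
    return ols, iv

draws = np.array([one_draw() for _ in range(2000)])
ols_mean, iv_mean = draws[:, 0].mean(), draws[:, 1].mean()
print(ols_mean, iv_mean)  # OLS attenuated toward zero; IV close to 1
```

Averaging the replications makes the small-sample bias of each estimator visible, which is exactly the kind of evidence the abstract suggests should accompany actual estimates.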

9.
The present Monte Carlo study compares the estimates produced by maximum likelihood (ML) and asymptotically distribution-free (ADF) methods. The study extends prior research by investigating the combined effects of sample size, magnitude of correlation among observed indicators, number of indicators, magnitude of skewness and kurtosis, and proportion of indicators with non-normal distributions. Results indicate that both ML and ADF showed little bias in estimates of factor loadings under all conditions studied. As the number of indicators in the model increased, ADF produced greater negative bias in estimates of uniquenesses than ML. In addition, the bias in standard errors for both ML and ADF estimation increased in models with more indicators, and this effect was more pronounced for ADF than ML. Increases in skewness and kurtosis resulted in greater underestimation of standard errors; ML standard errors showed greater bias than ADF under conditions of non-normality, and ML chi-square statistics were also inflated. However, when only half the indicators departed from normality, the inflation in ML chi-square decreased.

10.
Journal of Econometrics, 2002, 106(2): 203-216
The coefficient matrix of a cointegrated first-order autoregression is estimated by reduced rank regression (RRR), depending on the larger canonical correlations and vectors of the first difference of the observed series and the lagged variables. In a suitable coordinate system the components of the least-squares (LS) estimator associated with the lagged nonstationary variables are of order 1/T, where T is the sample size, and are asymptotically functionals of a Brownian motion process; the components associated with the lagged stationary variables are of order T^(-1/2) and are asymptotically normal. The components of the RRR estimator associated with the stationary part are asymptotically the same as for the LS estimator. Some components of the RRR estimator associated with nonstationary regressors have zero error to order 1/T and the other components have a more concentrated distribution than the corresponding components of the LS estimator.
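The RRR construction from canonical correlations can be sketched directly: form the moment matrices of the first differences and the lagged levels, solve the associated eigenproblem in the lag space, and keep the directions belonging to the largest squared canonical correlations. This is a numerical sketch of the estimator the abstract describes, not its asymptotic analysis; the simulated bivariate system and all tuning values are my own illustration.

```python
import numpy as np

def rrr_coefficients(dY, Ylag, rank):
    """Reduced rank regression of dY on Ylag built from canonical
    correlations/vectors. Returns the rank-restricted coefficient matrix."""
    T = len(dY)
    S00 = dY.T @ dY / T
    S11 = Ylag.T @ Ylag / T
    S01 = dY.T @ Ylag / T
    # Squared canonical correlations solve this eigenproblem in the lag space
    M = np.linalg.solve(S11, S01.T) @ np.linalg.solve(S00, S01)
    eigvals, eigvecs = np.linalg.eig(M)
    keep = np.argsort(np.real(eigvals))[::-1][:rank]
    beta = np.real(eigvecs[:, keep])                         # lag-space directions
    alpha = S01 @ beta @ np.linalg.inv(beta.T @ S11 @ beta)  # loadings
    return alpha @ beta.T

# Simulate a bivariate cointegrated first-order autoregression
rng = np.random.default_rng(42)
Pi = np.array([[-0.2, 0.2], [0.2, -0.2]])  # rank 1: alpha=(-0.2, 0.2)', beta=(1, -1)'
T = 5000
y = np.zeros((T + 1, 2))
for t in range(T):
    y[t + 1] = y[t] + Pi @ y[t] + rng.normal(size=2)
Pi_hat = rrr_coefficients(np.diff(y, axis=0), y[:-1], rank=1)
print(Pi_hat)
```

Note that the product alpha @ beta.T is invariant to the arbitrary scaling of the eigenvectors, so no normalization of beta is needed for the coefficient matrix itself.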

11.
There are surveys that gather precise information on an outcome of interest, but measure continuous covariates by a discrete number of intervals, in which case the covariates are interval censored. For applications with a second independent dataset precisely measuring the covariates, but not the outcome, this paper introduces a semiparametrically efficient estimator for the coefficients in a linear regression model. The second sample serves to establish point identification. An empirical application investigating the relationship between income and body mass index illustrates the use of the estimator.

12.
We introduce a novel semi-parametric estimator of American option prices in discrete time. The specification is based on a parameterized stochastic discount factor and is nonparametric w.r.t. the historical dynamics of the Markovian state variables. The historical transition density estimator minimizes a distance built on the Kullback–Leibler divergence from a kernel transition density, subject to the no-arbitrage restrictions for a non-defaultable bond, the underlying asset and some American option prices. We use dynamic programming to make explicit the nonlinear restrictions on the Euclidean and functional parameters coming from option data. We study asymptotic and finite sample properties of the estimators.

13.
This paper considers joint estimation of long run equilibrium coefficients and parameters governing the short run dynamics of a fully parametric Gaussian cointegrated system formulated in continuous time. The model allows the stationary disturbances to be generated by a stochastic differential equation system and for the variables to be a mixture of stocks and flows. We derive a precise form for the exact discrete analogue of the continuous time model in triangular error correction form, which acts as the basis for frequency domain estimation of the unknown parameters using discrete time data. We formally establish the order of consistency and the asymptotic sampling properties of such an estimator. The estimator of the cointegrating parameters is shown to converge at the rate of the sample size to a mixed normal distribution, while that of the short run parameters converges at the rate of the square root of the sample size to a limiting normal distribution.

14.
Multivariate regression models for panel data
The paper examines the relationship between heterogeneity bias and strict exogeneity in a distributed lag regression of y on x. The relationship is very strong when x is continuous, weaker when x is discrete, and non-existent as the order of the distributed lag becomes infinite. The individual specific random variables introduce nonlinearity and heteroskedasticity; so the paper provides an appropriate framework for the estimation of multivariate linear predictors. Restrictions are imposed using a minimum distance estimator. It is generally more efficient than the conventional estimators such as quasi-maximum likelihood. There are computationally simple generalizations of two- and three-stage least squares that achieve this efficiency gain. Some of these ideas are illustrated using the sample of Young Men in the National Longitudinal Survey. The paper reports regressions on the leads and lags of variables measuring union coverage, SMSA, and region. The results indicate that the leads and lags could have been generated just by a random intercept. This gives some support for analysis of covariance type estimates; these estimates indicate a substantial heterogeneity bias in the union, SMSA, and region coefficients.

15.
We propose a simple estimator for nonlinear method of moment models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances and the measurement errors are independent of the latent variables and are independent of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case in many economic data sets. A simulation study demonstrates good finite sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.
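The flavor of using the Laplace distributional assumption to identify the error variance can be illustrated in a deliberately simplified setting. A Laplace error with variance v has fourth cumulant 3v^2, so if (purely for this illustration) the latent regressor is assumed normal, its fourth cumulant vanishes and the fourth cumulant of the observed regressor identifies v, which then corrects the attenuated regression slope. These are my own simplifying assumptions, not the paper's revised moment conditions for general nonlinear models.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 400_000
x = rng.normal(loc=1.0, scale=1.0, size=n)   # latent regressor (normal: assumption)
u = rng.laplace(scale=0.5, size=n)           # Laplace error, var = 2 * 0.5**2 = 0.5
x_obs = x + u                                # observed, mismeasured regressor
y = 2.0 * x + rng.normal(scale=0.3, size=n)  # outcome, true slope 2.0

# Fourth cumulant of x_obs; with normal x it equals the Laplace term 3 * var(u)**2
c = x_obs - x_obs.mean()
kappa4 = np.mean(c ** 4) - 3 * np.mean(c ** 2) ** 2
var_u_hat = np.sqrt(kappa4 / 3.0)            # identified error variance

# Corrected slope: remove the error variance from the denominator
beta_hat = np.cov(x_obs, y)[0, 1] / (np.var(x_obs) - var_u_hat)
print(var_u_hat, beta_hat)
```

The key point carried over from the abstract is that no validation data or double measurements enter anywhere; the Laplace shape assumption alone supplies the extra moment condition.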

16.
It is argued that univariate long memory estimates based on ex post data tend to underestimate the persistence of ex ante variables (and, hence, that of the ex post variables themselves) because of the presence of unanticipated shocks whose short-run volatility masks the degree of long-range dependence in the data. Empirical estimates of long-range dependence in the Fisher equation are shown to manifest this problem and lead to an apparent imbalance in the memory characteristics of the variables in the Fisher equation. Evidence in support of this typical underestimation is provided by results obtained with inflation forecast survey data and by direct calculation of the finite sample biases. To address the problem of bias, the paper introduces a bivariate exact Whittle (BEW) estimator that explicitly allows for the presence of short memory noise in the data. The new procedure enhances the empirical capacity to separate low-frequency behaviour from high-frequency fluctuations, and it produces estimates of long-range dependence that are much less biased when the data are contaminated by noise. Empirical estimates from the BEW method suggest that the three Fisher variables are integrated of the same order, with memory parameter in the range (0.75, 1). Since the integration orders are balanced, the ex ante real rate has the same degree of persistence as expected inflation, thereby furnishing evidence against the existence of a (fractional) cointegrating relation among the Fisher variables and, correspondingly, showing little support for a long-run form of the Fisher hypothesis. Copyright © 2004 John Wiley & Sons, Ltd.

17.
This paper replicates the Cornwell and Trumbull (1994) estimation of a crime model using panel data on 90 counties in North Carolina over the period 1981–1987. While the Between and Within estimates are replicated, the fixed effects 2SLS as well as the 2SLS estimates are not. In fact, the fixed effects 2SLS estimates turn out to be insignificant for all important deterrent variables as well as legal opportunity variables. We argue that the usual Hausman test, based on the difference between fixed effects and random effects, may lead to misleading inference when endogenous variables of the conventional simultaneous equation type are among the regressors. We estimate the model using random effects 2SLS and perform a Hausman test based on the difference between fixed effects 2SLS and random effects 2SLS. We cannot reject the consistency of the random effects 2SLS estimator and this estimator yields plausible and significant estimates of the crime model. This result should be tempered by the legitimacy of the chosen instruments. Copyright © 2006 John Wiley & Sons, Ltd.

18.
The nonnormal stable laws and Student t distributions are used to model the unconditional distribution of financial asset returns, as both models display heavy tails. The relevance of the two models is subject to debate because empirical estimates of the tail shape conditional on either model give conflicting signals. This stems from opposing bias terms. We exploit the biases to discriminate between the two distributions. A sign estimator for the second-order scale parameter strengthens our results. Tail estimates based on asset return data match the bias induced by finite-variance unconditional Student t data and the generalized autoregressive conditional heteroscedasticity process.
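The tail-shape estimates discussed here are typically Hill-type estimates, whose finite-sample bias differs in sign between stable and Student t data. As a baseline, the standard Hill estimator can be sketched as follows; the seed, sample size, and choice of k are illustrative, and the paper's bias decomposition and sign estimator are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def hill(x, k):
    """Hill estimator of the tail index alpha from the k largest
    absolute observations (the workhorse tail-shape estimator)."""
    order = np.sort(np.abs(x))[::-1]
    excess_logs = np.log(order[:k]) - np.log(order[k])
    return 1.0 / excess_logs.mean()

x = rng.standard_t(df=3, size=200_000)  # Student t(3): true tail index 3
alpha_hat = hill(x, k=2000)
print(alpha_hat)
```

Running this for Student t versus stable simulated data and comparing the direction of the deviation from the true index is the simplest way to see the "opposing bias terms" the abstract exploits.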

19.
Statistical tolerance intervals for discrete distributions are widely employed for assessing the magnitude of discrete characteristics of interest in applications like quality control, environmental monitoring, and the validation of medical devices. For such data problems, characterizing extreme counts or outliers is also of considerable interest. These applications typically use traditional discrete distributions, like the Poisson, binomial, and negative binomial. The discrete Pareto distribution is an alternative yet flexible model for count data that are heavily right-skewed. Our contribution is the development of statistical tolerance limits for the discrete Pareto distribution as a strategy for characterizing the extremeness of observed counts in the tail. We discuss the coverage probabilities of our procedure in the broader context of known coverage issues for statistical intervals for discrete distributions. We address this issue by applying a bootstrap calibration to the confidence level of the asymptotic confidence interval for the discrete Pareto distribution's parameter. We illustrate our procedure on a dataset involving cyst formation in mice kidneys.
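The bootstrap-calibration step can be sketched in a simpler setting: replace the fixed asymptotic critical value of a confidence bound with a quantile of parametrically bootstrapped studentized statistics. A Poisson mean is used below purely to keep the sketch short; the paper calibrates the interval for the discrete Pareto parameter, and all numerical choices here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def calibrated_upper_bound(x, level=0.95, B=4000):
    """Upper confidence bound for a Poisson mean with a bootstrap-t
    calibration of the asymptotic critical value. Illustrative analogue
    of calibrating the confidence level of an asymptotic interval."""
    n, lam = len(x), x.mean()
    boot = rng.poisson(lam, size=(B, n)).mean(axis=1)  # parametric bootstrap means
    t_stats = (lam - boot) / np.sqrt(boot / n)         # studentized statistics
    z_star = np.quantile(t_stats, level)               # calibrated critical value
    return lam + z_star * np.sqrt(lam / n)

x = rng.poisson(3.0, size=40)
bound = calibrated_upper_bound(x)
print(bound)
```

The calibrated critical value z_star replaces the standard normal quantile, which is how discreteness-induced coverage distortions are corrected before the bound feeds into a tolerance limit.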

20.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non-parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminate use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non-parametric maximum likelihood as two feasible methods.
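The second rung of the efficiency ladder is easy to see by simulation: for normal data, pooling into pools of size k leaves the mean estimator's efficiency intact but inflates the variance of the variance estimator by roughly a factor of k. The method-of-moments estimators and all sample sizes below are my own illustrative choices, not the paper's NPML or Gaussian estimation procedures.

```python
import numpy as np

rng = np.random.default_rng(5)

def variance_of_variance_estimators(n=10_000, k=5, reps=2000):
    """Monte Carlo comparison of the unpooled and pooled (pool size k)
    estimators of the variance for standard normal data."""
    unpooled = np.empty(reps)
    pooled = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)
        unpooled[r] = x.var(ddof=1)
        pool_means = x.reshape(n // k, k).mean(axis=1)  # only pool means observed
        pooled[r] = k * pool_means.var(ddof=1)          # unbiased for the variance
    return unpooled.var(), pooled.var()

v_unpooled, v_pooled = variance_of_variance_estimators()
print(v_pooled / v_unpooled)  # close to the pool size k = 5
```

With one more cumulant order (e.g. skewness) the same experiment would show a loss factor of roughly k^2, tracing out the ladder described in the abstract.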
