Similar Documents
20 similar documents found (search time: 31 ms)
1.
Count data are typically modelled with a Poisson distribution. In many cases, however, the dependent variable contains many zeros, so its mean no longer equals its variance and the Poisson model is unsuitable. We therefore suggest a hurdle generalized Poisson regression model. Furthermore, in such cases the response variable is right-censored at some large values. This paper introduces a censored hurdle generalized Poisson regression model for count data with many zeros. Estimation of the regression parameters by maximum likelihood is discussed and the goodness of fit of the regression model is examined. An example and a simulation illustrate the effects of right censoring on the parameter estimates and their standard errors.
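A minimal sketch of the log-likelihood behind such a model, using a plain Poisson count component in place of the generalized Poisson and treating counts at or above a known cutoff as right-censored (the function name and parameterisation are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import stats

def censored_hurdle_poisson_loglik(y, pi0, lam, cutoff):
    """Log-likelihood of a right-censored hurdle Poisson model (sketch).

    pi0 is the probability of a zero, lam the rate of the zero-truncated
    Poisson part, and counts at or above `cutoff` are right-censored.
    The paper's generalized Poisson component is replaced by a plain
    Poisson for simplicity.
    """
    y = np.asarray(y)
    trunc = 1.0 - stats.poisson.pmf(0, lam)  # P(Y > 0) under the Poisson
    ll = 0.0
    for yi in y:
        if yi == 0:
            ll += np.log(pi0)
        elif yi < cutoff:
            # zero-truncated Poisson contribution for an exact count
            ll += np.log(1 - pi0) + stats.poisson.logpmf(yi, lam) - np.log(trunc)
        else:
            # right-censored: only Y >= cutoff is known
            tail = stats.poisson.sf(cutoff - 1, lam) / trunc
            ll += np.log(1 - pi0) + np.log(tail)
    return ll
```

Maximising this over (pi0, lam) with a generic optimiser would mimic the maximum likelihood step discussed in the abstract.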

2.
We consider the Case 1 interval censoring approach for right-censored survival data. An important feature of the model is that right-censored event times are not observed exactly, but at some inspection times. The model covers as particular cases right-censored data, current status data, and life table survival data with a single inspection time. We discuss the nonparametric estimation approach and consider three nonparametric estimators for the survival function of failure time: maximum likelihood, pseudolikelihood, and the naïve estimator. We establish strong consistency of the estimators with the L1 rate of convergence. Simulation results confirm consistency of the estimators.
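For current status data, the NPMLE of the failure-time distribution reduces to an isotonic regression of the status indicators ordered by inspection time. A rough sketch with a small pool-adjacent-violators routine (function names and interface are illustrative):

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators: non-decreasing isotonic fit."""
    vals, wts, counts = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); counts.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wtot = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wtot
            wts[-2] = wtot
            counts[-2] += counts[-1]
            vals.pop(); wts.pop(); counts.pop()
    out = []
    for v, c in zip(vals, counts):
        out.extend([v] * c)
    return np.array(out)

def current_status_npmle(inspection_times, indicators):
    """NPMLE of the failure-time CDF at the sorted inspection times.

    For Case 1 interval censoring, the NPMLE of F is the isotonic
    regression of the status indicators ordered by inspection time.
    """
    t = np.asarray(inspection_times, float)
    order = np.argsort(t)
    delta = np.asarray(indicators, float)[order]
    return t[order], pava(delta, np.ones(len(delta)))
```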

3.
This paper considers estimation of censored panel-data models with individual-specific slope heterogeneity. The slope heterogeneity may be random (random slopes model) or related to covariates (correlated random slopes model). Maximum likelihood and censored least-absolute deviations estimators are proposed for both models. The estimators are simple to implement and, in the case of maximum likelihood, lead to straightforward estimation of partial effects. The rescaled bootstrap suggested by Andrews (Econometrica 2000; 68: 399–405) is used to deal with the possibility of variance parameters being equal to zero. The methodology is applied to an empirical study of Dutch household portfolio choice, where the outcome variable (portfolio share in safe assets) has corner solutions at zero and one. As predicted by economic theory, there is strong evidence of correlated random slopes for the age profiles, indicating a heterogeneous age profile of portfolio adjustment that varies significantly with other household characteristics. Copyright © 2013 John Wiley & Sons, Ltd.

4.
In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing distribution is discrete, resulting in a finite mixture model formulation. An EM algorithm for estimation is described, and the algorithm is applied to data on customer purchases of books offered through direct mail. Our model is compared empirically to a number of other approaches that deal with heterogeneity in Poisson regression models.
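The EM iteration for such a finite mixture can be sketched in a simplified, intercept-only form (the paper's model also lets the coefficients of explanatory variables vary across latent segments; the names here are illustrative):

```python
import numpy as np
from scipy import stats

def poisson_mixture_em(y, k, n_iter=200):
    """EM for a k-component finite mixture of Poisson distributions.

    Simplified sketch: only the Poisson mean varies across the latent
    segments, whereas the full model also lets regression coefficients
    vary.
    """
    y = np.asarray(y, dtype=float)
    # deterministic initialisation at spread-out sample quantiles
    lam = np.quantile(y, (np.arange(k) + 0.5) / k) + 0.01
    pi = np.full(k, 1.0 / k)  # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior segment probabilities for each observation
        logp = stats.poisson.logpmf(y[:, None], lam[None, :]) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted means and updated mixing proportions
        nk = resp.sum(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / nk
        pi = nk / len(y)
    return lam, pi
```

On well-separated data the segment means and proportions are recovered closely; the full regression version replaces the M-step mean update with weighted Poisson regressions.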

5.
Many estimation methods of truncated and censored regression models such as the maximum likelihood and symmetrically censored least squares (SCLS) are sensitive to outliers and data contamination, as we document. Therefore, we propose a semiparametric general trimmed estimator (GTE) of truncated and censored regression, which is highly robust but relatively imprecise. To improve its performance, we also propose data-adaptive and one-step trimmed estimators. We derive the robust and asymptotic properties of all proposed estimators and show that the one-step estimators (e.g., one-step SCLS) are as robust as GTE and are asymptotically equivalent to the original estimator (e.g., SCLS). The finite-sample properties of existing and proposed estimators are studied by means of Monte Carlo simulations.

6.
Capture–recapture methods aim to estimate the size of an elusive target population. Each member of the target population carries a count of identifications by some identifying mechanism: the number of times it has been identified during the observational period. Only positive counts are observed, so inference must be based on the observed count distribution. A widely used assumption for the count distribution is a Poisson mixture. If the mixing distribution can be described by an exponential density, the geometric distribution arises as the marginal. This note discusses population size estimation on the basis of the zero-truncated geometric (itself again a geometric distribution). In addition, population heterogeneity is considered for the geometric. Chao's estimator is developed for the mixture of geometric distributions and provides a lower-bound estimator which is valid under arbitrary mixing on the parameter of the geometric. However, Chao's estimator is also known for its relatively large variance (compared to the maximum likelihood estimator). Another estimator, based on a censored geometric likelihood, is suggested which uses the entire sample information but is less affected by model misspecification. Simulation studies illustrate that the proposed censored estimator offers a good compromise between the maximum likelihood estimator and Chao's estimator, balancing efficiency against bias.
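Chao's lower bound depends only on the observed frequency-of-frequencies; a minimal sketch (assuming `freqs[k]` counts population members identified exactly k+1 times, an illustrative interface):

```python
def chao_lower_bound(freqs):
    """Chao's lower-bound estimator of population size (sketch).

    freqs[k] is the number of units identified exactly k+1 times; only
    positive counts are observable.  The estimate adds f1^2 / (2 f2)
    unseen units to the n actually observed, a lower bound under
    arbitrary mixing.
    """
    n = sum(freqs)
    f1 = freqs[0]
    f2 = freqs[1] if len(freqs) > 1 else 0
    if f2 == 0:
        # bias-corrected variant commonly used when no doubletons occur
        return n + f1 * (f1 - 1) / 2.0
    return n + f1 * f1 / (2.0 * f2)
```

For example, with 10 singletons, 5 doubletons and 3 units seen three times (n = 18), the estimate is 18 + 100/10 = 28.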

7.
Small area estimation typically requires model-based methods that depend on isolating the contribution to overall population heterogeneity associated with group (i.e. small area) membership. One way of doing this is via random effects models with latent group effects. Alternatively, one can use an M-quantile ensemble model that assigns indices to sampled individuals characterising their contribution to overall sample heterogeneity. These indices are then aggregated to form group effects. The aim of this article is to contrast these two approaches to characterising group effects and to illustrate them in the context of small area estimation. In doing so, we consider a range of different data types, including continuous data, count data and binary response data.

8.
Panel data models with interactive effects are widely applicable in empirical analyses of social and economic problems, but existing research has focused mainly on linear panel models. This paper introduces interactive effects into the nonlinear censored panel model and, based on the ECM algorithm, develops an efficient estimator and an identification procedure. Simulation experiments with different factor types show that the ECM algorithm identifies the unobserved factors in censored panel samples well. The ECM estimator has good finite-sample properties, with smaller bias and faster convergence than competing estimators, and performs best when the common factors are low-frequency smooth factors.

9.
In this paper, we study model selection and model averaging for quantile regression with randomly right censored response. We consider a semi-parametric censored quantile regression model without distribution assumptions. Under general conditions, a focused information criterion and a frequentist model averaging estimator are proposed, and theoretical properties of the proposed methods are established. The performances of the procedures are illustrated by extensive simulations and the primary biliary cirrhosis data.

10.
In the current paper, we propose a new utility-consistent modeling framework to explicitly link a count data model with an event-type multinomial-choice model. The proposed framework uses a multinomial probit kernel for the event-type choice model and introduces unobserved heterogeneity in both the count and discrete-choice components. Additionally, this paper establishes important new results regarding the distribution of the maximum of multivariate normally distributed variables, which form the basis to embed the multinomial probit model within a joint modeling system for multivariate count data. The model is applied to analyzing out-of-home non-work episodes pursued by workers, using data from the National Household Travel Survey. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Several authors have proposed stochastic and non-stochastic approximations to the maximum likelihood estimate (MLE) for Gibbs point processes in modelling spatial point patterns with pairwise interactions. The approximations are necessary because of the difficulty of evaluating the normalizing constant. In this paper, we first provide a review of methods which yield crude approximations to the MLE. We also review methods based on Markov chain Monte Carlo techniques for which exact MLE has become feasible. We then present a comparative simulation study of the performance of such methods of estimation based on two simulation techniques, the Gibbs sampler and the Metropolis-Hastings algorithm, carried out for the Strauss model.
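A toy birth-death Metropolis-Hastings sampler for the Strauss model on the unit square illustrates the simulation side (an illustrative sketch under standard acceptance ratios, not a production sampler; parameter names are assumptions):

```python
import numpy as np

def strauss_mh(beta, gamma, r, n_steps=3000, seed=0):
    """Birth-death Metropolis-Hastings sampler for a Strauss process.

    Samples a point pattern on the unit square with unnormalised
    density beta**n * gamma**s, where s counts point pairs closer
    than r.  Simplified sketch for illustration only.
    """
    rng = np.random.default_rng(seed)
    pts = []

    def n_close(u, others):
        if not others:
            return 0
        d = np.linalg.norm(np.array(others) - u, axis=1)
        return int(np.sum(d < r))

    for _ in range(n_steps):
        if rng.random() < 0.5 or not pts:
            # birth: propose adding a uniform point
            u = rng.random(2)
            t = n_close(u, pts)
            if rng.random() < min(1.0, beta * gamma**t / (len(pts) + 1)):
                pts.append(u)
        else:
            # death: propose removing a uniformly chosen point
            i = rng.integers(len(pts))
            rest = pts[:i] + pts[i + 1:]
            t = n_close(pts[i], rest)
            if rng.random() < min(1.0, len(pts) / (beta * gamma**t)):
                pts = rest
    return np.array(pts)
```

With gamma = 1 the chain targets a Poisson process; gamma < 1 induces the inhibition characteristic of the Strauss model.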

12.
A model is proposed to describe observed asymmetries in postwar unemployment time series data. We assume that recession periods, when unemployment increases rapidly, correspond with unobserved positive shocks. The generating mechanism of these latent shocks is a censored regression model, where linear combinations of lagged explanatory variables lead to positive shocks, while otherwise shocks are equal to zero. We apply this censored latent effects autoregression to monthly US unemployment, where the positive shocks are found to be predictable using various leading indicators. The model fits the data well and its out-of-sample forecasts appear to improve on those from alternative models. Copyright © 2002 John Wiley & Sons, Ltd.

13.
Issuing employee stock options (ESOs) transfers equity claims from current stockholders to employees, and thereby dilutes existing shareholder interests. Because employees are motivated to exert additional effort toward better performance, the value of the transferred ownership claims, proxied by ESO expense, represents a cost of generating firm value. There are several econometric issues, most notably that the disclosed ESO expense is an endogenous variable. Without controlling for this simultaneity problem, inferences based on results from OLS analyses may be misleading. More importantly, a considerable amount of ESO expense data is censored at zero. Such censoring can make the population distribution severely skewed, resulting in estimation bias, so the censored-data issue must also be taken into account. No prior studies have considered these two issues simultaneously, and failure to control for both the censoring problem and endogeneity could explain the inconsistent results documented in prior studies. In this paper, we use the two-stage quantile regression (QR) proposed by Amemiya (1982) and Powell (1983) to examine possible nonlinear relationships, especially whether conditionally higher-share-price (better-performing) firms show a stronger negative pricing effect of ESO expense (that is, the relation between ESO expense and share price) than conditionally lower-share-price firms. Our results suggest that the linear regression model greatly underestimates this negative pricing effect at higher quantiles, so the nonlinear relationship is obscured when using the standard linear model. We also consider alternative interpretations of why heterogeneity exists in the pricing effect of ESO expense and assess whether our results concur with these explanations.

14.
This paper presents a model for the heterogeneity and dynamics of the conditional mean and conditional variance of individual wages. A bias-corrected likelihood approach, which reduces the estimation bias to a term of order 1/T², is used for estimation and inference. The small-sample performance of the proposed estimator is investigated in a Monte Carlo study. The simulation results show that the bias of the maximum likelihood estimator is substantially corrected for designs calibrated to the data used in the empirical analysis, drawn from the PSID. The empirical results show that it is important to account for individual unobserved heterogeneity and dynamics in the variance, and that the latter is driven by job mobility. The model also explains the non-normality observed in log-wage data. Copyright © 2010 John Wiley & Sons, Ltd.

15.
We present a class of tests for exponentiality against IFRA alternatives; the class of tests of Deshpande (1983) is a subclass of ours. We also treat the same problem when the data are randomly right-censored. The results of an asymptotic relative efficiency comparison indicate the superiority of our tests. This research was supported by an NSERC Canada operating grant at the University of Alberta.

16.
We consider kernel-smoothed Grenander-type estimators for a monotone hazard rate and a monotone density in the presence of randomly right-censored data. We show that they converge at rate n^(2/5) and that the limit distribution at a fixed point is Gaussian with explicitly given mean and variance. It is well known that standard kernel smoothing leads to inconsistency problems at the boundary points. It turns out that, even with a boundary correction, we can only establish uniform consistency on intervals that stay away from the end point of the support (although we can get arbitrarily close to the right boundary).

17.
We consider estimation and testing of linkage equilibrium from genotypic data on a random sample of sibs, such as monozygotic and dizygotic twins. We compute the maximum likelihood estimator with an EM-algorithm and a likelihood ratio statistic that takes the family structure into account. As we are interested in applying this to twin data, we also allow observations on single children, so that monozygotic twins can be included. We allow a non-zero recombination fraction between the loci of interest, so that linkage disequilibrium between both linked and unlinked loci can be tested. The EM-algorithm for computing the maximum likelihood estimator of the haplotype frequencies and the likelihood ratio test statistic are described in detail. It is shown that the usual estimators of haplotype frequencies, which ignore that the sibs are related, are inefficient, and we describe the likelihood ratio test for testing whether the loci are in linkage disequilibrium.

18.
We review generalized dynamic models for time series of count data. Usually temporal counts are modelled as following a Poisson distribution, and a transformation of the mean depends on parameters which evolve smoothly with time. We generalize the usual dynamic Poisson model by considering continuous mixtures of the Poisson distribution. We consider Poisson-gamma and Poisson-log-normal mixture models. These models have a parameter for each time t which captures possible extra-variation present in the data. If the time interval between observations is short, many observed zeros might result. We also propose zero inflated versions of the models mentioned above. In epidemiology, when a count is equal to zero, one does not know if the disease is present or not. Our model has a parameter which provides the probability of presence of the disease given no cases were observed. We rely on the Bayesian paradigm to obtain estimates of the parameters of interest, and discuss numerical methods to obtain samples from the resultant posterior distribution. We fit the proposed models to artificial data sets and also to a weekly time series of registered number of cases of dengue fever in a district of the city of Rio de Janeiro, Brazil, during 2001 and 2002.
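Since a Poisson-gamma mixture has a negative binomial marginal, the zero-inflated version has a simple closed-form pmf; a brief sketch (the parameterisation and names are illustrative, chosen to match scipy's negative binomial):

```python
import numpy as np
from scipy import stats

def zi_poisson_gamma_pmf(k, psi, mu, size):
    """Pmf of a zero-inflated Poisson-gamma (negative binomial) count.

    psi is the extra probability of a structural zero (e.g. disease
    absent), mu the negative binomial mean and size its dispersion
    parameter.  The Poisson-gamma mixture marginal is negative binomial,
    so zero inflation only modifies the mass at zero.
    """
    k = np.asarray(k)
    p = size / (size + mu)              # scipy's (n, p) parameterisation
    nb = stats.nbinom.pmf(k, size, p)   # negative binomial component
    return np.where(k == 0, psi + (1 - psi) * nb, (1 - psi) * nb)
```

The conditional probability that the disease is present given a zero count follows from Bayes' rule as (1 - psi) * nb(0) / (psi + (1 - psi) * nb(0)).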

19.
We consider a utility-consistent static labor supply model with flexible preferences and a nonlinear and possibly non-convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd.

20.
This paper proposes an alternative to maximum likelihood estimation of the parameters of the censored regression (or censored 'Tobit') model. The proposed estimator is a generalization of least absolute deviations estimation for the standard linear model, and, unlike estimation methods based on the assumption of normally distributed error terms, the estimator is consistent and asymptotically normal for a wide class of error distributions, and is also robust to heteroscedasticity. The paper gives the regularity conditions and proofs of these large-sample results, and proposes classes of consistent estimators of the asymptotic covariance matrix for both homoscedastic and heteroscedastic disturbances.
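The estimator minimises a censored absolute-deviations criterion; a rough numerical sketch using a generic derivative-free optimiser in place of the iterative linear-programming schemes usually applied to this problem (function and argument names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def clad(X, y, beta0):
    """Censored least absolute deviations estimator (sketch).

    Minimises sum |y_i - max(0, x_i'b)|, the criterion underlying the
    censored-regression LAD generalisation; Nelder-Mead stands in here
    for the specialised algorithms normally used on this non-smooth
    objective.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)

    def objective(b):
        return np.abs(y - np.maximum(0.0, X @ b)).sum()

    res = minimize(objective, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 5000})
    return res.x
```

Because the criterion uses the median rather than the mean, the fitted coefficients stay consistent under non-normal and heteroscedastic errors, in line with the abstract's claims.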


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号