Similar Documents
20 similar documents found (search time: 31 ms)
1.
Hedonic methods are a prominent approach in the construction of quality‐adjusted price indexes. This paper shows that the process of computing such indexes is substantially simplified if arithmetic (geometric) price indexes are computed based on exponential (log‐linear) hedonic functions estimated by the Poisson pseudo‐maximum likelihood (ordinary least squares) method. A Monte Carlo simulation study based on housing data illustrates the convenience of the links identified and the very attractive properties of the Poisson estimator in the hedonic framework.
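The Poisson pseudo-maximum likelihood estimator the abstract refers to can be fitted with a few lines of iteratively reweighted least squares. The sketch below is a hypothetical minimal implementation, not the authors' code; it is fitted on noise-free synthetic data so the exponential hedonic function is recovered exactly.

```python
import numpy as np

def poisson_pml(X, y, iters=50):
    """Poisson pseudo-maximum likelihood for E[y|X] = exp(X @ beta),
    via iteratively reweighted least squares (Newton-Raphson)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                 # current fitted means
        z = X @ beta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# noise-free "hedonic" data: price = exp(0.5 + 1.2 * size)
rng = np.random.default_rng(1)
size = rng.uniform(0, 1, 100)
X = np.column_stack([np.ones(100), size])
price = np.exp(X @ np.array([0.5, 1.2]))
beta_hat = poisson_pml(X, price)
```

Note that `price` is continuous, not a count: the PML property used by the paper is that the Poisson score equations remain valid for any positive response with the assumed exponential mean.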

2.
A Poisson model is typically assumed for count data. In many cases, however, the dependent variable contains excess zeros, so its mean no longer equals its variance and the Poisson model is no longer suitable. We therefore suggest a hurdle generalized Poisson regression model. Furthermore, the response variable in such cases is often right-censored because of some very large values. This paper introduces a censored hurdle generalized Poisson regression model for count data with many zeros. Estimation of the regression parameters by maximum likelihood is discussed, and the goodness of fit of the regression model is examined. An example and a simulation illustrate the effects of right censoring on the parameter estimates and their standard errors.
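As a toy illustration of the hurdle idea (the plain hurdle Poisson, not the generalized or censored model from the paper), the zeros can be modelled by a Bernoulli hurdle and the positive counts by a zero-truncated Poisson, whose mean equation lambda / (1 - exp(-lambda)) = mean is solved numerically:

```python
import math

def fit_hurdle_poisson(counts):
    """Toy hurdle model: a Bernoulli hurdle for the zeros plus a
    zero-truncated Poisson for the positive counts."""
    n = len(counts)
    pi0 = sum(1 for c in counts if c == 0) / n   # estimated P(y = 0)
    pos = [c for c in counts if c > 0]
    m = sum(pos) / len(pos)                      # mean of the positive counts
    # the truncated-Poisson mean lambda / (1 - exp(-lambda)) is increasing
    # in lambda, so solve lambda / (1 - exp(-lambda)) = m by bisection
    lo, hi = 1e-9, m
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid / (1 - math.exp(-mid)) < m:
            lo = mid
        else:
            hi = mid
    return pi0, (lo + hi) / 2

pi0, lam = fit_hurdle_poisson([0, 0, 0, 1, 2, 3, 4, 1])
```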

3.
The past forty years have seen a great deal of research into the construction and properties of nonparametric estimates of smooth functions. This research has focused primarily on two sides of the smoothing problem: nonparametric regression and density estimation. Theoretical results for these two situations are similar, and multivariate density estimation was an early justification for the Nadaraya-Watson kernel regression estimator.
A third, less well-explored, strand of applications of smoothing is to the estimation of probabilities in categorical data. In this paper the position of categorical data smoothing as a bridge between nonparametric regression and density estimation is explored. Nonparametric regression provides a paradigm for the construction of effective categorical smoothing estimates, and use of an appropriate likelihood function yields cell probability estimates with many desirable properties. Such estimates can be used to construct regression estimates when one or more of the categorical variables are viewed as response variables. They also lead naturally to the construction of well-behaved density estimates using local or penalized likelihood estimation, which can then be used in a regression context. Several real data sets are used to illustrate these points.
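A minimal version of categorical-data smoothing (an illustrative discrete kernel smoother, not the estimators studied in the paper) shrinks each cell's raw proportion toward its neighbours while keeping the estimates a proper probability vector:

```python
def smooth_cells(counts, h=0.5):
    """Discrete kernel smoothing of ordered-category cell probabilities:
    weight cell i's contribution to cell j by h**|j - i|, then normalise."""
    k = len(counts)
    sm = [sum(h ** abs(j - i) * counts[i] for i in range(k)) for j in range(k)]
    total = sum(sm)
    return [s / total for s in sm]

probs = smooth_cells([10, 0, 10])  # the empty middle cell borrows strength
```

The empty cell receives a positive smoothed probability, which is exactly the benefit of smoothing sparse contingency tables over the raw relative frequencies.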

4.
For Poisson inverse Gaussian regression models, it is very complicated to obtain influence measures by the traditional method, because the associated likelihood function involves intractable expressions such as the modified Bessel function. In this paper, the EM algorithm is employed as a basis to derive diagnostic measures for these models by treating them as mixed Poisson regressions with weights from the inverse Gaussian distribution. Several diagnostic measures are obtained for both case deletion and local influence analysis, based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. Two numerical examples illustrate the results.
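The mixed-Poisson representation that the EM approach exploits is easy to simulate: draw inverse-Gaussian mixing weights, then Poisson counts with those weights. A sketch under assumed parameter values, using NumPy's `wald` sampler for the inverse Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 200_000, 2.0
w = rng.wald(1.0, 1.0, size=n)   # inverse-Gaussian mixing weights, mean 1
y = rng.poisson(mu * w)          # Poisson-inverse-Gaussian counts
# the mixing inflates the variance: Var(y) = mu + mu**2 * Var(w) > E[y]
```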

5.
We introduce several new sports team rating models based on the gradient descent algorithm. More precisely, the models can be formulated by maximising the likelihood of the observed match results using a single step of this optimisation heuristic. The proposed framework is inspired by the prominent Elo rating system, and yields an iterative version of ordinal logistic regression, as well as several variants of Poisson regression-based models. This construction makes the update equations easy to interpret, and adjusts ratings as soon as new match results are observed, so it naturally handles temporal changes in team strength. Moreover, a study of association football data indicates that the new models yield more accurate forecasts and are less computationally demanding than corresponding methods that jointly optimise the likelihood for the whole set of matches.
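The single-gradient-step construction is easiest to see in the classic Elo case: one gradient-ascent step on the logistic (Bernoulli) match likelihood reproduces the familiar Elo update. This is only a sketch of that base case; the paper's ordinal-logistic and Poisson variants generalise it.

```python
import math

def elo_update(r_a, r_b, score_a, k=20.0):
    """One gradient-ascent step on the log-likelihood of a logistic
    (Bradley-Terry) match model; k plays the role of the learning rate."""
    p_a = 1.0 / (1.0 + math.exp(r_b - r_a))  # expected score for team A
    delta = k * (score_a - p_a)              # gradient of the log-likelihood
    return r_a + delta, r_b - delta

r_a, r_b = elo_update(0.0, 0.0, 1.0)  # equally rated teams, A wins
```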

6.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
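The link between the asymmetric Laplace likelihood and quantile regression is the check loss: maximising the asymmetric Laplace likelihood in its location parameter is the same as minimising the check loss, so the tau-th sample quantile is the intercept-only special case. A small sketch of that equivalence (not the paper's covariance adjustment):

```python
def check_loss(u, tau):
    """Quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0});
    its minimiser is the tau-th quantile, which is also the MLE of the
    asymmetric Laplace location parameter."""
    return u * (tau - (1 if u < 0 else 0))

data = [1, 2, 3, 4, 100]
# a sample quantile is always attained at a data point, so search over data
median = min(data, key=lambda q: sum(check_loss(y - q, 0.5) for y in data))
```

The outlier 100 leaves the minimiser at 3, illustrating why the check loss (and hence the working likelihood) targets quantiles rather than the mean.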

7.
L. Nie, Metrika, 2006, 63(2): 123-143
Generalized linear and nonlinear mixed-effects models are used extensively in biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, it is usually assumed that the maximum likelihood estimator is consistent, without providing a proof. A rigorous proof of the consistency by verifying conditions from existing results can be very difficult due to the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.
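The "integrated likelihood" that makes such consistency proofs hard is the marginal likelihood obtained by integrating the random effects out. For a random-intercept logistic model it can be approximated by Gauss-Hermite quadrature; the sketch below illustrates that object (not the paper's verifiable conditions):

```python
import numpy as np

def cluster_loglik(y, x, beta, sigma, n_nodes=30):
    """Marginal log-likelihood of one cluster in a random-intercept logistic
    model, integrating the N(0, sigma^2) intercept out by Gauss-Hermite
    quadrature (physicists' weights, hence the sqrt(pi) normalisation)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes            # quadrature points for b_i
    eta = b[:, None] + np.asarray(x) * beta     # linear predictor per node
    p = 1.0 / (1.0 + np.exp(-eta))
    lik = np.prod(np.where(np.asarray(y) == 1, p, 1.0 - p), axis=1)
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))
```

With sigma = 0 the integral collapses and the function reduces to the ordinary logistic log-likelihood, which gives a convenient sanity check.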

8.
Capture–recapture methods aim to estimate the size of an elusive target population. Each member of the target population carries a count of identifications by some identifying mechanism—the number of times it has been identified during the observational period. Only positive counts are observed and inference needs to be based on the observed count distribution. A widely used assumption for the count distribution is a Poisson mixture. If the mixing distribution can be described by an exponential density, the geometric distribution arises as the marginal. This note discusses population size estimation on the basis of the zero-truncated geometric (itself again a geometric). In addition, population heterogeneity is considered for the geometric. Chao's estimator is developed for the mixture of geometric distributions and provides a lower bound estimator which is valid under arbitrary mixing on the parameter of the geometric. However, Chao's estimator is also known for its relatively large variance compared with the maximum likelihood estimator. Another estimator based on a censored geometric likelihood is suggested which uses the entire sample information but is less affected by model misspecification. Simulation studies illustrate that the proposed censored estimator offers a good compromise between the maximum likelihood estimator and Chao's estimator, balancing efficiency and bias.
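For the homogeneous geometric the key quantities are simple enough to sketch: the zero-truncated geometric MLE gives p = 1/mean, a Horvitz-Thompson style size estimate n/(1 - p), and a Chao-type bound n + f1^2/f2 built from the singleton and doubleton frequencies (under a plain geometric, f0 = f1^2/f2 holds exactly). This toy version does not show the paper's censored estimator.

```python
def geometric_size_estimates(counts):
    """Population size from zero-truncated geometric counts (k = 1, 2, ...).
    MLE: p_hat = 1/mean, each unit observed w.p. 1 - p_hat, so
    N_hat = n / (1 - p_hat).  Chao-type bound: n + f1**2 / f2."""
    n = len(counts)
    p_hat = n / sum(counts)                    # 1 / sample mean
    n_mle = n / (1.0 - p_hat)
    f1, f2 = counts.count(1), counts.count(2)
    n_chao = n + f1 * f1 / f2 if f2 else float("inf")
    return n_mle, n_chao

n_mle, n_chao = geometric_size_estimates([1, 3, 2, 2])
```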

9.
Recent years have seen increased interest in penalized likelihood methodology, which can be used efficiently for shrinkage and selection and can yield unbiased, sparse, and continuous estimators. The performance of the penalized likelihood approach, however, depends on a proper choice of the regularization parameter, which is commonly made by generalized cross‐validation. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector for penalized generalized linear models. Unlike the standard methods, the proposed method does not depend on resampling and therefore gives a considerable gain in computational time while producing improved results. A thorough simulation study supports the theoretical findings, comparing the penalized methods with the L1, hard thresholding, and smoothly clipped absolute deviation penalty functions for penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.
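Generalized cross-validation, the baseline the authors improve upon, also avoids resampling: for a linear smoother with hat matrix H it scores each penalty by n * RSS / (n - tr H)^2. A hypothetical ridge-regression version (linear, rather than the penalized GLMs of the paper):

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Pick the ridge penalty by generalized cross-validation:
    GCV(lam) = n * RSS / (n - tr(H))**2, with hat matrix
    H = X (X'X + lam I)^{-1} X'.  No resampling is needed."""
    n, p = X.shape
    scores = {}
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - H @ y
        scores[lam] = n * float(resid @ resid) / (n - np.trace(H)) ** 2
    return min(scores, key=scores.get)

X = np.array([[1.0, 0], [0, 1], [1, 1], [2, 1]])
y = X @ np.array([1.0, 2.0])       # noise-free, so the tiny penalty wins
best_lam = gcv_ridge(X, y, [0.01, 1.0, 100.0])
```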

10.
This paper proposes a new approach to nonparametric stochastic frontier (SF) models based on local maximum likelihood techniques. The model is presented as encompassing an anchorage parametric model in a nonparametric way. First, we derive asymptotic properties of the estimator for the general case (local linear approximations). The results are then tailored to an SF model in which the convoluted error term (inefficiency plus noise) is the sum of a half-normal and a normal random variable. The parametric anchorage model is a linear production function with a homoscedastic error term. The local approximation is linear for both the production function and the parameters of the error terms. The performance of our estimator is established in finite samples using simulated data sets as well as a cross-sectional data set on US commercial banks. The methods appear to be robust, numerically stable and particularly useful for investigating a production process and the derived efficiency scores.
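The local approximations underlying the method are local weighted least squares fits. A plain local-linear smoother illustrates the building block (just the regression part, not the frontier decomposition); a useful property, checked in the test, is that a local-linear fit reproduces an exactly linear function regardless of the kernel weights:

```python
import math

def local_linear(x0, xs, ys, h):
    """Local-linear fit at x0 with a Gaussian kernel: weighted least squares
    of y on [1, x - x0]; the intercept is the fitted value at x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    sw = sum(w)
    sx = sum(wi * (x - x0) for wi, x in zip(w, xs))
    sxx = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxy = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    det = sw * sxx - sx * sx
    return (sxx * sy - sx * sxy) / det   # intercept = fitted value at x0

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2 * x + 1 for x in xs]             # exactly linear data
fit = local_linear(0.5, xs, ys, h=0.3)
```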

11.
The truncated Poisson regression model is used to arrive at point and interval estimates of the size of two offender populations, i.e. drunk drivers and persons who illegally possess firearms. The dependent capture–recapture variables are constructed from Dutch police records and are counts of individual arrests for both violations. The population size estimates are derived assuming that each count is a realization of a Poisson distribution, and that the Poisson parameters are related to covariates through the truncated Poisson regression model. These assumptions are discussed in detail, and the tenability of the second assumption is assessed by evaluating the marginal residuals and performing tests on overdispersion. For the firearms example, the second assumption seems to hold well, but for the drunk drivers example there is some overdispersion. It is concluded that the method is useful, provided it is used with care.
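Once the truncated Poisson regression has produced a rate for each observed offender, the population size follows from a Horvitz-Thompson argument: unit i is observed with probability 1 - exp(-lambda_i), so each observed unit is inverse-weighted. A sketch assuming the lambda_i have already been fitted:

```python
import math

def population_size(lambda_hats):
    """Horvitz-Thompson population size: each observed unit i enters the
    sample with probability 1 - P(Y_i = 0) = 1 - exp(-lambda_i)."""
    return sum(1.0 / (1.0 - math.exp(-lam)) for lam in lambda_hats)

# if every unit had lambda = ln 2, each is observed w.p. 1/2,
# so 10 observed units suggest about 20 units in total
n_hat = population_size([math.log(2.0)] * 10)
```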

12.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy good componentwise as well as ensemble properties. Research supported by NSF Grant Number MCS-8218091.

13.
Mixture regression models have been widely used in business, marketing and social sciences to model mixed regression relationships arising from a clustered and thus heterogeneous population. The unknown mixture regression parameters are usually estimated by maximum likelihood estimators using the expectation–maximisation algorithm based on the normality assumption of component error density. However, it is well known that the normality-based maximum likelihood estimation is very sensitive to outliers or heavy-tailed error distributions. This paper aims to give a selective overview of the recently proposed robust mixture regression methods and compare their performance using simulation studies.

14.
This paper investigates the limiting behaviour of the ‘maximum likelihood estimator’ (MLE) based on normality, as well as the nonlinear two-stage least squares estimator (NL2S), for the i.i.d. and regression models in which the Box-Cox transformation is applied to the dependent variable. Since the transformed variable cannot in general be normally distributed, the untransformed variable is assumed to have a two-parameter gamma distribution. Tables of probability limits and asymptotic variance demonstrate that, in this case, the inconsistency of the ‘normal MLE’ is often quite pronounced, while the NL2S is consistent and typically well behaved.
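For reference, the Box-Cox transformation of the dependent variable is the standard power family below, with the log as the lambda = 0 limit. Note that for lambda > 0 the transform is bounded below by -1/lambda, which is why the transformed variable cannot be exactly normal, the point the paper builds on.

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive response; log is the lam -> 0 limit."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam
```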

15.
Sonja Kuhnt, Metrika, 2010, 71(3): 281-294
Loglinear Poisson models are commonly used to analyse contingency tables. So far, the robustness of parameter estimators, as well as outlier detection, has rarely been treated in this context. We start with finite-sample breakdown points and show that the breakdown point of mean value estimators determines a lower bound for the masking breakdown point of a class of one-step outlier identification rules. Within a more refined breakdown approach, which takes account of the structure of the contingency table, a stochastic breakdown function is defined; it returns the probability that a given proportion of outliers is randomly placed on a pattern where breakdown is possible. Finally, the introduced breakdown concepts are applied to characterise the maximum likelihood estimator and a median-polish estimator.
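The median-polish estimator mentioned at the end fits an additive decomposition (applied to log counts in the loglinear setting) by sweeping out medians instead of means, which is the source of its resistance to outlying cells. A sketch of Tukey's median polish on a plain two-way table:

```python
from statistics import median

def median_polish(table, iters=10):
    """Tukey's median polish: fit overall + row + column effects by
    alternately subtracting row and column medians from the residuals."""
    res = [row[:] for row in table]
    nr, nc = len(res), len(res[0])
    overall, rows, cols = 0.0, [0.0] * nr, [0.0] * nc
    for _ in range(iters):
        for i in range(nr):                           # sweep row medians
            m = median(res[i])
            rows[i] += m
            res[i] = [v - m for v in res[i]]
        m = median(rows); overall += m; rows = [v - m for v in rows]
        for j in range(nc):                           # sweep column medians
            m = median(res[i][j] for i in range(nr))
            cols[j] += m
            for i in range(nr):
                res[i][j] -= m
        m = median(cols); overall += m; cols = [v - m for v in cols]
    return overall, rows, cols, res

# an exactly additive table is fitted with zero residuals
overall, rows, cols, res = median_polish([[0, 2, 4], [1, 3, 5]])
```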

16.
This paper studies likelihood-based estimation and inference in parametric discontinuous threshold regression models with i.i.d. data. The setup allows heteroskedasticity and threshold effects in both mean and variance. By interpreting the threshold point as a “middle” boundary of the threshold variable, we find that the Bayes estimator is asymptotically efficient among all estimators in the locally asymptotically minimax sense. In particular, the Bayes estimator of the threshold point is asymptotically strictly more efficient than the left-endpoint maximum likelihood estimator and the newly proposed middle-point maximum likelihood estimator. Algorithms are developed to calculate asymptotic distributions and risk for the estimators of the threshold point. The posterior interval is proved to be an asymptotically valid confidence interval and is attractive in both length and coverage in finite samples.
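The estimators being compared all profile over the threshold point. The simplest version, a toy least-squares stand-in for the likelihood-based estimators in the paper, scans candidate thresholds and keeps the best-fitting mean-shift split; taking the candidates to be the observed values of the threshold variable corresponds to the left-endpoint convention.

```python
def estimate_threshold(x, y, candidates):
    """Profile least squares over threshold candidates: split the sample at
    each candidate, fit a regime-specific mean, keep the best split."""
    best = None
    for t in candidates:
        lo = [yi for xi, yi in zip(x, y) if xi <= t]
        hi = [yi for xi, yi in zip(x, y) if xi > t]
        if not lo or not hi:
            continue
        m_lo, m_hi = sum(lo) / len(lo), sum(hi) / len(hi)
        ssr = (sum((v - m_lo) ** 2 for v in lo)
               + sum((v - m_hi) ** 2 for v in hi))
        if best is None or ssr < best[1]:
            best = (t, ssr)
    return best[0]

t_hat = estimate_threshold([1, 2, 3, 4, 5, 6],
                           [0, 0, 0, 5, 5, 5],
                           candidates=[1, 2, 3, 4, 5])
```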

17.
The model misspecification effects on the maximum likelihood estimator are studied when a biased sample is treated as a random one as well as when a random sample is treated as a biased one. The relation between the existence of a consistent estimator under model misspecification and the completeness of the distribution is also considered. The cases of the weight invariant distribution and the scale parameter distribution are examined and finally an example is presented to illustrate the results.
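A concrete instance of the biased-sampling misspecification is length (size) biased sampling, where a unit of size x is observed with probability proportional to x; the naive sample mean then estimates E[X^2]/E[X] rather than E[X]. A small illustration:

```python
def length_biased_mean(values, probs):
    """Mean under size-biased sampling: P*(x) is proportional to x * P(x),
    so E*[X] = E[X^2] / E[X], which exceeds E[X] whenever Var(X) > 0."""
    ex = sum(v * p for v, p in zip(values, probs))
    ex2 = sum(v * v * p for v, p in zip(values, probs))
    return ex2 / ex

biased = length_biased_mean([1, 3], [0.5, 0.5])  # true mean is 2
```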

18.
Estimation and testing for a Poisson autoregressive model
Fukang Zhu, Dehui Wang, Metrika, 2011, 73(2): 211-230
This article considers statistical inference for a Poisson autoregressive model. A condition for ergodicity and a necessary and sufficient condition for the existence of moments are given. Asymptotics are established for the maximum likelihood estimator and for weighted least squares estimators of the parameters, with either estimated or known weights. Tests for conditional heteroscedasticity and tests of the parameters under a simple order restriction are discussed. A simulation study is also given.
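A Poisson autoregression of the INGARCH type is straightforward to simulate, and its stationary mean d/(1 - a - b), which requires a + b < 1 (an ergodicity-type condition), shows up directly in a long sample path. A sketch under assumed parameter values:

```python
import numpy as np

def simulate_ingarch(n, d=1.0, a=0.3, b=0.4, seed=0):
    """Poisson autoregression (INGARCH(1,1)):
    y_t | past ~ Poisson(lam_t),  lam_t = d + a*lam_{t-1} + b*y_{t-1}."""
    rng = np.random.default_rng(seed)
    lam = d / (1.0 - a - b)        # start at the stationary mean
    ys = np.empty(n)
    for t in range(n):
        y = rng.poisson(lam)
        ys[t] = y
        lam = d + a * lam + b * y
    return ys

ys = simulate_ingarch(100_000)     # stationary mean is 1 / 0.3 = 3.33...
```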

19.
Chi-Chung Wen, Metrika, 2010, 72(2): 199-217
This paper studies semiparametric maximum likelihood estimators in the Cox proportional hazards model with covariate error, assuming that the conditional distribution of the true covariate given the surrogate is known. We show that the estimator of the regression coefficient is asymptotically normal and efficient, its covariance matrix can be estimated consistently by differentiation of the profile likelihood, and the likelihood ratio test is asymptotically chi-squared. We also provide efficient algorithms for the computations of the semiparametric maximum likelihood estimate and the profile likelihood. The performance of this method is successfully demonstrated in simulation studies.

20.
The generalized linear mixed model (GLMM) extends classical regression analysis to non-normal, correlated response data. Because inference for GLMMs can be computationally difficult, simplifying distributional assumptions are often made. We focus on the robustness of estimators when a main component of the model, the random effects distribution, is misspecified. Results for the maximum likelihood estimators of the Poisson inverse Gaussian model are presented.

