Similar Documents
20 similar documents found (search time: 500 ms)
1.
We consider the problem of estimating a probability density function based on data that are corrupted by noise from a uniform distribution. The (nonparametric) maximum likelihood estimator for the corresponding distribution function is well defined. For the density function this is not the case. We study two nonparametric estimators for this density. The first is a type of kernel density estimate based on the empirical distribution function of the observable data. The second is a kernel density estimate based on the MLE of the distribution function of the unobservable (uncorrupted) data.
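As an illustration of the first type of estimator, a plain kernel density estimate applied to the observed (noise-corrupted) data can be sketched as follows. The Gaussian kernel, the bandwidth, and the toy data are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(data, x, bandwidth):
    """Plain kernel density estimate at x from the observable data."""
    n = len(data)
    return sum(gaussian_kernel((x - xi) / bandwidth) for xi in data) / (n * bandwidth)

# Toy setup: latent values corrupted by additive Uniform[0, 1] noise.
random.seed(0)
latent = [random.gauss(0.0, 1.0) for _ in range(500)]
observed = [v + random.uniform(0.0, 1.0) for v in latent]

density_at_half = kde(observed, 0.5, bandwidth=0.4)
```

This estimates the density of the *observed* data; recovering the density of the uncorrupted data (the deconvolution step studied in the paper) requires the additional machinery described in the abstract.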

2.
Recent development of intensity estimation for inhomogeneous spatial point processes with covariates suggests that kerneling in the covariate space is a competitive intensity estimation method for inhomogeneous Poisson processes. It is not known whether this advantageous performance is still valid when the points interact. In the simplest common case, this happens, for example, when the objects represented as points have a spatial dimension. In this paper, kerneling in the covariate space is extended to Gibbs processes with covariate‐dependent chemical activity and inhibitive interactions, and the performance of the approach is studied through extensive simulation experiments. It is demonstrated that under mild assumptions on the dependence of the intensity on covariates, this approach can provide better results than the classical nonparametric method based on local smoothing in the spatial domain. In comparison with parametric pseudo‐likelihood estimation, the nonparametric approach can be more accurate, particularly when the dependence on covariates is weak or when there is uncertainty about the model or about the range of interactions. An important supplementary task is the dimension reduction of the covariate space. It is shown that the techniques based on inverse regression, previously applied to Cox processes, are useful even when interactions are present. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.

3.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non‐linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it results in the estimates that would come from solving a bias‐corrected estimation equation, making it a consistent estimator when regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations, but for estimates from the bias‐corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The new approach yields two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.

4.
The relevance-weighted likelihood function weights individual contributions to the likelihood according to their relevance for the inferential problem of interest. Consistency and asymptotic normality of the weighted maximum likelihood estimator were previously proved for independent sequences of random variables. We extend these results to apply to dependent sequences, and, in so doing, provide a unified approach to a number of diverse problems in dependent data. In particular, we provide a heretofore unknown approach for dealing with heterogeneity in adaptive designs, and unify the smoothing approach that appears in many foundational papers for independent data. Applications are given in clinical trials, psychophysics experiments, time series models, transition models, and nonparametric regression. Received: April 2000

5.
While jackknife and bootstrap estimates of the variance of a statistic are well-known, the author extends these nonparametric maximum likelihood techniques to the estimation of skewness and kurtosis. In addition to the usual negative jackknife, the positive jackknife proposed by Beran (1984) also receives attention in this work. The performance of the methods is investigated in a Monte Carlo study for Kendall's tau in various situations likely to occur in practice. Possible applications of these developments are discussed.
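The delete-one ("negative") jackknife that this work starts from can be sketched as follows; extending the same resampling idea to skewness and kurtosis is the paper's contribution, and the statistic and toy data below are illustrative assumptions:

```python
def jackknife_variance(data, statistic):
    """Delete-one ('negative') jackknife estimate of Var[statistic(data)]."""
    n = len(data)
    # Recompute the statistic n times, each time leaving one observation out.
    loo = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    return (n - 1) / n * sum((t - loo_mean) ** 2 for t in loo)

sample = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9, 3.7, 4.4]
var_of_mean = jackknife_variance(sample, lambda xs: sum(xs) / len(xs))
```

For the sample mean, this jackknife estimate coincides exactly with the usual s²/n, which gives a quick correctness check on the implementation.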

6.
This paper uses free-knot and fixed-knot regression splines in a Bayesian context to develop methods for the nonparametric estimation of functions subject to shape constraints in models with log-concave likelihood functions. The shape constraints we consider include monotonicity, convexity and functions with a single minimum. A computationally efficient MCMC sampling algorithm is developed that converges faster than previous methods for non-Gaussian models. Simulation results indicate the monotonically constrained function estimates have good small sample properties relative to (i) unconstrained function estimates, and (ii) function estimates obtained from other constrained estimation methods when such methods exist. Also, asymptotic results show the methodology provides consistent estimates for a large class of smooth functions. Two detailed illustrations exemplify the ideas.

7.
It is shown how to implement an EM algorithm for maximum likelihood estimation of hierarchical nonlinear models for data sets consisting of more than two levels of nesting. This upward–downward algorithm makes use of the conditional independence assumptions implied by the hierarchical model. It can not only be used for the estimation of models with a parametric specification of the random effects, but also to extend the two-level nonparametric approach – sometimes referred to as latent class regression – to three or more levels. The proposed approach is illustrated with an empirical application.

8.
Abstract. Developments in the vast and growing literatures on nonparametric and semiparametric statistical estimation are reviewed. The emphasis is on useful methodology rather than statistical properties for their own sake. Some empirical applications to economic data are described. The paper deals separately with nonparametric density estimation, nonparametric regression estimation, and estimation of semiparametric models.

9.
Central limit theorems are developed for instrumental variables estimates of linear and semiparametric partly linear regression models for spatial data. General forms of spatial dependence and heterogeneity in explanatory variables and unobservable disturbances are permitted. We discuss estimation of the variance matrix, including estimates that are robust to disturbance heteroscedasticity and/or dependence. A Monte Carlo study of finite-sample performance is included. In an empirical example, the estimates and robust and non-robust standard errors are computed from Indian regional data, following tests for spatial correlation in disturbances, and nonparametric regression fitting. Some final comments discuss modifications and extensions.

10.
In this article, we propose a mean linear regression model where the response variable is inverse gamma distributed, using a new parameterization of this distribution that is indexed by mean and precision parameters. The main advantage of the new parameterization is the straightforward interpretation of the regression coefficients in terms of the expectation of the positive response variable, as usual in the context of generalized linear models. The variance function of the proposed model has a quadratic form. The inverse gamma distribution is a member of the exponential family of distributions and has as special cases some distributions commonly used for parametric models in survival analysis. We compare the proposed model to several alternatives and illustrate its advantages and usefulness. With a generalized linear model approach that takes advantage of exponential family properties, we discuss model estimation (by maximum likelihood), further inferential quantities, and diagnostic tools. A Monte Carlo experiment is conducted to evaluate the performance of these estimators in finite samples, with a discussion of the results obtained. A real application using a minerals data set collected by the Department of Mines of the University of Atacama, Chile, demonstrates the practical potential of the proposed model.
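One way to realize a mean/precision parameterization of the inverse gamma is sketched below. The exact parameterization in the paper may differ; treat shape = phi + 1, scale = mu * phi as an illustrative choice, which gives E[X] = mu and the quadratic variance Var[X] = mu² / (phi − 1) for phi > 1:

```python
import random

def inv_gamma_mean_precision_sample(mu, phi, rng):
    """Sample an inverse gamma reparameterized by mean mu and precision phi.
    Illustrative choice: shape = phi + 1, scale = mu * phi, so that
    E[X] = mu and Var[X] = mu**2 / (phi - 1) for phi > 1 (quadratic in mu)."""
    shape, scale = phi + 1.0, mu * phi
    # random.gammavariate(alpha, beta) takes beta as the *scale* of the gamma,
    # so Gamma(shape, rate=scale) is gammavariate(shape, 1/scale); the
    # reciprocal of that draw is InvGamma(shape, scale).
    g = rng.gammavariate(shape, 1.0 / scale)
    return 1.0 / g

rng = random.Random(42)
mu, phi = 3.0, 5.0
draws = [inv_gamma_mean_precision_sample(mu, phi, rng) for _ in range(20000)]
sample_mean = sum(draws) / len(draws)
```

With this choice a regression model can link covariates directly to mu, as the abstract describes for generalized linear models.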

11.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modelling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention on applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
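The exponentiated Weibull adds a second shape parameter to the ordinary Weibull CDF; a minimal sketch (parameter names are assumptions):

```python
import math

def exp_weibull_cdf(t, k, sigma, gamma):
    """Exponentiated Weibull CDF: F(t) = (1 - exp(-(t/sigma)**k))**gamma,
    with k, sigma > 0 the Weibull shape/scale and gamma > 0 the extra shape.
    gamma = 1 recovers the ordinary Weibull distribution."""
    return (1.0 - math.exp(-((t / sigma) ** k))) ** gamma
```

The interplay of k and gamma is what allows both monotone and nonmonotone (bathtub or unimodal) hazard shapes, the flexibility the abstract refers to.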

12.
We consider nonparametric/semiparametric estimation and testing of econometric models with data dependent smoothing parameters. Most of the existing works on asymptotic distributions of a nonparametric/semiparametric estimator or a test statistic are based on some deterministic smoothing parameters, while in practice it is important to use data-driven methods to select the smoothing parameters. In this paper we give a simple sufficient condition that can be used to establish the first order asymptotic equivalence of a nonparametric estimator or a test statistic with stochastic smoothing parameters to those using deterministic smoothing parameters. We also allow for general weakly dependent data.

13.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit‐level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two‐way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log‐linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.

14.
We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample size is not large, for instance smaller than about 1000, asymptotic distribution-free estimation methods are also not applicable. This paper assumes that the observed variables are transformed to normally distributed variables. The non-normally distributed variables are transformed with a Box–Cox function. Estimation of the model parameters and the transformation parameters is done by the maximum likelihood method. Furthermore, the test statistics (i.e. standard deviations) of these parameters are derived. This makes it possible to show the importance of the transformations. Finally, an empirical example is presented.
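The Box–Cox family used here for the transformation to normality is standard; a minimal sketch:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive observation x:
    (x**lam - 1) / lam for lam != 0, and log(x) in the limit lam -> 0."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam
```

In the paper's setting the transformation parameter lam is estimated jointly with the structural model parameters by maximum likelihood rather than fixed in advance.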

15.
W.F. Sheppard has been much overlooked in the history of statistics, although his work made significant contributions. He developed a polynomial smoothing method and corrections of moment estimates for grouped data, as well as extensive normal probability tables that have been widely used since the early 20th century. Sheppard presented his smoothing method for actuaries in a series of publications during the early 20th century. Population data contain irregularities, and some adjustment or smoothing of the data is often necessary. Simple techniques, such as Spencer's summation formulae involving arithmetic operations and moving averages, were commonly practised by actuaries to smooth equally spaced data. Sheppard's method, however, is a polynomial smoothing method based on central differences. We show how Sheppard's smoothing method was a significant milestone in the development of smoothing techniques and a precursor to local polynomial regression.
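Spencer's summation formula mentioned above is a concrete example of the moving-average graduation actuaries used; a sketch of the 15-term version with the classical weights (these reproduce cubic polynomials exactly at interior points):

```python
# Classical Spencer 15-point graduation weights (they sum to 320).
SPENCER_15 = [-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3]

def spencer_smooth(values):
    """Graduate an equally spaced series with Spencer's 15-term formula.
    Returns smoothed values for the interior indices 7 .. len(values) - 8."""
    out = []
    for i in range(7, len(values) - 7):
        s = sum(w * values[i - 7 + j] for j, w in enumerate(SPENCER_15))
        out.append(s / 320.0)
    return out
```

Exact reproduction of cubics is the property that makes such formulae precursors to local polynomial regression, which fits low-degree polynomials locally by weighted least squares.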

16.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses the maximum likelihood method to estimate parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations, I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in independent variables. No estimator is uniquely best; hence, in actual estimation the choice of estimator should depend on the sample size and the type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the estimator used, given the type of measurement errors that characterize the data, would often help in interpreting the estimates. This paper shows how such a study can be performed.

17.
This paper proposes a simple estimation method for a class of panel data models with heterogeneous nonparametric time trends. Based on the idea of local polynomial regression, the time-trend component is first removed from the data; the common coefficients are then estimated by least squares, yielding at the same time a nonparametric estimate of the time-trend function. Under some regularity conditions, the asymptotic properties of these estimators are studied: asymptotic consistency and asymptotic normality of each estimator are established as the time dimension T and the cross-sectional dimension n tend to infinity simultaneously. Finally, Monte Carlo simulations examine the finite-sample properties of the estimation method.
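The local polynomial detrending step can be sketched as below: an illustrative local linear (degree-1) smoother with a Gaussian kernel, not the paper's exact estimator, with the function and bandwidth names assumed:

```python
import math

def local_linear_fit(t_grid, t_obs, y_obs, h):
    """Local linear (degree-1 local polynomial) trend estimate with a
    Gaussian kernel; returns the fitted trend at each point of t_grid."""
    fitted = []
    for t0 in t_grid:
        # Weighted least squares of y on (t - t0) around t0.
        w = [math.exp(-0.5 * ((t - t0) / h) ** 2) for t in t_obs]
        s0 = sum(w)
        s1 = sum(wi * (t - t0) for wi, t in zip(w, t_obs))
        s2 = sum(wi * (t - t0) ** 2 for wi, t in zip(w, t_obs))
        r0 = sum(wi * y for wi, y in zip(w, y_obs))
        r1 = sum(wi * (t - t0) * y for wi, t, y in zip(w, t_obs, y_obs))
        det = s0 * s2 - s1 * s1
        # The local intercept is the trend estimate at t0.
        fitted.append((s2 * r0 - s1 * r1) / det)
    return fitted
```

Subtracting such a fitted trend from each cross-sectional unit's series, then running least squares on the detrended data, mirrors the two-step scheme the abstract describes.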

18.
Longitudinal methods have been widely used in biomedicine and epidemiology to study the patterns of time-varying variables, such as disease progression or trends of health status. Data sets of longitudinal studies usually involve repeatedly measured outcomes and covariates on a set of randomly chosen subjects over time. An important goal of statistical analyses is to evaluate the effects of the covariates, which may or may not depend on time, on the outcomes of interest. Because fully parametric models may be subject to model misspecification and completely unstructured nonparametric models may suffer from the drawbacks of the "curse of dimensionality", the varying-coefficient models are a class of structural nonparametric models which are particularly useful in longitudinal analyses. In this article, we present several important nonparametric estimation and inference methods for this class of models, demonstrate the advantages, limitations and practical implementations of these methods in different longitudinal settings, and discuss some potential directions of further research in this area. Applications of these methods are illustrated through two epidemiological examples.

19.
L. Nie, Metrika (2006) 63(2): 123–143
Generalized linear and nonlinear mixed-effects models are used extensively in biomedical, social, and agricultural sciences. The statistical analysis of these models is based on the asymptotic properties of the maximum likelihood estimator. However, it is usually assumed that the maximum likelihood estimator is consistent, without providing a proof. A rigorous proof of the consistency by verifying conditions from existing results can be very difficult due to the integrated likelihood. In this paper, we present some easily verifiable conditions for the strong consistency of the maximum likelihood estimator in generalized linear and nonlinear mixed-effects models. Based on this result, we prove that the maximum likelihood estimator is consistent for some frequently used models such as mixed-effects logistic regression models and growth curve models.

20.
This paper presents a method to test for multimodality of an estimated kernel density of derivative estimates from a nonparametric regression. The test is included in a study of nonparametric growth regressions. The results show that in the estimation of unconditional β‐convergence the distribution of the partial effects is multimodal, with one mode in the negative region (primarily OECD economies) and possibly two modes in the positive region (primarily non‐OECD economies) of the estimates. The results for conditional β‐convergence show that the density is predominantly negative and there is mixed evidence that the distribution is unimodal. Copyright © 2009 John Wiley & Sons, Ltd.
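A crude version of the mode counting underlying such multimodality tests (in the spirit of bandwidth-based tests such as Silverman's, not the paper's exact procedure) can be sketched as:

```python
import math

def kde(data, x, h):
    """Gaussian kernel density estimate at x with bandwidth h."""
    c = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) / c

def count_modes(data, h, grid):
    """Count interior local maxima of the kernel density along a grid."""
    dens = [kde(data, g, h) for g in grid]
    return sum(
        1
        for i in range(1, len(dens) - 1)
        if dens[i] > dens[i - 1] and dens[i] >= dens[i + 1]
    )
```

The key fact such tests exploit is that for a Gaussian kernel the number of modes is non-increasing in the bandwidth h: a small h reveals separate modes (e.g. the OECD and non-OECD clusters of partial effects), while a large h merges them into one.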


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号