Similar literature
20 similar documents found (search time: 31 ms)
1.
An estimation procedure is presented for a class of threshold models for ordinal data. These models may include both fixed and random effects, with associated components of variance, on an underlying scale. The residual error distribution on the underlying scale can be given greater flexibility by introducing additional shape parameters, e.g. a kurtosis parameter or parameters that model heterogeneous residual variances as a function of factors and covariates. The estimation procedure is an extension of an iterative re-weighted restricted maximum likelihood procedure originally developed for generalized linear mixed models. The procedure is illustrated with a practical problem involving damage to potato tubers and with animal breeding and medical data from the literature.
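As a minimal sketch of the model class described above (generic notation, not the paper's own), a threshold model links an ordinal response with K categories to an underlying scale carrying fixed effects and random effects:

```latex
% Cumulative threshold model: ordinal Y_i with categories 1..K,
% thresholds \theta_1 < ... < \theta_{K-1}, residual distribution F.
P(Y_i \le k \mid \mathbf{u}) =
  F\!\left(\frac{\theta_k - \mathbf{x}_i'\boldsymbol{\beta} - \mathbf{z}_i'\mathbf{u}}{\sigma_i}\right),
\qquad k = 1, \dots, K - 1,
```

where F may carry extra shape parameters (e.g. kurtosis) and the residual scale σ_i may depend on factors and covariates, matching the heterogeneous-variance extension mentioned in the abstract.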

2.
A broad class of generalized linear mixed models, e.g. variance components models for binary data, percentages or count data, is introduced by incorporating additional random effects into the linear predictor of a generalized linear model structure. Parameters are estimated by a combination of quasi-likelihood and iterated MINQUE (minimum norm quadratic unbiased estimation), the latter being numerically equivalent to REML (restricted, or residual, maximum likelihood). First, conditional upon the additional random effects, observations on a working variable and weights are derived by quasi-likelihood, using iteratively re-weighted least squares. Second, a linear mixed model is fitted to the working variable, employing the weights for the residual error terms, by iterated MINQUE. The latter may be regarded as a least squares procedure applied to squared and product terms of error contrasts derived from the working variable. No full distributional assumptions are needed for estimation. The model may be fitted with standard software for weighted regression and REML.
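The two-step cycle can be written compactly in standard IRLS notation (a sketch of the construction, not the authors' exact formulation). Given the current linear predictor η = Xβ + Zu with mean μ = h(η):

```latex
% Working variable \zeta and weights w for one quasi-likelihood step;
% a linear mixed model is then fitted to \zeta with residual weights w.
\zeta_i = \eta_i + (y_i - \mu_i)\,\frac{\partial \eta_i}{\partial \mu_i},
\qquad
w_i = \left(\frac{\partial \mu_i}{\partial \eta_i}\right)^{2} \frac{1}{V(\mu_i)},
```

with V(μ) the (quasi-)variance function; the mixed-model fit by iterated MINQUE/REML updates β and u, and the cycle repeats until the linear predictor stabilises.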

3.
Scattered reports of multiple maxima in posterior distributions or likelihoods for mixed linear models appear throughout the literature. Less scrutinised is the restricted likelihood, which is the posterior distribution for a specific prior distribution. This paper surveys the existing literature and proposes a unifying framework for understanding multiple maxima. For problems with covariance structures that are diagonalisable in a specific sense, the restricted likelihood can be viewed as a generalised linear model with gamma errors, identity link and a prior distribution on the error variance. The generalised linear model portion of the restricted likelihood can be made to conflict with the portion of the restricted likelihood that functions like a prior distribution on the error variance, giving two local maxima in the restricted likelihood. Additionally, applying an explicit conjugate prior distribution to the variance parameters permits a second local maximum in the marginal posterior distribution even if the likelihood contribution has a single maximum. Moreover, reparameterisation from variance to precision can change the posterior modality; the converse is also true. Modellers should beware of these potential pitfalls when selecting prior distributions or using peak-finding algorithms to estimate parameters.
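A sketch of the diagonalised structure behind this view, in generic two-variance-component notation (an illustration of the framework, not the paper's exact formulae): after transforming to error contrasts, the restricted likelihood is built from independent squared terms

```latex
% w_j: squared error contrasts; \lambda_j >= 0: known eigenvalues of the
% random-effect design; terms with \lambda_j = 0 involve only \sigma_e^2.
w_j \sim \left(\sigma_e^2 + \lambda_j\,\sigma_s^2\right)\chi^2_1,
\qquad j = 1, \dots, n - p,
```

which is a gamma model (shape 1/2) with identity link and "covariate" λ_j; the λ_j = 0 terms involve σ_e² alone and thus act like the prior-type contribution on the error variance that can conflict with the GLM-like portion.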

4.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first-order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual-specific effects. We consider different approaches taking into account the non-negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non-negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log-likelihood function. We illustrate these issues by modelling US state-level unemployment dynamics.
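Since the first-order condition is cubic in the autoregressive parameter, estimation reduces to a root-selection step. A hypothetical Python sketch (the coefficients c3..c0 must be derived from the model and data; that derivation is not reproduced here):

```python
import numpy as np

def admissible_autoregressive_roots(c3, c2, c1, c0):
    """Solve the cubic first-order condition
        c3*rho**3 + c2*rho**2 + c1*rho + c0 = 0
    and return the real roots in the stationary region |rho| < 1.
    The coefficients are assumed to come from the TML/RML score
    equations; candidate roots must still be compared on the
    likelihood itself to find the global maximum."""
    roots = np.roots([c3, c2, c1, c0])
    real_roots = roots[np.abs(roots.imag) < 1e-10].real
    return real_roots[np.abs(real_roots) < 1.0]
```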

5.
The effective use of spatial information in a regression-based approach to small area estimation is an important practical issue. One approach to account for geographic information is to extend the linear mixed model to allow for spatially correlated random area effects. An alternative is to include the spatial information through a non-parametric mixed model. Another option is geographically weighted regression, where the model coefficients vary spatially across the geography of interest. Although these approaches are useful for estimating small area means efficiently under strict parametric assumptions, they can be sensitive to outliers. In this paper, we propose robust extensions of the geographically weighted empirical best linear unbiased predictor. In particular, we introduce robust projective and predictive estimators under spatial non-stationarity. Mean squared error estimation is performed by two analytic approaches that account for the spatial structure in the data. Model-based simulations show that the proposed methodology often leads to more efficient estimators. Furthermore, the analytic mean squared error estimators introduced have appealing properties in terms of stability and bias. Finally, we demonstrate in an application that the new methodology is a good choice for producing estimates of average rents for apartments in urban planning areas in Berlin.
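The spatial weighting underlying geographically weighted estimation can be sketched as follows (a generic Gaussian-kernel example with hypothetical names; the robust GWEBLUP proposed in the paper builds outlier-robust prediction on top of weights of this kind):

```python
import numpy as np

def gwr_weights(coords, target, bandwidth):
    """Gaussian kernel weights for geographically weighted regression:
    observations close to the target location receive large weight,
    distant ones receive weight near zero.

    coords   : (n, 2) array of observation locations
    target   : (2,) location at which local coefficients are estimated
    bandwidth: kernel bandwidth controlling the degree of localisation
    """
    distances = np.linalg.norm(coords - target, axis=1)
    return np.exp(-0.5 * (distances / bandwidth) ** 2)
```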

6.
This paper studies an alternative quasi-likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi-likelihood (QL), called a limited information quasi-likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. On the other hand, our study builds a bridge between the classical and Bayesian approaches to statistical inference under possible model misspecification. We establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL, we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.
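The shared "sandwich"-type second moment has the familiar form from White (1982), sketched here in generic notation: with quasi-log-likelihood q and quasi-score s = ∇q evaluated at the pseudo-true value θ*,

```latex
A(\theta_*) = \operatorname{E}\!\left[\nabla^2_{\theta}\, q(y;\theta_*)\right],
\qquad
B(\theta_*) = \operatorname{E}\!\left[s(y;\theta_*)\, s(y;\theta_*)'\right],
\qquad
\mathrm{Avar}(\hat{\theta}) = A(\theta_*)^{-1} B(\theta_*)\, A(\theta_*)^{-1}.
```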

7.
The most popular econometric models in the panel data literature are the class of linear panel data models with unobserved individual- and/or time-specific effects. The consistency of parameter estimators and the validity of their economic interpretations as marginal effects depend crucially on the correct functional form specification of the linear panel data model. In this paper, a new class of residual-based tests is proposed for checking the validity of dynamic panel data models with both large cross-sectional and time series dimensions. The individual and time effects can be fixed or random, and panel data can be balanced or unbalanced. The tests can detect a wide range of model misspecifications in the conditional mean of a dynamic panel data model, including functional form and lag misspecification. They check a large number of lags, so they can capture misspecification at any lag order asymptotically. No common alternative is assumed, thus allowing for heterogeneity in the degrees and directions of functional form misspecification across individuals. Thanks to the use of panel data with large N and T, the proposed nonparametric tests have an asymptotic normal distribution under the null hypothesis without requiring the smoothing parameters to grow with the sample sizes. This suggests better nonparametric asymptotic approximation for panel data than for time series or cross-sectional data, which is confirmed in a simulation study. We apply the new tests to check the linear specification of cross-country growth equations and find significant nonlinearities in mean for the OECD countries’ growth equation in both annual and quintannual (five-year) panel data.

8.
This paper considers location-scale quantile autoregression in which the location and scale parameters are subject to regime shifts. The regime changes in the lower and upper tails are determined by the outcome of a latent, discrete-state Markov process. The new method provides direct inference and estimates for different parts of a non-stationary time series distribution. Bayesian inference for switching regimes within a quantile, via a three-parameter asymmetric Laplace distribution, is adapted and designed for parameter estimation. Using the Bayesian output, the marginal likelihood is readily available for testing the presence and the number of regimes. The simulation study shows that the predictability of regimes and conditional quantiles obtained by using the asymmetric Laplace distribution as the likelihood is fairly comparable with that under the true model distributions. However, ignoring that the autoregressive coefficients might be quantile dependent leads to substantial bias in both regime inference and quantile prediction. The potential of this new approach is illustrated in empirical applications to US inflation and real exchange rates for asymmetric dynamics, and to S&P 500 index returns at different frequencies for financial market risk assessment.
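The working likelihood in question is the three-parameter asymmetric Laplace density for the τ-th quantile, with location μ and scale σ (standard form):

```latex
% \rho_\tau is the quantile check function; maximising this likelihood
% in \mu reproduces quantile-regression estimation at level \tau.
f(y \mid \mu, \sigma, \tau) = \frac{\tau(1 - \tau)}{\sigma}
\exp\!\left\{ -\rho_\tau\!\left(\frac{y - \mu}{\sigma}\right) \right\},
\qquad
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right).
```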

9.
The generalized linear mixed model (GLMM) extends classical regression analysis to non-normal, correlated response data. Because inference for GLMMs can be computationally difficult, simplifying distributional assumptions are often made. We focus on the robustness of estimators when a main component of the model, the random effects distribution, is misspecified. Results for the maximum likelihood estimators of the Poisson inverse Gaussian model are presented.

10.
This paper presents estimation methods for dynamic nonlinear models with correlated random effects (CRE) when panels are unbalanced. Unbalancedness is often encountered in applied work, and ignoring it in dynamic nonlinear models produces inconsistent estimates even if the unbalancedness process is completely at random. We show that selecting a balanced panel from the sample can produce efficiency losses or even inconsistent estimates of the average marginal effects. We allow the process that determines the unbalancedness structure of the data to be correlated with the permanent unobserved heterogeneity. We discuss how to address estimation by maximizing the likelihood function for the whole sample and also propose a Minimum Distance approach, which is computationally simpler and asymptotically equivalent to Maximum Likelihood estimation. Our Monte Carlo experiments and empirical illustration show that the issue is relevant. Our proposed solutions perform better, in terms of both bias and RMSE, than approaches that ignore the unbalancedness or that balance the sample.

11.
This paper is concerned with statistical inference on seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non-parametric components, by applying the undersmoothing technique and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean-squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non-concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated in extensive simulation studies and applied to a set of real data.

12.
Although each statistical unit on which measurements are taken is unique, there is typically not enough information available to account fully for its uniqueness. Therefore, heterogeneity among units has to be limited by structural assumptions. One classical approach is to use random effects models, which assume that heterogeneity can be described by distributional assumptions. However, inference may depend on the assumed mixing distribution, and it is assumed that the random effects and the observed covariates are independent. An alternative considered here is fixed effects models, which let each unit have its own parameter. They are quite flexible but suffer from the large number of parameters. The structural assumption made here is that there are clusters of units that share the same effects. It is shown how clusters can be identified by tailored regularised estimators. Moreover, it is shown that the regularised estimates compete well with estimates from the random effects model, even if the latter is the data-generating model. They dominate if clusters are present.
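One tailored regulariser of this kind is a pairwise fusion penalty (an illustrative choice, not necessarily the paper's exact estimator): unit-specific effects are shrunk towards one another, and units whose estimates fuse to a common value form a cluster,

```latex
% \beta_i: unit-specific effect; \gamma: common covariate effects;
% \lambda: tuning parameter controlling how aggressively effects fuse.
(\hat{\boldsymbol{\beta}}, \hat{\boldsymbol{\gamma}}) =
\arg\min_{\boldsymbol{\beta}, \boldsymbol{\gamma}}
\sum_{i=1}^{n} \sum_{t=1}^{T_i}
\left(y_{it} - \beta_i - \mathbf{x}_{it}'\boldsymbol{\gamma}\right)^2
+ \lambda \sum_{i < j} \left|\beta_i - \beta_j\right|.
```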

13.
The model misspecification effects on the maximum likelihood estimator are studied when a biased sample is treated as a random one as well as when a random sample is treated as a biased one. The relation between the existence of a consistent estimator under model misspecification and the completeness of the distribution is also considered. The cases of the weight invariant distribution and the scale parameter distribution are examined and finally an example is presented to illustrate the results.
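For orientation, the generic weighted-distribution (biased-sampling) setup is as follows (standard notation; a sketch of the setting rather than the paper's specific cases): if X has density f and is observed with selection weight w(x), the sampled observations follow

```latex
f_w(x) = \frac{w(x)\, f(x)}{\operatorname{E}\!\left[w(X)\right]},
```

so maximising a likelihood built from f when the data actually come from f_w, or vice versa, is precisely the kind of model misspecification studied here.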

14.
This paper proposes a common and tractable framework for analyzing fixed and random effects models, in particular constant-slope variable-intercept designs. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same prior information on idiosyncratic parameters is introduced into all estimation methods, the resulting common slope estimator is the same across methods. These results are illustrated using the Grünfeld investment data with different prior distributions. Random effects estimates are shown to be more efficient than fixed effects estimates. This efficiency gain, however, comes at the cost of neglecting information obtained in the computation of the prior unknown variance of idiosyncratic parameters.

15.
Model selection among several non-nested models using the deviance information criterion within the Bayesian inference Using Gibbs Sampling (BUGS) software needs to be treated with caution. This is particularly important if one can specify a model in various mixing representations, as for the normal variance-mean mixing distribution occurring in financial contexts. We propose a procedure to compare the goodness of fit of several non-nested models, which uses BUGS software in part.
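For reference, the criterion as reported by BUGS is computed from standard definitions: with deviance D(θ) = −2 log L(θ), posterior mean deviance D̄, and posterior mean θ̄,

```latex
\mathrm{DIC} = \bar{D} + p_D,
\qquad
p_D = \bar{D} - D(\bar{\theta}).
```

Because the effective number of parameters p_D depends on the parameterisation, equivalent mixing representations of the same model can report different DIC values, which is one source of the caution advised here.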

16.
Recent interest in statistical inference for panel data has focused on the problem of unobservable, individual-specific, random effects and the inconsistencies they introduce in estimation when they are correlated with other exogenous variables. Analysis of this problem has always assumed the variance components to be known. In this paper, we re-examine some of these questions in finite samples when the variance components must be estimated. In particular, when the effects are uncorrelated with other explanatory variables, we show that (i) the feasible Gauss-Markov estimator is more efficient than the within groups estimator for all but the fewest degrees of freedom and its variance is never more than 17% above the Cramer-Rao bound, (ii) the asymptotic approximation to the variance of the feasible Gauss-Markov estimator is similarly within 17% of the true variance but remains significantly smaller for moderately large sample sizes, and (iii) more efficient estimators for the variance components do not necessarily yield more efficient feasible Gauss-Markov estimators.
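In the balanced one-way error components model y_it = x_it'β + u_i + e_it, the feasible Gauss-Markov (GLS) estimator can be computed as OLS on quasi-demeaned data (standard textbook form, shown for orientation):

```latex
\hat{\theta} = 1 - \frac{\hat{\sigma}_e}{\sqrt{\hat{\sigma}_e^2 + T\,\hat{\sigma}_u^2}},
\qquad
y_{it} - \hat{\theta}\,\bar{y}_{i\cdot}
\ \text{ regressed on }\
\mathbf{x}_{it} - \hat{\theta}\,\bar{\mathbf{x}}_{i\cdot},
```

so its finite-sample behaviour hinges on the estimated variance components, which is exactly the dependence re-examined in this paper.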

17.
The analysis of unbalanced linear models with variance components
Statistical inference for fixed effects, random effects and components of variance in an unbalanced linear model with variance components is discussed. Variance components are estimated by restricted maximum likelihood. Iterative procedures for computing the estimates, such as Fisher scoring and the EM algorithm, are described.
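For reference, the restricted log-likelihood maximised here has the standard form (generic notation): for y ~ N(Xβ, V(σ²)), with P = V⁻¹ − V⁻¹X(X'V⁻¹X)⁻¹X'V⁻¹,

```latex
\ell_R(\boldsymbol{\sigma}^2) =
-\tfrac{1}{2}\left[\log\lvert V\rvert
+ \log\lvert X'V^{-1}X\rvert
+ \mathbf{y}'P\,\mathbf{y}\right] + \text{const},
```

and Fisher scoring and the EM algorithm are two iterative schemes for locating its maximum.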

18.
Interval estimation is an important objective of most experimental and observational studies. Knowing at the design stage of the study how wide the confidence interval (CI) is expected to be and where its limits are expected to fall can be very informative. The asymptotic distribution of the confidence limits can also be used to answer complex questions of power analysis by computing power as the probability that a CI will exclude a given parameter value. The CI-based approach to power, and methods of calculating the expected size and location of asymptotic CIs as a measure of expected precision of estimation, are reviewed in the present paper. The theory is illustrated with commonly used estimators, including unadjusted risk differences, odds ratios and rate ratios, as well as more complex estimators based on multivariable linear, logistic and Cox regression models. It is noted that in applications with non-linear models, some care must be exercised when selecting the appropriate variance expression. In particular, the well-known ‘short-cut’ variance formula for the Cox model can be very inaccurate under unequal allocation of subjects to comparison groups. A more accurate expression is derived analytically and validated in simulations. Applications with ‘exact’ CIs are also considered.
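A minimal sketch of the CI-based power computation for a Wald-type interval (a generic asymptotic normal approximation with hypothetical function names, not the paper's exact formulae):

```python
from scipy.stats import norm

def ci_power(theta, theta0, se, alpha=0.05):
    """Power computed as the probability that a two-sided (1 - alpha)
    Wald CI, theta_hat +/- z * se, excludes theta0 when the true effect
    is theta with asymptotic standard error se."""
    z = norm.ppf(1 - alpha / 2)
    d = (theta - theta0) / se          # standardised true effect
    # CI excludes theta0 iff |theta_hat - theta0| / se > z
    return norm.cdf(d - z) + norm.cdf(-d - z)

# Example: true risk difference 0.10, SE 0.04, null value 0
print(ci_power(0.10, 0.0, 0.04))  # approximately 0.71
```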

19.
The exponentiated Weibull distribution is a convenient alternative to the generalized gamma distribution for modeling time-to-event data. It accommodates both monotone and nonmonotone hazard shapes and is flexible enough to describe data with wide-ranging characteristics. It can also be used for regression analysis of time-to-event data. The maximum likelihood method is thus far the most widely used technique for inference, though there is a considerable body of research on improving the maximum likelihood estimators in terms of asymptotic efficiency. For example, there has recently been considerable attention on applying James–Stein shrinkage ideas to parameter estimation in regression models. We propose nonpenalty shrinkage estimation for the exponentiated Weibull regression model for time-to-event data. Comparative studies suggest that the shrinkage estimators outperform the maximum likelihood estimators in terms of statistical efficiency. Overall, the shrinkage method leads to more accurate statistical inference, a fundamental and desirable component of statistical theory.
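For reference, the exponentiated Weibull distribution function, with scale σ and shape parameters κ (Weibull) and γ (exponentiation), is

```latex
F(t) = \left[\,1 - \exp\!\left\{-(t/\sigma)^{\kappa}\right\}\right]^{\gamma},
\qquad t > 0, \quad \sigma, \kappa, \gamma > 0,
```

with γ = 1 recovering the ordinary Weibull; the interplay of κ and γ produces the monotone and non-monotone (unimodal or bathtub) hazard shapes noted above.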

20.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy-to-implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root-n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data-generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.
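A generic member of this model class (illustrative notation; the paper's exact specification may differ) has a dummy endogenous regressor Y2 entering the equation for Y1:

```latex
Y_1 = \mathbf{1}\{\mathbf{x}'\boldsymbol{\beta} + \delta\, Y_2 > \varepsilon_1\},
\qquad
Y_2 = \mathbf{1}\{\mathbf{z}'\boldsymbol{\gamma} > \varepsilon_2\},
```

with the joint law of (ε1, ε2) specified by a parametric copula over nonparametric marginal distributions; the bivariate probit arises as the special case of a Gaussian copula with normal marginals.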

