Similar Literature (20 matching records)
1.
2.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal-probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain-level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units, for example through a sampling step that selects geographically indexed clusters of units. A sampling-weighted pseudo-posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co-clustered units inflates the scale of the resulting pseudo-posterior covariance matrix, which has been shown to induce undercoverage of the credible sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions differ between the pseudo-maximum likelihood estimate (MLE), the pseudo-posterior and the MLE under simple random sampling. Through insights from survey-sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast post-processing step to Markov chain Monte Carlo draws of the pseudo-posterior distribution. This adjustment projects the pseudo-posterior covariance matrix such that nominal coverage is approximately achieved. We apply the method to the National Survey on Drug Use and Health as a motivating example, and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
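The covariance projection amounts to a linear map applied to the centered draws. Below is a minimal sketch, assuming a design-consistent (sandwich-type) target covariance has already been estimated elsewhere; the synthetic draws and the `v_target` matrix are purely illustrative, not taken from the paper.

```python
import numpy as np

def project_draws(draws, v_target):
    """Rescale MCMC draws so their sample covariance matches a
    design-consistent (sandwich) covariance estimate v_target."""
    mean = draws.mean(axis=0)
    centered = draws - mean
    v_draws = np.cov(centered, rowvar=False)
    l_d = np.linalg.cholesky(v_draws)    # v_draws = l_d @ l_d.T
    l_t = np.linalg.cholesky(v_target)   # v_target = l_t @ l_t.T
    m = np.linalg.solve(l_d.T, l_t.T)    # satisfies m.T @ v_draws @ m = v_target
    return mean + centered @ m

# Toy illustration: draws whose covariance is too tight relative to a
# design-based target (e.g. an H^{-1} J H^{-1} sandwich estimate).
rng = np.random.default_rng(0)
draws = rng.multivariate_normal([1.0, -0.5],
                                [[0.04, 0.01], [0.01, 0.09]], size=5000)
v_target = np.array([[0.08, 0.02], [0.02, 0.15]])
adjusted = project_draws(draws, v_target)
print(np.cov(adjusted, rowvar=False))   # approximately v_target
```

Because the map is applied after sampling, the adjustment costs one Cholesky pair and a matrix multiply, which is what makes it viable as a simple post-processing step.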

3.
This paper concerns estimating the parameters of a high-dimensional dynamic factor model by the method of maximum likelihood. To accommodate missing data in the analysis, we propose a new model representation for the dynamic factor model. It allows the Kalman filter and related smoothing methods to evaluate the likelihood function and to produce optimal factor estimates in a computationally efficient way when missing data are present. The implementation details of our methods for signal extraction and maximum likelihood estimation are discussed. The computational gains of the new devices are presented based on simulated data sets with varying numbers of missing entries.
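The core computational device, skipping the missing rows of the observation equation at each filtering step, can be illustrated with a textbook Kalman filter. This is a generic sketch rather than the paper's specific model representation; the system matrices and toy data are hypothetical.

```python
import numpy as np

def kalman_loglik_missing(y, Z, T, H, Q, a0, P0):
    """Prediction-error log-likelihood of y_t = Z a_t + eps_t,
    a_{t+1} = T a_t + eta_t, skipping NaN entries of y_t at each step."""
    a, P = a0.copy(), P0.copy()
    loglik = 0.0
    for yt in y:
        obs = ~np.isnan(yt)
        if obs.any():
            Zt, Ht = Z[obs], H[np.ix_(obs, obs)]
            v = yt[obs] - Zt @ a                  # prediction error
            F = Zt @ P @ Zt.T + Ht                # its covariance
            K = P @ Zt.T @ np.linalg.inv(F)       # Kalman gain
            a, P = a + K @ v, P - K @ F @ K.T     # measurement update
            _, logdet = np.linalg.slogdet(F)
            loglik -= 0.5 * (obs.sum() * np.log(2 * np.pi)
                             + logdet + v @ np.linalg.solve(F, v))
        a, P = T @ a, T @ P @ T.T + Q             # time update
    return loglik

# Toy one-factor model for two series, with roughly 10% of entries missing.
rng = np.random.default_rng(1)
n = 200
Z = np.array([[1.0], [0.5]]); T = np.array([[0.8]])
H = 0.2 * np.eye(2); Q = np.eye(1)
f = np.zeros(n)
for t in range(1, n):
    f[t] = 0.8 * f[t - 1] + rng.normal()
y = f[:, None] @ Z.T + rng.normal(scale=np.sqrt(0.2), size=(n, 2))
y[rng.random(y.shape) < 0.1] = np.nan
print(kalman_loglik_missing(y, Z, T, H, Q, np.zeros(1), np.eye(1) / (1 - 0.8**2)))
```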

4.
Phylogenetic trees are networks that describe the temporal relationship between individuals, species, or other units that are subject to evolutionary diversification. Many phylogenetic trees are constructed from molecular data that is often only available for extant species, and hence they lack some or all of the branches that did not survive to the present. This feature makes inference on the diversification process challenging. For relatively simple diversification models, analytical or numerical methods to compute the likelihood exist, but these do not work for more realistic models in which the likelihood depends on properties of the missing lineages. In this article, we study a general class of species diversification models, and we provide an expectation-maximization framework combined with a uniform sampling scheme to perform maximum likelihood estimation of the parameters of the diversification process.

5.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
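A canonical example of this mechanism is the Albert–Chib probit sampler: augmenting latent Gaussian utilities makes both full conditionals standard, so the integral over the latent variables is never evaluated. A minimal sketch with a flat prior on the coefficients (illustrative, not taken from the paper):

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, rng=None):
    """Gibbs sampler for probit regression: augment z_i ~ N(x_i'beta, 1)
    with y_i = 1{z_i > 0}; both full conditionals are then standard."""
    rng = rng or np.random.default_rng()
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)          # flat prior on beta
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for s in range(n_iter):
        mu = X @ beta
        # z | beta, y: normal truncated to (0, inf) if y=1, (-inf, 0] if y=0
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # beta | z: N((X'X)^{-1} X'z, (X'X)^{-1})
        beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
        draws[s] = beta
    return draws

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = (X @ np.array([0.3, 1.0]) + rng.normal(size=500) > 0).astype(int)
draws = probit_gibbs(y, X, rng=rng)
print(draws[500:].mean(axis=0))   # posterior means, near (0.3, 1.0)
```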

6.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non-normal) can be diagnosed, and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is in the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and the mean squared error can be substantial. For inference, we show through theory and simulation that under misspecification, standard likelihood ratio tests of truly non-zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither problem vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier-tailed than the normal. Fortunately, simple graphical and goodness-of-fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.
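The consistency claim for point estimation is easy to probe with a small simulation: fit a random-intercept model by Gaussian maximum likelihood when the true random effects are skewed. The sketch below codes the marginal likelihood in closed form for V = s2*I + t2*J within each cluster; all settings are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y, x, groups):
    """Gaussian ML for y = b0 + b1*x + u_g + e with a random intercept u_g,
    using the closed-form inverse and determinant of V = s2*I + t2*J."""
    b0, b1, log_s2, log_t2 = params
    s2, t2 = np.exp(log_s2), np.exp(log_t2)
    nll = 0.0
    for g in np.unique(groups):
        e = y[groups == g] - b0 - b1 * x[groups == g]
        m = e.size
        quad = (e @ e - (t2 / (s2 + m * t2)) * e.sum() ** 2) / s2
        logdet = (m - 1) * np.log(s2) + np.log(s2 + m * t2)
        nll += 0.5 * (m * np.log(2 * np.pi) + logdet + quad)
    return nll

# Misspecified random effects: centered exponential (skewed), variance 1.
rng = np.random.default_rng(3)
G, m = 200, 10
groups = np.repeat(np.arange(G), m)
u = rng.exponential(1.0, G) - 1.0
x = rng.normal(size=G * m)
y = 1.0 + 2.0 * x + u[groups] + rng.normal(size=G * m)
fit = minimize(neg_loglik, np.zeros(4), args=(y, x, groups),
               method="Nelder-Mead", options={"maxiter": 4000})
print(np.exp(fit.x[3]))   # variance component estimate, near the true value 1
```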

7.
This paper proposes a new method for estimating a structural model of labour supply in which hours of work depend on (log) wages and the wage rate is considered endogenous. The main innovation with respect to other related estimation procedures is that a nonparametric additive structure in the hours of work equation is permitted. Though the focus of the paper is on this particular application, a three-step methodology for estimating models in the presence of the above econometric problems is described. In the first step the reduced form parameters of the participation equation are estimated by a maximum likelihood procedure adapted for estimation of an additive nonparametric function. In the second step the structural parameters of the wage equation are estimated after obtaining the selection-corrected conditional mean function. Finally, in the third step the structural parameters of the labour supply equation are estimated using local maximum likelihood estimation techniques. The paper concludes with an application to illustrate the feasibility, performance and possible gain of using this method. Copyright © 2005 John Wiley & Sons, Ltd.

8.
We compare four estimation methods for the coefficients of a linear structural equation with instrumental variables. As the classical methods we consider the limited information maximum likelihood (LIML) estimator and the two-stage least squares (TSLS) estimator, and as the semi-parametric methods we consider the maximum empirical likelihood (MEL) estimator and the generalized method of moments (GMM) (or estimating equation) estimator. Tables and figures of the distribution functions of the four estimators are given for enough values of the parameters to cover most linear models of interest, including some heteroscedastic and nonlinear cases. We find that the LIML estimator performs well in terms of bounded loss functions and probabilities when the number of instruments is large, that is, in micro-econometric models with "many instruments" in the terminology of the recent econometric literature.
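For reference, both classical estimators are a few lines of linear algebra. The sketch below covers the case with no included exogenous regressors beyond the instruments, and the simulated design with twenty instruments merely illustrates the "many instruments" setting.

```python
import numpy as np
from scipy.linalg import eigh

def tsls(y, Y, Z):
    """Two-stage least squares for y = Y beta + u with instruments Z."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto span(Z)
    return np.linalg.solve(Y.T @ P @ Y, Y.T @ P @ y)

def liml(y, Y, Z):
    """LIML as a k-class estimator: kappa is the smallest root of
    det(W1 - kappa*W2) = 0 with W1 = Ybar'Ybar, W2 = Ybar'M_Z Ybar,
    where Ybar = [y, Y] and M_Z = I - P_Z."""
    Ybar = np.column_stack([y, Y])
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    W1 = Ybar.T @ Ybar
    W2 = Ybar.T @ (Ybar - P @ Ybar)
    kappa = eigh(W1, W2, eigvals_only=True)[0]   # smallest generalized eigenvalue
    A = Y.T @ Y - kappa * (Y.T @ Y - Y.T @ P @ Y)
    b = Y.T @ y - kappa * (Y.T @ y - Y.T @ P @ y)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(4)
n, k = 500, 20                                   # many instruments
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
v = 0.8 * u + 0.6 * rng.normal(size=n)           # endogeneity
Y = (Z @ np.full(k, 0.15) + v)[:, None]
y = Y[:, 0] * 1.0 + u                            # true beta = 1
print(tsls(y, Y, Z), liml(y, Y, Z))              # LIML is typically less biased
```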

9.
In a seminal paper, Mak (Journal of the Royal Statistical Society B, 55, 1993, 945) derived an efficient algorithm for solving non-linear unbiased estimation equations. In this paper, we show that when Mak's algorithm is applied to biased estimation equations, it yields the estimates that would be obtained by solving a bias-corrected estimation equation, making it a consistent estimator if regularity conditions hold. In addition, the properties that Mak established for his algorithm also apply in the case of biased estimation equations, but for estimates from the bias-corrected equations. The marginal likelihood estimator is obtained when the approach is applied to both maximum likelihood and least squares estimation of the covariance matrix parameters in the general linear regression model. The approach yields two new estimators when applied to the profile and marginal likelihood functions for estimating the lagged dependent variable coefficient in the dynamic linear regression model. Monte Carlo simulation results show that the new approach leads to a better estimator when applied to the standard profile likelihood. It is therefore recommended for situations in which standard estimators are known to be biased.
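A minimal concrete instance of the bias-correction idea (an illustration, not the paper's derivation): the ML estimating equation for the error variance in linear regression is biased, and solving the bias-corrected equation gives the marginal (REML) estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 30, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=2.0, size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta_hat) ** 2)

sigma2_ml = rss / n              # solves the (biased) ML estimating equation
sigma2_marginal = rss / (n - p)  # solves the bias-corrected equation (REML)
print(sigma2_ml, sigma2_marginal)
```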

10.
We consider nonlinear heteroscedastic single-index models where the mean function is a parametric nonlinear model and the variance function depends on a single-index structure. We develop an efficient estimation method for the parameters in the mean function using weighted least squares, and we propose a "delete-one-component" estimator for the single index in the variance function based on absolute residuals. Asymptotic properties of the estimators are also investigated. Estimation methods for the error distribution based on the classical empirical distribution function and on an empirical likelihood method are discussed. The empirical likelihood method allows assumptions on the error distribution to be incorporated into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.

11.
We develop new procedures for maximum likelihood estimation of affine term structure models with spanned or unspanned stochastic volatility. Our approach uses linear regression to reduce the dimension of the numerical optimization problem, yet it produces the same estimator as maximizing the likelihood. It improves the numerical behavior of estimation by eliminating parameters from the objective function that cause problems for conventional methods. We find that spanned models capture the cross-section of yields well but not volatility, while unspanned models fit volatility at the expense of fitting the cross-section.
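The dimension-reduction idea, replacing linear parameters by their regression solutions inside the likelihood before optimizing numerically, can be sketched on a toy nonlinear regression. The exponential-decay design below is hypothetical and merely stands in for the term-structure machinery.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# For each value of the nonlinear parameter theta, the linear coefficients
# and the error variance are concentrated out by OLS, leaving a 1-D objective.
rng = np.random.default_rng(10)
n = 300
t = np.linspace(0, 1, n)
X = lambda th: np.column_stack([np.ones(n), np.exp(-th * t)])
y = X(3.0) @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)  # true theta = 3

def concentrated_negloglik(th):
    Xt = X(th)
    beta = np.linalg.lstsq(Xt, y, rcond=None)[0]   # the regression step
    rss = np.sum((y - Xt @ beta) ** 2)
    return 0.5 * n * np.log(rss / n)               # concentrated Gaussian loglik

print(minimize_scalar(concentrated_negloglik, bounds=(0.1, 10.0),
                      method="bounded").x)         # close to 3.0
```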

12.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one-phase sampling, estimation for two-phase sampling, and estimation for two-stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two-stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
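For the simplest (chi-square distance) case, the calibrated weights have a closed form, sketched below with hypothetical benchmark totals; multi-step calibration on complex information would apply such a step repeatedly with different auxiliary blocks.

```python
import numpy as np

def linear_calibration(d, X, totals):
    """Linear (chi-square distance) calibration: adjust design weights d
    so that sum_i w_i x_i reproduces the known population totals."""
    lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - d @ X)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(6)
n = 400
X = np.column_stack([np.ones(n), rng.normal(2.0, 1.0, n)])  # intercept + auxiliary
d = np.full(n, 25.0)                     # design weights, e.g. N/n under SRS
totals = np.array([10000.0, 20500.0])    # known N and known auxiliary total
w = linear_calibration(d, X, totals)
print(w @ X)                             # reproduces the benchmarks exactly
```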

13.
An estimation procedure is presented for a class of threshold models for ordinal data. These models may include both fixed and random effects, with associated variance components, on an underlying scale. The residual error distribution on the underlying scale can be given greater flexibility by introducing additional shape parameters, e.g. a kurtosis parameter or parameters that model heterogeneous residual variances as a function of factors and covariates. The estimation procedure is an extension of an iterative re-weighted restricted maximum likelihood procedure originally developed for generalized linear mixed models. The procedure is illustrated with a practical problem involving damage to potato tubers and with data from animal breeding and medical research from the literature.

14.
This paper proposes a new approach to nonparametric stochastic frontier (SF) models based on local maximum likelihood techniques. The model encompasses an anchorage parametric model in a nonparametric way. First, we derive asymptotic properties of the estimator for the general case (local linear approximations). The results are then tailored to an SF model in which the convoluted error term (efficiency plus noise) is the sum of a half-normal and a normal random variable. The parametric anchorage model is a linear production function with a homoscedastic error term. The local approximation is linear for both the production function and the parameters of the error terms. The performance of our estimator is established in finite samples using simulated data sets as well as cross-sectional data on US commercial banks. The methods appear to be robust, numerically stable and particularly useful for investigating a production process and the derived efficiency scores.
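The anchorage likelihood in the normal/half-normal case is the classic Aigner-Lovell-Schmidt form. The sketch below fits the global parametric anchor; passing kernel weights through `weights` indicates where the local-likelihood step would enter. Data and settings are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sf_negloglik(params, y, X, weights=None):
    """Normal/half-normal stochastic frontier log-likelihood:
    eps = y - X beta = v - u, v ~ N(0, sv2), u ~ |N(0, su2)|.
    Optional kernel weights turn this into a local (anchored) likelihood."""
    beta = params[:-2]
    sv2, su2 = np.exp(params[-2]), np.exp(params[-1])
    sigma = np.sqrt(sv2 + su2)
    lam = np.sqrt(su2 / sv2)
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    w = np.ones_like(y) if weights is None else weights
    return -(w * ll).sum()

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.abs(rng.normal(0, 0.5, n)); v = rng.normal(0, 0.3, n)
y = X @ np.array([1.0, 0.7]) + v - u
fit = minimize(sf_negloglik, np.array([0.0, 0.0, -1.0, -1.0]), args=(y, X),
               method="Nelder-Mead", options={"maxiter": 2000})
print(fit.x[:2], np.exp(fit.x[2:]))   # frontier coefficients and variances
```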

15.
This paper considers estimation of censored panel-data models with individual-specific slope heterogeneity. The slope heterogeneity may be random (random slopes model) or related to covariates (correlated random slopes model). Maximum likelihood and censored least-absolute-deviations estimators are proposed for both models. The estimators are simple to implement and, in the case of maximum likelihood, lead to straightforward estimation of partial effects. The rescaled bootstrap suggested by Andrews (Econometrica 2000; 68: 399–405) is used to deal with the possibility of variance parameters being equal to zero. The methodology is applied to an empirical study of Dutch household portfolio choice, where the outcome variable (portfolio share in safe assets) has corner solutions at zero and one. As predicted by economic theory, there is strong evidence of correlated random slopes for the age profiles, indicating a heterogeneous age profile of portfolio adjustment that varies significantly with other household characteristics. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Assessing regional population compositions is an important task in many research fields. Small area estimation with generalized linear mixed models is a powerful tool for this purpose. However, the method has limitations in practice. When the data are subject to measurement errors, small area models produce inefficient or biased results, since they cannot account for data uncertainty. This is particularly problematic for composition prediction, since generalized linear mixed models often rely on approximate likelihood inference, and the resulting predictions are unreliable. We propose a robust multivariate Fay–Herriot model to solve these issues. It combines compositional data analysis with robust optimization theory. The nonlinear estimation of compositions is restated as a linear problem through isometric logratio transformations. Robust model parameter estimation is performed via penalized maximum likelihood. A robust best predictor is derived. Simulations are conducted to demonstrate the effectiveness of the approach. An application to alcohol consumption in Germany is provided.
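The linearizing device here is the isometric logratio (ilr) transform. A minimal sketch of the standard pivot-coordinate version follows; the example composition is made up.

```python
import numpy as np

def ilr(p):
    """Isometric logratio transform of a composition p (entries sum to 1),
    using pivot (sequential binary) coordinates: each coordinate contrasts
    one part against the geometric mean of the remaining parts."""
    p = np.asarray(p, dtype=float)
    D = p.size
    z = np.empty(D - 1)
    for i in range(D - 1):
        gm = np.exp(np.mean(np.log(p[i + 1:])))   # geometric mean of the tail
        z[i] = np.sqrt((D - i - 1) / (D - i)) * np.log(p[i] / gm)
    return z

print(ilr([0.2, 0.5, 0.3]))   # a 3-part composition maps to R^2
```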

17.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
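The likelihood-free step is easiest to see in a rejection-ABC sketch: draw parameters from the prior, simulate summary statistics, and keep the draws whose simulated summaries fall closest to the observed ones. The toy drift model below merely stands in for an agent-based simulator.

```python
import numpy as np

def abc_rejection(observed_stats, simulate, prior_draw,
                  n_sims=20000, quantile=0.01, rng=None):
    """Rejection ABC: retain the prior draws whose simulated summary
    statistics are nearest to the observed summaries."""
    rng = rng or np.random.default_rng()
    thetas = np.array([prior_draw(rng) for _ in range(n_sims)])
    stats = np.array([simulate(t, rng) for t in thetas])
    dist = np.linalg.norm(stats - observed_stats, axis=1)
    keep = dist <= np.quantile(dist, quantile)
    return thetas[keep]           # approximate posterior sample

# Toy "price discovery" style model: one parameter theta drives the drift
# of a random walk; the summaries are the mean and std of the increments.
def simulate(theta, rng, n=200):
    incr = theta + 0.5 * rng.normal(size=n)
    return np.array([incr.mean(), incr.std()])

rng = np.random.default_rng(8)
obs = simulate(0.3, rng)          # pretend this is the observed data
post = abc_rejection(obs, simulate, lambda r: r.uniform(-1, 1), rng=rng)
print(post.mean(), post.std())    # centered near 0.3
```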

18.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest for statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.

19.
A new class of forecasting models is proposed that extends the realized GARCH class of models through the inclusion of option prices to forecast the variance of asset returns. The VIX is used to approximate option prices, resulting in a set of cross-equation restrictions on the model's parameters. The full model is characterized by a nonlinear system of three equations containing asset returns, the realized variance and the VIX, with the parameters estimated by maximum likelihood. The forecasting properties of the new class of models, as well as those of a number of special cases, are investigated and applied to forecasting the daily S&P500 index realized variance using intra-day and daily data from September 2001 to November 2017. The forecasting results provide strong support for including the realized variance and the VIX to improve variance forecasts, with linear conditional variance models performing well for short-term one-day-ahead forecasts, whereas log-linear conditional variance models tend to perform better for intermediate five-day-ahead forecasts.
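A log-linear realized GARCH core can be written down compactly. The sketch below evaluates its joint Gaussian likelihood by filtering the conditional variance; the VIX measurement equation, with its cross-equation restrictions, would add a third block of the same shape. The specification follows the standard realized GARCH literature rather than this paper's exact system, and all parameter values are illustrative.

```python
import numpy as np

def rg_negloglik(params, r, x):
    """Joint negative log-likelihood (up to constants) of a log-linear
    realized GARCH(1,1):
      r_t = sqrt(h_t) z_t,  z_t ~ N(0,1)
      log h_t = omega + beta*log h_{t-1} + gamma*log x_{t-1}
      log x_t = xi + phi*log h_t + tau1*z_t + tau2*(z_t**2 - 1) + u_t.
    A VIX equation would add a third, similarly structured block."""
    omega, beta, gamma, xi, phi, tau1, tau2, log_su2 = params
    su2 = np.exp(log_su2)
    lh = np.empty(r.size)
    lh[0] = np.log(x[0])                  # crude initialization
    for t in range(1, r.size):
        lh[t] = omega + beta * lh[t - 1] + gamma * np.log(x[t - 1])
    z = r / np.exp(0.5 * lh)
    u = np.log(x) - xi - phi * lh - tau1 * z - tau2 * (z ** 2 - 1)
    ll = -0.5 * (lh + z ** 2) - 0.5 * (np.log(su2) + u ** 2 / su2)
    return -ll.sum()

# Simulate from the model and check the likelihood prefers the truth.
rng = np.random.default_rng(9)
n = 2000
true = np.array([-0.1, 0.7, 0.25, -0.1, 1.0, -0.05, 0.05, np.log(0.04)])
lh = np.empty(n); x = np.empty(n); r = np.empty(n)
lh[0] = -2.5
for t in range(n):
    z = rng.normal()
    r[t] = np.exp(0.5 * lh[t]) * z
    x[t] = np.exp(-0.1 + lh[t] - 0.05 * z + 0.05 * (z ** 2 - 1)
                  + rng.normal(0, 0.2))
    if t + 1 < n:
        lh[t + 1] = -0.1 + 0.7 * lh[t] + 0.25 * np.log(x[t])
print(rg_negloglik(true, r, x), rg_negloglik(true * 0.9, r, x))
# the true parameter vector should attain the lower value; full estimation
# would pass rg_negloglik to a numerical optimizer
```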

20.
Here we consider record data from the two-parameter bathtub-shaped distribution. First, we develop simplified forms for the single moments, variances and covariances of records. These distributional properties are quite useful in obtaining the best linear unbiased estimators of the location and scale parameters that can be included in the model. The estimation of the unknown shape parameters and the prediction of future unobserved records based on some observed ones are discussed. Frequentist and Bayesian analyses are adopted for the estimation and prediction problems. The likelihood method, a moment-based method, bootstrap methods and Bayesian sampling techniques are applied for the inference problems. Point predictors and credible intervals for future record values based on an informative set of records are developed. Monte Carlo simulations are performed to compare the methods, and one real data set is analyzed for illustrative purposes.
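For the likelihood part, the joint density of the observed upper records has a simple product form in terms of the hazard function. The sketch below assumes Chen's two-parameter bathtub distribution, F(x) = 1 - exp(lam*(1 - exp(x**beta))), simulates a sequence, extracts its records, and maximizes the exact record likelihood; all settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def upper_records(seq):
    """Extract the upper record values (successive maxima) of a sequence."""
    recs = [seq[0]]
    for v in seq[1:]:
        if v > recs[-1]:
            recs.append(v)
    return np.array(recs)

def record_negloglik(params, recs):
    """Exact record likelihood for Chen's bathtub distribution: the joint
    density of records r_1 < ... < r_m is f(r_m) * prod_{i<m} hazard(r_i),
    which simplifies to the expression below."""
    lam, beta = np.exp(params)                    # optimize on the log scale
    ll = (recs.size * (np.log(lam) + np.log(beta))
          + (beta - 1) * np.log(recs).sum()
          + (recs ** beta).sum()
          + lam * (1.0 - np.exp(recs[-1] ** beta)))
    return -ll

rng = np.random.default_rng(11)
u = rng.random(5000)
# inverse CDF with true lam = 0.5, beta = 0.5
draws = np.log(1.0 - np.log(1.0 - u) / 0.5) ** (1.0 / 0.5)
recs = upper_records(draws)
fit = minimize(record_negloglik, np.zeros(2), args=(recs,), method="Nelder-Mead")
print(recs.size, np.exp(fit.x))   # a handful of records; estimates of (lam, beta)
```

With only ~log(n) records available from n observations, the estimates are noisy, which is why the abstract's moment-based, bootstrap and Bayesian alternatives are of practical interest.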
