Similar Articles
 20 similar articles found (search time: 93 ms)
1.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit‐level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two‐way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log‐linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.
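The pseudo maximum likelihood idea mentioned above, in which each unit's log-likelihood contribution is multiplied by its survey weight, can be sketched for a logit model. This is a minimal illustration with simulated data and weights, not the paper's implementation:

```python
import numpy as np

def weighted_logit(X, y, w, iters=25):
    """Pseudo maximum likelihood for a logit model: each unit's score and
    information contribution is multiplied by its survey weight w_i.
    Solved by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))                   # weighted score
        H = (X * (w * p * (1 - p))[:, None]).T @ X   # weighted information
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
w = rng.uniform(0.5, 2.0, size=n)   # illustrative survey weights
beta_hat = weighted_logit(X, y, w)
```

With weights unrelated to the outcome, as here, the weighted estimator remains consistent for the same parameters as the unweighted one; the point of the weights is protection against informative sampling designs.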

2.
We introduce a new methodology to target direct transfers against poverty. Our method relies on estimators that focus on the poor. Using data from Tunisia, we estimate ‘focused’ transfer schemes that substantially improve anti‐poverty targeting performance. Post‐transfer poverty can be substantially reduced with the new estimation method. For example, moving from proxy‐means test transfer schemes based on OLS to focused transfer schemes yields a one‐third reduction in poverty severity, at the cost of only a few hours of computer work using methods available in popular statistical packages. Finally, the resulting levels of undercoverage of the poor are particularly low.

3.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
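The likelihood-free (ABC) rejection step described above can be sketched on a toy one-parameter simulator: draw parameters from the prior, simulate, and keep only draws whose simulated summary statistic lands within a tolerance of the observed one. The simulator, prior and tolerance below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=200, rng=rng):
    # stand-in "agent-based" simulator: mean of noisy outcomes around theta
    return rng.normal(theta, 1.0, size=n).mean()

obs = simulate(2.0)                             # pretend-observed summary
prior = rng.uniform(0.0, 5.0, size=20000)       # prior draws for theta
sims = np.array([simulate(t) for t in prior])
accepted = prior[np.abs(sims - obs) < 0.05]     # ABC rejection with tolerance 0.05
posterior_mean = accepted.mean()
```

Shrinking the tolerance trades acceptance rate against approximation error; real AB models replace `simulate` with a full model run, which is why efficient sampling matters.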

4.
By designing remuneration schemes based on a bonus rewarding specific firm‐level outcomes, the owners/shareholders of a firm can manipulate the behavior of their managers. In practice, different bonus anchors take center stage: some are profit‐based, others use sales as the key yardstick and still different ones focus on relative performance vis‐à‐vis a peer group. In this paper, we focus on the impact of remuneration schemes on firm‐level profitability. The profit effect is investigated for (all possible combinations of) four bonus systems using delegation games. In the context of a linear Cournot model for two or three firms, we model a two‐ or three‐stage decision structure where, in the first stage (or first two stages), an owner decides on the bonus system for his manager and where, in the final stage, the manager takes the daily output decision for her firm. It appears that the bonus system based on relative (profits) performance is superior throughout. Copyright © 2008 John Wiley & Sons, Ltd.
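For orientation, the symmetric Cournot duopoly benchmark underlying these delegation games (pure profit-based bonuses, no strategic delegation effect) can be computed by best-response iteration; the demand and cost numbers below are illustrative:

```python
# Linear Cournot duopoly: inverse demand p = a - b*(q1 + q2), marginal cost c.
# With purely profit-based bonuses, managers play standard Cournot and
# best-response iteration converges to q* = (a - c) / (3b) for each firm.
a, b, c = 100.0, 1.0, 10.0
q1 = q2 = 10.0
for _ in range(200):                  # alternate best responses until convergence
    q1 = (a - c - b * q2) / (2 * b)
    q2 = (a - c - b * q1) / (2 * b)
profit_each = (a - b * (q1 + q2) - c) * q1
```

Sales-based or relative-performance bonuses shift each manager's effective objective and hence the best-response functions, which is exactly the channel the paper's delegation games exploit.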

5.
This paper focuses on the estimation of a finite dimensional parameter in a linear model where the number of instruments is very large or infinite. In order to improve the small sample properties of standard instrumental variable (IV) estimators, we propose three modified IV estimators based on three different ways of inverting the covariance matrix of the instruments. These inverses involve a regularization or smoothing parameter. It should be stressed that no restriction on the number of instruments is needed and that all the instruments are used in the estimation. We show that the three estimators are asymptotically normal and attain the semiparametric efficiency bound. Higher-order analysis of the MSE reveals that the bias of the modified estimators does not depend on the number of instruments. Finally, we suggest a data-driven method for selecting the regularization parameter. Interestingly, our regularization techniques lead to a consistent nonparametric estimation of the optimal instrument.
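The regularised inversion of the instrument covariance can be sketched with a Tikhonov (ridge-type) first stage, one of several possible regularisation schemes; all constants below are illustrative and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(8)
n, n_inst = 400, 150                          # many instruments relative to n
Z = rng.normal(size=(n, n_inst))
u = rng.normal(size=n)                        # structural error
xe = Z[:, :5] @ np.ones(5) + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 1.5 * xe + u

# Regularised first stage: replace (Z'Z)^{-1} with (Z'Z + alpha*I)^{-1},
# so all 150 instruments are used without an exact inversion of Z'Z.
alpha = 100.0                                 # smoothing parameter (illustrative)
W = np.linalg.solve(Z.T @ Z + alpha * np.eye(n_inst), Z.T @ xe)
x_hat = Z @ W                                 # regularised fitted values
beta_iv = float((x_hat @ y) / (x_hat @ xe))   # IV estimate of the slope
```

The data-driven choice of `alpha` is the delicate step; the paper proposes a method for it, whereas here it is simply fixed.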

6.
We review some results on the analysis of longitudinal data or, more generally, of repeated measures via linear mixed models starting with some exploratory statistical tools that may be employed to specify a tentative model. We follow with a summary of inferential procedures under a Gaussian set‐up and then discuss different diagnostic methods focusing on residual analysis but also addressing global and local influence. Based on the interpretation of diagnostic plots related to three types of residuals (marginal, conditional and predicted random effects) as well as on other tools, we proceed to identify remedial measures for possible violations of the proposed model assumptions, ranging from fine‐tuning of the model to the use of elliptically symmetric or skew‐elliptical linear mixed models as well as of robust estimation methods. We specify many results available in the literature in a unified notation and highlight those with greater practical appeal. In each case, we discuss the availability of model diagnostics as well as of software and give general guidelines for model selection. We conclude with analyses of three practical examples and suggest further directions for research.

7.
In this paper we develop new Markov chain Monte Carlo schemes for the estimation of Bayesian models. One key feature of our method, which we call the tailored randomized block Metropolis–Hastings (TaRB-MH) method, is the random clustering of the parameters at every iteration into an arbitrary number of blocks. Then each block is sequentially updated through an M–H step. Another feature is that the proposal density for each block is tailored to the location and curvature of the target density based on the output of simulated annealing, following, among others, Chib and Ergashev (in press). We also provide an extended version of our method for sampling multi-modal distributions in which at a pre-specified mode jumping iteration, a single-block proposal is generated from one of the modal regions using a mixture proposal density, and this proposal is then accepted according to an M–H probability of move. At the non-mode jumping iterations, the draws are obtained by applying the TaRB-MH algorithm. We also discuss how the approaches of Chib (1995) and Chib and Jeliazkov (2001) can be adapted to these sampling schemes for estimating the model marginal likelihood. The methods are illustrated in several problems. In the DSGE model of Smets and Wouters (2007), for example, which involves a 36-dimensional posterior distribution, we show that the autocorrelations of the sampled draws from the TaRB-MH algorithm decay to zero within 30–40 lags for most parameters. In contrast, the sampled draws from the random-walk M–H method, the algorithm that has been used to date in the context of DSGE models, exhibit significant autocorrelations even at lags 2500 and beyond. Additionally, the RW-MH does not explore the same high density regions of the posterior distribution as the TaRB-MH algorithm. Another example concerns the model of An and Schorfheide (2007) where the posterior distribution is multi-modal. While the RW-MH algorithm is unable to jump from the low modal region to the high modal region, and vice-versa, we show that the extended TaRB-MH method explores the posterior distribution globally in an efficient manner.
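A stripped-down version of the random-blocking idea, with plain Gaussian random-walk proposals in place of the tailored, annealing-based proposals of the paper, might look like this (target, dimensions and tuning are illustrative):

```python
import numpy as np

# Randomized-block Metropolis-Hastings sketch: at every sweep the parameter
# indices are randomly partitioned into blocks, and each block is updated
# with its own M-H step against a standard normal target.
rng = np.random.default_rng(2)
d = 6

def target_logpdf(x):
    return -0.5 * np.sum(x ** 2)      # standard multivariate normal target

x = np.zeros(d)
draws = []
for _ in range(4000):
    perm = rng.permutation(d)
    cuts = np.sort(rng.choice(np.arange(1, d), size=2, replace=False))
    blocks = np.split(perm, cuts)     # random partition into 3 blocks
    for block in blocks:
        prop = x.copy()
        prop[block] += rng.normal(scale=0.8, size=len(block))
        if np.log(rng.random()) < target_logpdf(prop) - target_logpdf(x):
            x = prop
    draws.append(x.copy())
draws = np.array(draws[500:])         # discard burn-in
```

The paper's contribution is precisely what this sketch omits: tailoring each block's proposal to the local location and curvature of the target, which is what drives the fast decay of the autocorrelations.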

8.
Differences in the size of spatial units and in other economic characteristics often give rise to spatial heteroskedasticity. This paper presents three estimation methods for the heteroskedasticity problem in the general spatial model. The first parameterises the form of the heteroskedasticity to overcome the shortage of degrees of freedom, and is implemented via ML estimation. When the form of the heteroskedasticity is unknown, an iterated GMM estimator based on 2SLS and a more direct MCMC sampling approach are employed instead, with the MCMC method proving particularly elegant. Monte Carlo simulations show that, when the form of the heteroskedasticity is given, ML estimation through this parameterisation still performs well; when the form is unknown, the other two methods converge to the ML results as the sample size grows.

9.
This paper presents estimation methods and asymptotic theory for the analysis of a nonparametrically specified conditional quantile process. Two estimators based on local linear regressions are proposed. The first estimator applies simple inequality constraints while the second uses rearrangement to maintain quantile monotonicity. The bandwidth parameter is allowed to vary across quantiles to adapt to data sparsity. For inference, the paper first establishes a uniform Bahadur representation and then shows that the two estimators converge weakly to the same limiting Gaussian process. As an empirical illustration, the paper considers a dataset from Project STAR and delivers two new findings.
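The rearrangement device used by the second estimator can be shown in miniature: raw quantile estimates at a grid of levels may cross, and sorting them across the quantile dimension restores a valid monotone quantile curve. The numbers below are toy values:

```python
import numpy as np

taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])   # quantile levels
raw = np.array([1.2, 1.1, 1.6, 1.5, 2.0])      # crossing at tau=0.25 and 0.75
rearranged = np.sort(raw)                      # monotone rearrangement
```

Rearrangement keeps the same set of fitted values (only their order in tau changes), which is why it never increases the distance to the true, monotone quantile function.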

10.
The objective of this study is to extend the triple sampling methodologies to cover problems that arise in linear models. In particular, we aim to study the performance of triple sampling procedures, while controlling the risk of estimating some linear functions of means under the normality assumption. Both point and confidence interval estimation techniques are considered. We point out that the goal of controlling the estimation risk is reached in all cases. It is also shown analytically that triple sampling procedures have the same features as one-by-one sequential sampling schemes.

11.
An efficient variant of the product and ratio estimators   Cited by: 1 (self-citations: 0; citations by others: 1)
Abstract  This article presents a variant of the usual ratio and product methods of estimation, with the intention to improve their efficiency. The first-order large-sample approximations to the bias and the mean square error of the proposed estimator are obtained and compared with those of the well-known methods (simple expansion, ratio, product, difference and linear regression methods). For a special case, the accuracy of the first-order approximation (terms up to order n⁻¹) is examined by including terms up to order n⁻². With a suitable choice of a design parameter, the proposed estimator turns out to be superior to the first three methods mentioned. The relation to the other two methods is examined; if the design parameter can be chosen near its optimal value, the proposed method is approximately as efficient as the linear regression estimator. Finally some extensions are indicated.
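As a reference point, the classic ratio estimator (one of the methods the variant is compared against) can be sketched as follows; the power-type design parameter shown is a common illustrative way to nest ratio and product estimation, not necessarily the paper's parameterisation:

```python
import numpy as np

# Ratio estimation of a population mean using an auxiliary variable x with
# known population mean Xbar; alpha = 1 gives the ratio estimator and
# alpha = -1 the product estimator (illustrative nesting).
rng = np.random.default_rng(3)
N, n = 10_000, 400
x = rng.uniform(10, 20, size=N)
y = 2.0 * x + rng.normal(0, 2, size=N)        # y positively correlated with x
Xbar = x.mean()                               # known auxiliary population mean
idx = rng.choice(N, size=n, replace=False)    # simple random sample
ybar, xbar = y[idx].mean(), x[idx].mean()

def estimator(alpha):
    return ybar * (Xbar / xbar) ** alpha

ratio_est = estimator(1.0)
```

With y and x positively correlated, as here, the ratio form exploits the known `Xbar` to offset the sampling error in `xbar`, which is the efficiency gain the compared methods are all chasing.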

12.
In this study, we consider Bayesian methods for the estimation of a sample selection model with spatially correlated disturbance terms. We design a set of Markov chain Monte Carlo algorithms based on the method of data augmentation. The natural parameterization for the covariance structure of our model involves an unidentified parameter that complicates posterior analysis. The unidentified parameter – the variance of the disturbance term in the selection equation – is handled in different ways in these algorithms to achieve identification for other parameters. The Bayesian estimator based on these algorithms can account for the selection bias and the full covariance structure implied by the spatial correlation. We illustrate the implementation of these algorithms through a simulation study and an empirical application.

13.
Functional data analysis is a field of growing importance in Statistics. In particular, the functional linear model with scalar response is surely the model that has attracted the most attention in both theoretical and applied research. Two of the most important methodologies used to estimate the parameters of the functional linear model with scalar response are functional principal component regression and functional partial least‐squares regression. We provide an overview of estimation methods based on these methodologies and discuss their advantages and disadvantages. We emphasise that the role played by the functional principal components and by the functional partial least‐squares components used in estimation is very important for estimating the functional slope of the model. A functional version of the best subset selection strategy, common in multiple linear regression, is also analysed. Finally, we present an extensive comparative simulation study to compare the performance of all the considered methodologies that may help practitioners in the use of the functional linear model with scalar response.
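A miniature functional principal component regression, the first of the two methodologies reviewed, might look like this: curves observed on a grid are reduced to leading PC scores, the scalar response is regressed on those scores, and the slope function is rebuilt from the components. Grid, sample size and components below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n, T = 300, 50
t = np.linspace(0, 1, T)
# Curves built from two smooth components plus noise
scores_true = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
phi = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
Xcurves = scores_true @ phi + 0.1 * rng.normal(size=(n, T))
beta_fun = 1.5 * np.sin(2 * np.pi * t)        # true slope function
y = Xcurves @ beta_fun / T + 0.1 * rng.normal(size=n)

# FPCR: project centred curves onto the leading principal components,
# regress y on the PC scores, and map the coefficients back to a function.
Xc = Xcurves - Xcurves.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T
coef = np.linalg.lstsq(np.column_stack([np.ones(n), scores]), y, rcond=None)[0]
beta_hat = Vt[:k].T @ coef[1:] * T            # estimated slope on the grid
```

The choice of `k` is the crux the review discusses: the estimated slope is confined to the span of the retained components, so components that matter for the regression may not be the ones with the largest variance.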

14.
Liu Fang, Huang Zhong. 《基建优化》 (Optimization of Capital Construction), 2003, 24(4): 26-29
Starting from the concept of fixed-asset investment estimation, this paper introduces the composition of fixed-asset investment and the stage divisions of estimation in China and abroad. It focuses on foreign estimation techniques such as the index method, the coefficient method, the unit estimation method, the Lange-coefficient method and the capital turnover-rate method, and briefly introduces the latest foreign approaches: regression models based on bid statistics and fuzzy-mathematics methods.
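The Lange-coefficient method mentioned above amounts to scaling delivered equipment cost by an empirical factor; the numbers below are purely illustrative, not values from the paper:

```python
# Lange-coefficient estimate of total plant investment:
#   total investment ≈ delivered equipment cost × K_L,
# where K_L is an empirical coefficient for the plant type.
equipment_cost = 1_200.0   # delivered equipment cost (illustrative units)
K_L = 3.1                  # hypothetical Lange coefficient for this plant type
total_investment = equipment_cost * K_L
```

The method's accuracy rests entirely on how well K_L, taken from comparable completed projects, matches the plant being estimated.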

15.
Although statistical process control (SPC) techniques have focused mostly on detecting step (constant) mean shifts, drift, a time-varying change, frequently occurs in industrial applications. In this research, five control schemes for monitoring drift are compared: the exponentially weighted moving average (EWMA) chart and the cumulative sum (CUSUM) charts, which the literature recommends for detecting drift; the generalized EWMA (GEWMA) chart proposed by Han and Tsung (2004); and two generalized likelihood ratio based schemes, the GLR-S and GLR-L charts, which assume step and linear trend shifts respectively. Both asymptotic estimates and numerical simulations of the average run length (ARL) are presented. We show that when the in-control (IC) ARL is large (goes to infinity), the GLR-L chart has the best overall performance among the considered charts in detecting a linear trend shift. From the viewpoint of practical IC ARLs, the simulation results show that, besides the GLR-L chart, the GEWMA chart offers good balanced protection against drifts of different sizes. Some computational issues are also addressed.
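The EWMA recursion that underlies several of the compared charts can be sketched against a simulated drift; the smoothing constant and control-limit multiplier below are conventional textbook choices, not the paper's:

```python
import numpy as np

# EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, alarm when |z_t| exceeds
# L * sigma_z, with sigma_z the asymptotic in-control sd of z.
rng = np.random.default_rng(4)
lam, L = 0.2, 3.0
sigma_z = np.sqrt(lam / (2 - lam))
x = rng.normal(size=500)              # in-control N(0,1) observations
x[100:] += 0.02 * np.arange(400)      # linear drift starting at t=100
z, alarm = 0.0, None
for t, xt in enumerate(x):
    z = lam * xt + (1 - lam) * z
    if abs(z) > L * sigma_z and alarm is None:
        alarm = t
```

Because the drift grows slowly, the EWMA statistic tracks it with a lag; the GLR-L chart in the abstract targets this linear-trend alternative directly, which is why it dominates for large IC ARLs.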

16.
Here we consider record data from the two-parameter bathtub-shaped distribution. First, we develop simplified forms for the single moments, variances and covariances of records. These distributional properties are quite useful in obtaining the best linear unbiased estimators of the location and scale parameters which can be included in the model. The estimation of the unknown shape parameters and the prediction of future unobserved records based on observed ones are discussed. Frequentist and Bayesian analyses are adopted for the estimation and prediction problems. The likelihood method, a moment-based method, bootstrap methods and Bayesian sampling techniques are applied to the inference problems. Point predictors and credible intervals for future record values based on an informative set of records can be developed. Monte Carlo simulations are performed to compare the developed methods, and one real data set is analysed for illustrative purposes.

17.
In this study, we suggest pretest and shrinkage methods based on generalised ridge regression estimation that are suitable for both multicollinear and high-dimensional problems. We review and develop theoretical results for some of the shrinkage estimators. The performance of the shrinkage estimators relative to some penalty methods is compared and assessed by both simulation and real-data analysis. We show that the suggested methods are good competitors to regularisation techniques in terms of mean squared error of estimation and prediction error. Earlier work gave a thorough comparison of pretest and shrinkage estimators based on the maximum likelihood method with the penalty methods; in this paper, we extend that comparison using the least squares method for generalised ridge regression.
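Generalised ridge regression, the base estimator of the study, replaces the single ridge penalty with a per-coefficient penalty matrix K; a minimal sketch with an arbitrarily chosen K on simulated data is:

```python
import numpy as np

# Generalised ridge: beta_hat = (X'X + K)^{-1} X'y, where K is a diagonal
# penalty matrix; K = k*I recovers ordinary ridge regression.
rng = np.random.default_rng(5)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0, 0.0, 2.0])
y = X @ beta + rng.normal(size=n)
K = np.diag([0.1, 0.1, 10.0, 10.0, 0.1])   # heavier shrinkage on weak signals
beta_gr = np.linalg.solve(X.T @ X + K, X.T @ y)
```

Pretest and shrinkage estimators built on this base combine `beta_gr` with a restricted estimator according to a preliminary test or a shrinkage factor; the choice of the per-coefficient penalties is what makes the method usable under multicollinearity.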

18.
We propose new forecast combination schemes for predicting turning points of business cycles. The proposed combination schemes are based on the forecasting performances of a given set of models with the aim to provide better turning point predictions. In particular, we consider predictions generated by autoregressive (AR) and Markov-switching AR models, which are commonly used for business cycle analysis. In order to account for parameter uncertainty we consider a Bayesian approach for both estimation and prediction and compare, in terms of statistical accuracy, the individual models and the combined turning point predictions for the United States and the Euro area business cycles.
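One simple performance-based combination rule in the spirit of the abstract weights each model's turning-point probability by its inverse historical Brier score; the rule and numbers are illustrative, not the paper's Bayesian scheme:

```python
import numpy as np

# Combine three models' recession-probability forecasts, giving more weight
# to models with better (smaller) historical Brier scores.
past_brier = np.array([0.10, 0.25, 0.18])       # historical Brier scores
weights = (1 / past_brier) / (1 / past_brier).sum()
forecasts = np.array([0.8, 0.4, 0.6])           # current turning-point probabilities
combined = float(weights @ forecasts)
```

The combined probability leans toward the historically best model (here the first one) while still pooling information from the others.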

19.
We study the generalized bootstrap technique under general sampling designs. We focus mainly on bootstrap variance estimation but we also investigate the empirical properties of bootstrap confidence intervals obtained using the percentile method. Generalized bootstrap consists of randomly generating bootstrap weights so that the first two (or more) design moments of the sampling error are tracked by the corresponding bootstrap moments. Most bootstrap methods in the literature can be viewed as special cases. We discuss issues such as the choice of the distribution used to generate bootstrap weights, the choice of the number of bootstrap replicates, and the potential occurrence of negative bootstrap weights. We first describe the generalized bootstrap for the linear Horvitz‐Thompson estimator and then consider non‐linear estimators such as those defined through estimating equations. We also develop two ways of bootstrapping the generalized regression estimator of a population total. We study in greater depth the case of Poisson sampling, which is often used to select samples in Price Index surveys conducted by national statistical agencies around the world. For Poisson sampling, we consider a pseudo‐population approach and show that the resulting bootstrap weights capture the first three design moments of the sampling error. A simulation study and an example with real survey data are used to illustrate the theory.
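For Poisson sampling, the generalised bootstrap described above can be sketched by perturbing the Horvitz-Thompson design weights with random adjustments whose mean and variance match the first two design moments; the population and adjustment distribution below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 5000
y = rng.gamma(2.0, 10.0, size=N)
pi = np.clip(0.02 * y / y.mean(), 0.005, 1.0)   # size-proportional inclusion probs
sample = rng.random(N) < pi                      # Poisson sampling
ys, pis = y[sample], pi[sample]
d = 1 / pis                                      # design weights
ht = float(d @ ys)                               # Horvitz-Thompson total

# Bootstrap weight adjustments a_i with E[a_i]=1, Var[a_i]=1-pi_i: the
# replicate variance then matches the Poisson HT variance estimator.
# Gaussian adjustments can be negative, one of the issues the paper discusses.
B = 2000
reps = np.empty(B)
for b in range(B):
    a = 1 + np.sqrt(1 - pis) * rng.standard_normal(len(ys))
    reps[b] = (a * d) @ ys
boot_var = reps.var()
v_ht = float(np.sum((1 - pis) * (ys / pis) ** 2))  # analytic HT variance estimator
```

Other adjustment distributions with the same first two moments (e.g. shifted lognormal) give the same variance target while avoiding negative weights.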

20.
This paper develops formulae to compute the Fisher information matrix for the regression parameters of generalized linear models with Gaussian random effects. The Fisher information matrix relies on the estimation of the response variance under the model assumptions. We propose two approaches to estimate the response variance: the first is based on an analytic formula (or a Taylor expansion for cases where we cannot obtain the closed form), and the second is an empirical approximation using the model estimates via the expectation–maximization process. Further, simulations under several response distributions and a real data application involving a factorial experiment are presented and discussed. In terms of standard errors and coverage probabilities for model parameters, the proposed methods turn out to behave more reliably than does the ‘disparity rule’ or direct extraction of results from the generalized linear model fitted in the last expectation–maximization iteration.
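The building block behind such formulae, the Fisher information I(β) = X′WX of an ordinary logistic GLM with W = diag(μᵢ(1 − μᵢ)), can be computed directly; the data are simulated, and the paper's extension to Gaussian random effects is not shown here:

```python
import numpy as np

# Fisher information and asymptotic standard errors for a logistic GLM.
rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.2, 0.8])
mu = 1 / (1 + np.exp(-X @ beta))                 # fitted response means
W = mu * (1 - mu)                                # response variances under the model
info = X.T @ (X * W[:, None])                    # I(beta) = X' W X
se = np.sqrt(np.diag(np.linalg.inv(info)))       # asymptotic standard errors
```

With random effects present, W depends on the marginal response variance, which is exactly the quantity the paper proposes to estimate analytically or via the EM output.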


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号