Similar Articles
20 similar articles found (search time: 109 ms)
1.
In the present investigation, we propose a new method to calibrate the estimator of a general parameter of interest in survey sampling. We demonstrate that the linear regression estimator due to Hansen et al. (Sample Survey Methods and Theory. Wiley, NY, 1953) is a special case of it. We reconfirm that the sum of the calibrated weights has to be set equal to the sum of the design weights within a given sample, as shown in Singh (Advanced sampling theory with applications: How Michael ‘selected’ Amy, Vol. 1 and 2. Kluwer, The Netherlands, pp 1–1247, 2003; Proceedings of the American Statistical Association, Survey Methods Section [CD-ROM], Toronto, Canada: American Statistical Association, pp 4382–4389, 2004; Metrika:1–18, 2006a; presented at INTERFACE 2006, Pasadena, CA, USA, 2006b) and Stearns and Singh (presented at the Joint Statistical Meetings, MN, USA (available on CD), 2005; Comput Stat Data Anal 52:4253–4271, 2008). Thus Sir R. A. Fisher’s brilliant idea of keeping the sum of observed frequencies equal to the sum of expected frequencies leads to an “honest balance” when adjusting design weights in survey sampling. The major benefit of the proposed estimator is that it always works, unlike the pseudo empirical likelihood estimators listed in Owen (Empirical Likelihood. Chapman & Hall, London, 2001), Chen and Sitter (Stat Sin 9:385–406, 1999) and Wu (Surv Methodol 31(2):239–243, 2005). The main endeavor of this paper is to bring a change to the existing calibration technology, which is based only on positive distance functions, by introducing a displacement function that has the flexibility of taking a positive, negative, or zero value. Finally, the proposed technology is compared with its competitors under several kinds of linear and non-linear non-parametric models using an extensive simulation study. A couple of open questions are raised.
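As a point of reference for the abstract above, here is a minimal sketch of the classical chi-square-distance calibration that reproduces a regression-type estimator, not the authors' displacement-function method; the toy data, the function name, and the assumed known population total are all illustrative.

```python
import numpy as np

def chi_square_calibrate(d, x, x_total):
    """Chi-square-distance calibration: minimise sum((w_i - d_i)^2 / d_i)
    subject to sum(w_i * x_i) = x_total. The closed-form solution is
    w_i = d_i * (1 + lam * x_i) with a single Lagrange multiplier lam."""
    lam = (x_total - np.sum(d * x)) / np.sum(d * x ** 2)
    return d * (1.0 + lam * x)

rng = np.random.default_rng(0)
n, N = 50, 1000
x = rng.uniform(1.0, 10.0, n)            # auxiliary variable observed on the sample
y = 2.0 * x + rng.normal(0.0, 1.0, n)    # study variable
d = np.full(n, N / n)                    # design weights under simple random sampling
x_total = N * 5.5                        # known population total of x (assumed value)

w = chi_square_calibrate(d, x, x_total)
calibrated_estimate = np.sum(w * y)      # regression-type estimate of the y-total
```

The calibration constraint holds exactly by construction, which is the "balance" idea the abstract attributes to Fisher: the weighted sample total of the auxiliary variable matches its known population total.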

2.
This paper considers three ratio estimators of the population mean that use the known correlation coefficient between the study and auxiliary variables in simple random sampling when some sample observations are missing. The suggested estimators are compared with those of Singh and Horn (Metrika 51:267–276, 2000), Singh and Deo (Stat Pap 44:555–579, 2003) and Kadilar and Cingi (Commun Stat Theory Methods 37:2226–2236, 2008), as well as with other imputation estimators based on the mean or a ratio. The suggested estimators are found to be approximately unbiased for the population mean, and they perform well when compared with the other estimators considered in this study.
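To fix ideas, a hedged sketch of the kind of ratio imputation such comparisons involve (the specific estimators in the cited papers differ); `ratio_impute_mean` is an illustrative name, not one from the paper.

```python
import numpy as np

def ratio_impute_mean(y, x, observed):
    """Ratio imputation: each missing y is replaced by b * x, where
    b = mean(observed y) / mean(corresponding x); returns the sample
    mean of the completed data."""
    b = y[observed].mean() / x[observed].mean()
    y_filled = np.where(observed, y, b * x)
    return y_filled.mean()

rng = np.random.default_rng(1)
n = 40
x = rng.uniform(1.0, 5.0, n)              # auxiliary variable, fully observed
y = 3.0 * x + rng.normal(0.0, 0.2, n)     # study variable
observed = rng.random(n) > 0.25           # roughly a quarter of y-values missing
mean_hat = ratio_impute_mean(y, x, observed)
```

When y is exactly proportional to x, the imputed mean recovers the complete-data mean exactly, which is the intuition behind the near-unbiasedness claims in the abstract.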

3.
Amitava Saha, Metrika (2011) 73(2):139–149
Eichhorn and Hayre (J Stat Plan Inference 7:307–316, 1983) introduced the scrambled response technique to gather information on sensitive quantitative variables. Singh and Joarder (Metron 15:151–157, 1997), Gupta et al. (J Stat Plan Inference 100:239–247, 2002) and Bar-Lev et al. (Metrika 60:255–260, 2004) permitted respondents either to report their true values of the sensitive quantitative variable or a scrambled response, and developed the optional randomized response (ORR) technique based on simple random sampling with replacement (SRSWR). While developing the ORR procedure, these authors assumed that the probability of disclosing the true response rather than the randomized response (RR) is the same for all individuals in the population. This is not a very realistic assumption, as in practical survey situations the probability of reporting the true value generally varies from unit to unit. Moreover, if one generalizes the ORR method of these authors by relaxing the ‘constant probability’ assumption, the variance of an unbiased estimator of the population total or mean cannot be estimated, as it involves an unknown parameter, the probability of revealing the true response. Here we propose a modified ORR procedure for stratified unequal probability sampling that relaxes the assumption of a constant probability of providing the true response. A numerical exercise demonstrates that our procedure produces a better estimator of a population total than the method suggested by the earlier authors.

4.
Let X1, X2, ..., Xn be a random sample from a normal distribution with unknown mean μ and known variance σ2. In many practical situations, μ is known a priori to be restricted to a bounded interval, say [−m, m] for some m > 0. The sample mean X̄ then becomes an inadmissible estimator of μ; it is also not minimax with respect to the squared error loss function. Minimax and other estimators for this problem have been studied by Casella and Strawderman (Ann Stat 9:870–878, 1981), Bickel (Ann Stat 9:1301–1309, 1981) and Gatsonis et al. (Stat Prob Lett 6:21–30, 1987), among others. In this paper, we obtain some new estimators for μ. The case when the variance σ2 is unknown is also studied, and various estimators for μ are proposed. The risk performance of all estimators is compared numerically for both the known-σ2 and unknown-σ2 cases.
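The inadmissibility of the unrestricted sample mean is easy to see numerically: projecting X̄ onto [−m, m] can only shrink the squared error when μ lies in the interval. The sketch below is a quick simulation under assumed toy values, not one of the new estimators proposed in the paper.

```python
import numpy as np

def risks(mu, m, sigma, n, reps, rng):
    """Monte-Carlo squared-error risks of the plain sample mean and of the
    mean clipped to [-m, m], using common random draws for a fair pairing."""
    xbar = rng.normal(mu, sigma / np.sqrt(n), size=reps)  # sampling distribution of the mean
    r_plain = np.mean((xbar - mu) ** 2)
    r_clip = np.mean((np.clip(xbar, -m, m) - mu) ** 2)
    return r_plain, r_clip

rng = np.random.default_rng(2)
m, sigma, n = 1.0, 1.0, 5
for mu in (0.0, 0.5, 1.0):                      # true mean inside [-m, m]
    r_plain, r_clip = risks(mu, m, sigma, n, 100_000, rng)
    # clipping dominates pointwise: |clip(x) - mu| <= |x - mu| for mu in [-m, m]
    assert r_clip < r_plain
```

Because the same draws are used for both estimators, the dominance holds sample-by-sample, so the comparison is essentially deterministic once any draw falls outside the interval.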

5.
Durbin (Biometrika 48:41–55, 1961) proposed a method called random substitution, by which a composite goodness-of-fit problem can be reduced to a simple one. In this paper we provide a method for finding the p-value of any test statistic for a composite goodness-of-fit problem, based on simulating a large number of conditional samples using an analog of Durbin’s proposal in a reverse-type application. We analyze a Bayesian chi-square test proposed by Johnson (Ann Stat 32:2361–2384, 2004), which relies on a single randomization, and relate it to Durbin’s original method. We also review a related proposal for conditional Monte Carlo simulation in Lindqvist and Taraldsen (Biometrika 92:451–464, 2005) and compare it with our procedure. We illustrate our method on a non-group example introduced in Lindqvist and Taraldsen (Biometrika 90:489–490, 2003).
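The general conditional Monte-Carlo recipe the abstract refers to can be sketched for one concrete composite family: testing exponentiality conditional on the sufficient statistic. Given the sample total, a normalised exponential sample is distributed as flat-Dirichlet spacings, so conditional samples can be drawn without knowing the rate. This is an illustration of the generic recipe, not Durbin's random-substitution construction itself; the test statistic is an arbitrary choice.

```python
import numpy as np

def conditional_pvalue(x, stat, B=2000, rng=None):
    """Monte-Carlo p-value for a composite test of exponentiality.
    Conditioning on sum(x) removes the unknown rate: given the total,
    the normalised sample is a symmetric Dirichlet, so every conditional
    sample shares the observed total by construction."""
    rng = np.random.default_rng() if rng is None else rng
    n, total = len(x), float(np.sum(x))
    t_obs = stat(x)
    exceed = 0
    for _ in range(B):
        e = rng.exponential(size=n)
        x_sim = total * e / e.sum()      # conditional draw given the same total
        exceed += stat(x_sim) >= t_obs
    return (1 + exceed) / (B + 1)        # add-one Monte-Carlo p-value

# Statistic: distance of the coefficient of variation from 1, its exponential value
stat = lambda x: abs(np.std(x) / np.mean(x) - 1.0)

rng = np.random.default_rng(3)
p_exp = conditional_pvalue(rng.exponential(2.0, size=50), stat, rng=rng)    # null holds
p_unif = conditional_pvalue(rng.uniform(0.9, 1.1, size=50), stat, rng=rng)  # clearly non-exponential
```

For the genuinely exponential sample the p-value is non-extreme, while the tightly concentrated sample, whose coefficient of variation is far below 1, is rejected decisively.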

6.
Warner’s (J Am Stat Assoc 60:63–69, 1965) pioneering ‘randomized response’ (RR) technique (RRT), useful for unbiasedly estimating the proportion of people bearing a sensitive characteristic, is based exclusively on simple random sampling (SRS) with replacement (WR). Numerous developments that follow from it also use SRSWR and employ in estimation the mean of the RRs yielded by each person every time he/she is drawn into the sample. We examine how the accuracy of estimation changes when we instead use the mean of the RRs from each distinct person sampled, and alternatively when the Horvitz and Thompson (HT, J Am Stat Assoc 47:663–685, 1952) method is employed for an SRSWR eliciting only a single RR from each. Arnab’s (1999) approach of using repeated RRs from each distinct person sampled is not taken up here, to avoid algebraic complexity. Pathak’s (Sankhyā 24:287–302, 1962) results for direct response (DR) from SRSWR are modified to use such RRs. Through simulations we present the relative accuracy of estimation for the above three alternative methods.
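Warner's device itself is easy to simulate. The sketch below uses iid responses (ignoring the SRSWR repeat-draw refinements that are the paper's actual subject) just to show the moment-based unbiased estimator; all parameter values are illustrative.

```python
import numpy as np

def warner_estimate(yes_fraction, p):
    """Warner (1965) estimator of the sensitive proportion pi.
    Each respondent answers the direct question with probability p and its
    complement with probability 1 - p, so P(yes) = p*pi + (1-p)*(1-pi);
    inverting this gives an unbiased estimator (requires p != 1/2)."""
    return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(4)
pi_true, p, n = 0.30, 0.70, 100_000
member = rng.random(n) < pi_true          # latent sensitive status, never observed
direct = rng.random(n) < p                # which question the randomizer selects
yes = np.where(direct, member, ~member)   # 'yes' iff the answer matches the question asked
pi_hat = warner_estimate(yes.mean(), p)   # approximately 0.30
```

No individual response reveals a respondent's status, yet the aggregate yes-fraction identifies the population proportion, which is the core of the RRT idea.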

7.
Hoadley (Ann Math Stat 42:1977–1991, 1971) studied the weak law of large numbers for independent and non-identically distributed random variables. Using that result along with the missing information principle, we establish the consistency and asymptotic normality of maximum likelihood estimators based on progressively Type-II censored samples.  相似文献   

8.
We consider the problem of component-wise estimation of the ordered scale parameters of two gamma populations, when it is known a priori which population corresponds to each ordered parameter. Under the scale-equivariant squared error loss function, smooth estimators that improve upon the best scale-equivariant estimators are derived. These smooth estimators are shown to be generalized Bayes with respect to a non-informative prior. Finally, using Monte Carlo simulations, the improved smooth estimators are compared with the best scale-equivariant estimators, their non-smooth improvements obtained in Vijayasree, Misra and Singh (1995), and the restricted maximum likelihood estimators. Acknowledgments: The authors are grateful to a referee for suggestions leading to an improved presentation.

9.
In this paper, we study a robust and efficient procedure for estimating the order of finite mixture models, based on minimizing a penalized density power divergence. For this task, we use the locally conic parametrization approach developed by Dacunha-Castelle and Gassiat (ESAIM Probab Stat 285–317, 1997a; Ann Stat 27:1178–1209, 1999), and verify that the resulting minimum penalized density power divergence estimator is consistent. Simulation results are provided for illustration.

10.
In this study, three different estimators of the proportion of a sensitive attribute in survey sampling are compared at equal protection of the respondents. The three estimators considered are due to Odumade and Singh (Commun Stat Theory Methods, 2009), Singh and Sedory (Sociol Methods Res, 2011), and a new estimator obtained by minimizing a chi-squared distance. A SAS macro is developed to compare these three estimators in a simulation study at equal protection of the respondents. A set of data from real face-to-face interviews, collected using two decks of cards, has been analyzed, and the results are discussed.

11.
In two recent papers by Balakrishnan et al. (J Qual Technol 39:35–47, 2007; Ann Inst Stat Math 61:251–274, 2009), the maximum likelihood estimators θ̂1 and θ̂2 of the parameters θ1 and θ2 were derived in the framework of exponential simple step-stress models under Type-II and Type-I censoring, respectively. Here, we prove that these estimators are stochastically monotone with respect to θ1 and θ2, respectively, as conjectured in those papers and then utilized to develop exact conditional inference for θ1 and θ2. To prove these results, we establish a multivariate stochastic ordering of a particular family of trinomial distributions under truncation, which is also of independent interest.

12.
We construct a nonparametric sequential test for the ruin probability, and a corresponding change-point test, in a risk model perturbed by diffusion. Some limiting properties are derived, which extend and improve on recent results of Conti (Stat Prob Lett 72:333–343, 2005) and Jahnke (Diploma thesis, University of Cologne, 2007). It is shown that the monitoring procedures can be designed so that the tests have a prescribed asymptotic false alarm rate (size) α and power 1. Results from a small simulation study are also presented.

13.
The Asymptotics of MM-Estimators for Linear Regression with Fixed Designs
MM-estimators achieve simultaneously high efficiency and a high breakdown point over contamination neighborhoods. Inference based on these estimators relies on their asymptotic properties, which have been studied for the case of random covariates. In this paper we show that, under relatively mild regularity conditions, MM-estimators for linear regression models are strongly consistent when the design is fixed. Moreover, their strong consistency allows us to show that these estimators are also asymptotically normal for non-random covariates. These results justify the use of a normal approximation to the finite-sample distribution of MM-estimators for linear regression with fixed explanatory variables. Additionally, these results have been used to extend the robust bootstrap (Salibian-Barrera and Zamar, Ann Stat 30:556–582, 2002) to the case of fixed designs (see Salibian-Barrera 2004, submitted). Research supported by an NSERC Research Grant (Individual).
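To give a feel for the estimator whose asymptotics the paper studies, here is a simplified sketch of only the final redescending M-step of an MM-fit, with a least-squares start and a MAD scale standing in for the high-breakdown S-estimator that a genuine MM procedure would use; the data and constants are illustrative.

```python
import numpy as np

def bisquare_weights(r, c=4.685):
    """Tukey bisquare weights; observations with |r| >= c get weight 0."""
    u = np.clip(np.abs(r) / c, 0.0, 1.0)
    return (1.0 - u ** 2) ** 2

def m_step(X, y, beta0, scale, n_iter=50):
    """Iteratively reweighted least squares around a fixed robust scale;
    this is the efficiency-tuning M-step of an MM-fit."""
    beta = np.asarray(beta0, dtype=float).copy()
    for _ in range(n_iter):
        r = (y - X @ beta) / scale
        w = bisquare_weights(r)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])  # fixed design
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 1.0, n)
y[:20] += 30.0                                                # 10% gross outliers
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]                # pulled away by outliers
scale = 1.4826 * np.median(np.abs(y - X @ beta_ls))           # MAD of LS residuals (stand-in scale)
beta_mm = m_step(X, y, beta_ls, scale)                        # close to the true (1, 2)
```

The redescending weights zero out the gross outliers, so the reweighted fit returns close to the true coefficients while ordinary least squares does not; this is the breakdown-robustness that motivates the asymptotic theory for fixed designs.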

14.
Sequential methods have been used in many applications, especially when fixed-sample procedures are not possible and/or when “early stopping” of sampling is beneficial. At the same time, the issue of how to make correct inferences in the presence of measurement errors has drawn considerable attention from statisticians. In this paper, we study sequential estimation of generalized linear models with measurement errors, in both adaptive and fixed design cases. The proposed sequential procedure is proved to be asymptotically consistent and efficient in the sense of Chow and Robbins (Ann Math Stat 36(2):457–462, 1965) when the measurement errors decay gradually as the number of sequentially selected design points increases. This assumption is useful in sequentially designed experiments and can also be fulfilled when replicate measurements are available. Numerical studies based on a Rasch model and a logistic regression model are conducted to evaluate the performance of the proposed procedure.

15.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007) on introducing noise into nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods for estimating production efficiency. Specifically, our approach first allows for statistical noise, as in stochastic frontier analysis (and in an even more flexible way), and second, it allows modelling multiple-input multiple-output technologies without imposing parametric assumptions on the production relationship, as in nonparametric methods such as Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating the marginal effects on the inefficiency level jointly with the marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main weaknesses of the original DEA/FDH estimators. The procedure performs well in various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (J Econom 137(1):1–27, 2007) method by making the resulting frontier smoother, monotonic and, if desired, concave.
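For readers unfamiliar with the deterministic building block that the stochastic versions start from, here is the simplest FDH estimator: single-input, single-output, input-oriented. The noise-robust machinery discussed above is not implemented here, and the data-generating frontier is an arbitrary illustration.

```python
import numpy as np

def fdh_input_efficiency(x, y):
    """Input-oriented FDH scores for single-input, single-output data:
    each unit is benchmarked against the smallest observed input among
    units producing at least as much output (free disposability)."""
    scores = np.empty(len(x))
    for i in range(len(x)):
        dominating = y >= y[i]                 # peers producing at least y_i
        scores[i] = np.min(x[dominating]) / x[i]
    return scores

rng = np.random.default_rng(6)
x = rng.uniform(1.0, 10.0, 100)                      # inputs
y = np.sqrt(x) * rng.uniform(0.6, 1.0, 100)          # outputs below a sqrt frontier
theta = fdh_input_efficiency(x, y)                   # scores in (0, 1]
```

Because every unit dominates itself, all scores lie in (0, 1], and units on the observed envelope score exactly 1; the sensitivity of this envelope to a single noisy observation is precisely what the stochastic extensions address.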

16.
We consider a semiparametric method to estimate logistic regression models in which both covariates and the outcome variable may be missing, and propose two new estimators. The first, based solely on the validation set, is an extension of the validation likelihood estimator of Breslow and Cain (Biometrika 75:11–20, 1988). The second is a joint conditional likelihood estimator based on the validation and non-validation data sets. Both estimators are semiparametric, as they require neither model assumptions about the missing-data mechanism nor specification of the conditional distribution of the missing covariates given the observed covariates. The asymptotic distribution theory is developed under the assumption that all covariates are categorical. The finite-sample properties of the proposed estimators are investigated through simulation studies, which show that the joint conditional likelihood estimator is the most efficient. A cable TV survey data set from Taiwan is used to illustrate the practical use of the proposed methodology.

17.
In the present investigation, a new forced quantitative randomized response (FQRR) model is proposed. Both situations, in which the value of the forced quantitative response is known or unknown, are studied. The forced qualitative randomized response models due to Liu and Chow (J Am Stat Assoc 71:72–73, 1976a; Biometrics 32:607–618, 1976b) and Stem and Steinhorst (J Am Stat Assoc 79:555–564, 1984) are shown to be special cases in which the forced quantitative randomized response is simply replaced by a forced “yes” response. The proposed FQRR model remains more efficient than the recent Bar-Lev et al. (Metrika 60:255–260, 2004) model, henceforth the BBB model. The relative efficiency of the proposed FQRR model with respect to existing competitors, such as the BBB model, is investigated under different situations. The present model should lead to several new developments in the field of randomized response sampling and encourage researchers to think further along these lines.

18.
Faraz and Parsian (Stat Pap 47:569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than other variable ratio sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI), and variable sample sizes and sampling intervals (VSSVSI). In this paper, the DWL T2 control chart for monitoring the process mean vector is designed economically. The cost model proposed by Costa and Rahim (J Appl Stat 28:875–885, 2001) is used and minimized through a genetic algorithm (GA) approach. The effects of the model parameters on the chart parameters and the resulting operating loss are then studied, and finally a comparison among all possible variable ratio sampling (VRS) schemes is made to choose the economically best option.

19.
In this paper, we implement the conditional difference asymmetry model (CDAS) for square tables with nominal categories proposed by Tomizawa et al. (J Appl Stat 31(3):271–277, 2004) using the non-standard log-linear model formulation approach. The implementation is carried out by refitting the model to the 3 × 3 table in Tomizawa et al. (J Appl Stat 31(3):271–277, 2004). We extend this approach to a larger 4 × 4 table of religious affiliation. We further calculate the measure of asymmetry along with its asymptotic standard error and confidence bounds. The procedure is implemented with SAS PROC GENMOD, but can also be implemented in SPSS following the discussion in Lawal (J Appl Stat 31(3):279–303, 2004; Qual Quant 38(3):259–289, 2004).

20.
In frontier analysis, most nonparametric approaches (DEA, FDH) are based on envelopment ideas, which assume that with probability one all observed units belong to the attainable set. In these “deterministic” frontier models, statistical inference is now possible by using bootstrap procedures. In the presence of noise, however, envelopment estimators can deteriorate dramatically, since they are very sensitive to extreme observations that may result only from noise; DEA/FDH techniques would then provide estimators with an error of the order of the standard deviation of the noise. This paper adapts some recent results on detecting change points (Hall P, Simar L (2002) J Am Stat Assoc 97:523–534) to improve the performance of the classical DEA/FDH estimators in the presence of noise. We show by simulated examples that the procedure works well, and better than the standard DEA/FDH estimators, when the noise is of moderate size in terms of signal-to-noise ratio. It turns out that the procedure is also robust to outliers. The paper can be seen as a first attempt to formalize stochastic DEA/FDH estimators.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号