Similar Documents
20 similar documents found.
1.
Amitava Saha, Metrika (2011) 73(2):139–149
Eichhorn and Hayre (J Stat Plan Inference 7:307–316, 1983) introduced the scrambled response technique to gather information on sensitive quantitative variables. Singh and Joarder (Metron 15:151–157, 1997), Gupta et al. (J Stat Plan Inference 100:239–247, 2002) and Bar-Lev et al. (Metrika 60:255–260, 2004) permitted respondents either to report their true values on the sensitive quantitative variable or a scrambled response, and developed the optional randomized response (ORR) technique based on simple random sampling with replacement (SRSWR). While developing the ORR procedure, these authors assumed that the probability of disclosing the true response rather than the randomized response (RR) is the same for all individuals in the population. This is not a very realistic assumption, as in practical survey situations the probability of reporting the true value generally varies from unit to unit. Moreover, if one generalizes the ORR method of these authors by relaxing the 'constant probability' assumption, the variance of an unbiased estimator of the population total or mean cannot be estimated, as it involves the unknown parameter, 'the probability of revealing the true response'. Here we propose a modified ORR procedure for stratified unequal probability sampling that relaxes the assumption of a constant probability of providing the true response. A numerical exercise demonstrates that our procedure produces a better estimator of the population total than the method suggested by the earlier authors.
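A minimal sketch of the scrambling idea underlying ORR, assuming a multiplicative scrambling variable with unit mean; the unit-varying probabilities p_i and the SRSWR design below are illustrative and do not reproduce Saha's stratified procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensitive variable (e.g. income) for N population units.
N = 10_000
x = rng.gamma(shape=2.0, scale=500.0, size=N)

# Multiplicative scrambling variable S with known mean E[S] = 1.
s = rng.uniform(0.5, 1.5, size=N)

# Optional RR: unit i reports the true value with probability p_i,
# otherwise the scrambled value x_i * s_i. Letting p_i vary by unit is
# exactly the relaxation discussed above.
p = rng.uniform(0.2, 0.8, size=N)
z = np.where(rng.random(N) < p, x, x * s)

# Since E[z_i] = p_i*x_i + (1 - p_i)*x_i*E[S] = x_i, the plain mean of
# the reports stays unbiased for the population mean; it is the
# *variance* estimation that involves the unknown p_i.
n = 500
sample = rng.choice(N, size=n, replace=True)   # SRSWR
print("estimated mean:", z[sample].mean().round(1))
print("true mean     :", x.mean().round(1))
```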

2.
In the present investigation, a new forced quantitative randomized response (FQRR) model is proposed. Both situations, where the values of the forced quantitative response are known and where they are unknown, are studied. The forced qualitative randomized response models due to Liu and Chow (J Am Stat Assoc 71:72–73, 1976a; Biometrics 32:607–618, 1976b) and Stem and Steinhorst (J Am Stat Assoc 79:555–564, 1984) emerge as the special case in which the value of the forced quantitative response is simply replaced by a forced "yes" response. The proposed FQRR model remains more efficient than the recent Bar-Lev et al. (Metrika 60:255–260, 2004) model, hereafter the BBB model. The relative efficiency of the proposed FQRR model with respect to existing competitors, such as the BBB model, is investigated under different situations. The present model should open several new lines of development in randomized response sampling.
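A schematic of a forced-response device for a quantitative variable, assuming the simplest known-forced-value case: with probability p the respondent reports the true value, otherwise a known forced value F. All names and parameter values are illustrative, not the FQRR model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# With probability p the device requests the true value x_i; otherwise
# the respondent must report the known forced value F.
mu_true, n = 50.0, 2000
x = rng.normal(mu_true, 10.0, size=n)
p, F = 0.7, 30.0

z = np.where(rng.random(n) < p, x, F)

# E[z] = p*mu + (1 - p)*F, so an unbiased estimator of mu is:
mu_hat = (z.mean() - (1 - p) * F) / p
print(round(mu_hat, 2))   # close to 50
```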

3.
Shanbhag (J Appl Probab 9:580–587, 1972; Theory Probab Appl 24:430–433, 1979) showed that the diagonality of the Bhattacharyya matrix characterizes the set of normal, Poisson, binomial, negative binomial, gamma and Meixner hypergeometric distributions. In this note, using the techniques of Shanbhag (1972, 1979) and Pommeret (J Multivar Anal 63:105–118, 1997), we evaluate the general form of the 5 × 5 Bhattacharyya matrix in the natural exponential family satisfying $f(x\mid\theta)=\frac{\exp\{x\,g(\theta)\}}{\beta(g(\theta))}\,\psi(x)$ with cubic variance function (NEF-CVF) of θ. Unlike in the quadratic variance function case, the matrix is not diagonal and has off-diagonal elements. In addition, we calculate the 5 × 5 Bhattacharyya matrix for the inverse Gaussian distribution and evaluate different Bhattacharyya bounds for the variance of estimators of the failure rate, coefficient of variation, mode and moment generating function of the inverse Gaussian distribution.
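For reference, the standard k-th order Bhattacharyya matrix and bound (textbook definitions; the paper's specific 5 × 5 NEF-CVF computation is not reproduced here):

```latex
% Bhattacharyya matrix of order k, with f^{(i)} = \partial^i f / \partial\theta^i:
\[
B_{ij}(\theta) \;=\;
\mathbb{E}_\theta\!\left[
  \frac{f^{(i)}(X\mid\theta)}{f(X\mid\theta)}\,
  \frac{f^{(j)}(X\mid\theta)}{f(X\mid\theta)}
\right], \qquad 1 \le i, j \le k .
\]
% For an unbiased estimator T of g(\theta), with J_i = g^{(i)}(\theta):
\[
\operatorname{Var}_\theta(T) \;\ge\; J^{\top} B(\theta)^{-1} J ,
\]
% which reduces to the Cramer--Rao bound when k = 1.
```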

4.
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27–61, 2000; Pap Psicól Revist Col Of Psicó 29:92–106, 2008). Commitment and identification are understood by the HSA at the individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665–683, 2006). The theoretical integrative model which underlies the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. An elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model which underlies the relation between commitment and identification, although each is operatively different.

5.
Bairamov et al. (Aust N Z J Stat 47:543–547, 2005) characterize the exponential distribution in terms of the regression of a function of a record value with its adjacent record values as covariates. We extend these results to the case of non-adjacent covariates. We also consider a more general setting involving monotone transformations. As special cases, we present characterizations involving weighted arithmetic, geometric, and harmonic means.
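The flavor of such regression-based characterizations can be seen in the adjacent-record case via the Rényi representation of upper records from an Exp(λ) sample (a standard fact, shown here for orientation only):

```latex
% Upper records R_0 < R_1 < ... of an iid Exp(lambda) sequence satisfy
\[
R_n \;\stackrel{d}{=}\; \sum_{i=0}^{n} E_i ,
\qquad E_i \ \text{iid } \mathrm{Exp}(\lambda),
\]
% so the record increments are independent of the past and
\[
\mathbb{E}\!\left[ R_{n+1} \mid R_n = r \right] \;=\; r + \frac{1}{\lambda}.
\]
% A linear regression of this form (slope 1, constant shift) is the kind
% of property that singles out the exponential law in these results.
```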

6.
Self-anchoring scales were first mentioned by Kilpatrick and Cantril (J Indiv Psychol 16:158–170, 1960) and Cantril (The Pattern of Human Concerns, 1965) as rating instruments in which the end anchors are defined by the respondent himself, based on his own assumptions, perceptions, goals and values. The uses of these scales are legion, and they have proven very useful in reducing measurement bias in cross-cultural research (Cantril, The Pattern of Human Concerns, 1965; Bernheim et al. J Happiness Stud 7:227–250, 2006). The first part of the current study investigates whether context effects can be lessened or eliminated by using self-anchoring scales. For this purpose, an experiment similar to those by Couper et al. (Public Opin Q 68:255–266, 2004; Public Opin Q 71:623–634, 2007), in which they manipulated images that figured in a web survey, was conducted. The hypothesis that self-anchoring scales can reduce contextual bias is not supported by our data. The second part of the study investigates if and how self-anchoring scales affect drop-out during the filling out of questionnaires. It is found that, compared to a regular rating scale, a larger proportion of respondents drop out. Moreover, subjective preferences for one scale or the other do not seem to differ.

7.
In the present investigation, we propose a new method to calibrate the estimator of a general parameter of interest in survey sampling. We demonstrate that the linear regression estimator due to Hansen et al. (Sample Survey Methods and Theory. Wiley, NY, 1953) is a special case of it. We reconfirm that the sum of the calibrated weights has to be set equal to the sum of the design weights within a given sample, as shown in Singh (Advanced sampling theory with applications: How Michael 'selected' Amy, vols. 1 and 2. Kluwer, The Netherlands, pp 1–1247, 2003; Proceedings of the American Statistical Association, Survey Method Section [CD-ROM], Toronto, Canada: American Statistical Association, pp 4382–4389, 2004; Metrika:1–18, 2006a; presented at INTERFACE 2006, Pasadena, CA, USA, 2006b) and Stearns and Singh (presented at the Joint Statistical Meetings, MN, USA (available on the CD), 2005; Comput Stat Data Anal 52:4253–4271, 2008). Thus Sir R. A. Fisher's brilliant idea of keeping the sum of observed frequencies equal to that of expected frequencies leads to an "honest balance" when adjusting design weights in survey sampling. The major benefit of the proposed new estimator is that it always works, unlike the pseudo empirical likelihood estimators listed in Owen (Empirical Likelihood. Chapman & Hall, London, 2001), Chen and Sitter (Stat Sin 9:385–406, 1999) and Wu (Surv Methodol 31(2):239–243, 2005). The main endeavor of this paper is to bring a change to the existing calibration technology, which is based only on positive distance functions, with a displacement function that has the flexibility of taking positive, negative, or zero values. Finally, the proposed technology is compared with its competitors under several kinds of linear and non-linear non-parametric models using an extensive simulation study. A couple of open questions are raised.
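A minimal sketch of chi-square distance calibration, assuming an SRS design and one auxiliary variable with known total; it enforces the constraint stressed above — calibrated weights summing to the design weights — and then reproduces a regression-type estimator. All data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population and simple random sample.
N = 5000
x = rng.lognormal(3.0, 0.4, size=N)
y = 2.0 * x + rng.normal(0.0, 5.0, size=N)
X_total = x.sum()                          # known auxiliary total

n = 200
s = rng.choice(N, size=n, replace=False)
d = np.full(n, N / n)                      # design weights

# Chi-square distance calibration: minimize sum((w - d)^2 / d)
# subject to  sum(w) = sum(d)  and  sum(w * x) = X_total.
Z = np.column_stack([np.ones(n), x[s]])    # constraint variables
t = np.array([d.sum(), X_total])           # calibration totals
M = Z.T @ (d[:, None] * Z)                 # sum_i d_i z_i z_i'
lam = np.linalg.solve(M, t - Z.T @ d)
w = d * (1.0 + Z @ lam)                    # closed-form calibrated weights

print("sum of design weights  :", d.sum())
print("sum of calibrated wts  :", w.sum())       # equal, as required
print("calibration estimate   :", (w * y[s]).sum())
print("true total of y        :", y.sum())
```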

8.
We consider the ability to detect interaction structure from data in a regression context. We derive an asymptotic power function for a likelihood-based test for interaction in a regression model, with a possibly misspecified alternative distribution. This allows a general investigation of which types of interactions are poorly or well detected from data. Principally, we contrast pairwise-interaction models with 'diffuse interaction models' as introduced in Gustafson et al. (Stat Med 24:2089–2104, 2005).
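A small sketch of a likelihood-ratio test for a pairwise interaction under Gaussian errors — the generic textbook construction, not the paper's power analysis; data and coefficients are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated data with a pairwise interaction (coefficients hypothetical).
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(0, 1, n)

def rss(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X0 = np.column_stack([np.ones(n), x1, x2])   # main effects only
X1 = np.column_stack([X0, x1 * x2])          # + pairwise interaction

# Under Gaussian errors the LR statistic is n*log(RSS0/RSS1),
# asymptotically chi-square with 1 df under H0: no interaction.
lr = n * np.log(rss(X0, y) / rss(X1, y))
print("LR =", round(lr, 2), " p =", stats.chi2.sf(lr, df=1))
```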

9.
We study the problem of predicting future k-records based on k-record data for a large class of distributions, which includes several well-known distributions such as the exponential, Weibull (one-parameter), Pareto and Burr type XII, among others. Both Bayesian and non-Bayesian approaches are investigated; we pay particular attention to Bayesian predictors under balanced-type loss functions as introduced by Jafari Jozani et al. (Stat Probab Lett 76:773–780, 2006a). The results are presented under the balanced versions of some well-known loss functions, namely squared error loss, Varian's linear-exponential loss, and absolute error (L1) loss. Some previous results in the literature, such as Ahmadi et al. (Commun Stat Theory Methods 34:795–805, 2005) and Raqab et al. (Statistics 41:105–108, 2007), are obtained as special cases of our results. Partial support from the Ordered and Spatial Data Center of Excellence of Ferdowsi University of Mashhad is acknowledged by J. Ahmadi. M. J. Jozani's research was supported partially by a grant from the Statistical Research and Training Center. É. Marchand's research was supported by NSERC of Canada. A. Parsian's research was supported by a grant from the Research Council of the University of Tehran.
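For the balanced squared error case, the Bayes solution has a well-known closed form (a standard result for balanced-type losses; the other balanced losses mentioned above have analogous but different solutions):

```latex
% Balanced squared error loss with target estimator delta_0 and weight omega:
\[
L_\omega(\theta, \delta) \;=\;
\omega\,\bigl(\delta - \delta_0(x)\bigr)^2
\;+\; (1 - \omega)\,\bigl(\delta - \theta\bigr)^2 ,
\qquad 0 \le \omega < 1 ,
\]
% whose Bayes estimator mixes delta_0 with the posterior mean:
\[
\delta^{\pi}_{\omega}(x) \;=\; \omega\,\delta_0(x) + (1 - \omega)\,\mathbb{E}[\theta \mid x].
\]
```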

10.
Faraz and Parsian (Stat Pap 47:569–593, 2006) have shown that the double warning lines (DWL) scheme detects process shifts more quickly than other variable ratio sampling schemes such as variable sample sizes (VSS), variable sampling intervals (VSI), and variable sample sizes and sampling intervals (VSSVSI). In this paper, the DWL T² control chart for monitoring the process mean vector is economically designed. The cost model proposed by Costa and Rahim (J Appl Stat 28:875–885, 2001) is used and is minimized through a genetic algorithm (GA) approach. The effects of the model parameters on the chart parameters and the resulting operating loss are then studied, and finally a comparison among all possible variable ratio sampling (VRS) schemes is made to choose the economically best option.
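A sketch of the underlying Hotelling T² statistic with a two-warning-line decision rule, assuming known in-control parameters; the placement and interpretation of the warning lines are design parameters of the DWL scheme and are chosen ad hoc here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical in-control parameters (p variables, subgroup size n).
p, n = 2, 5
mu0 = np.zeros(p)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def t2(subgroup):
    """Hotelling T^2 of a subgroup mean against known (mu0, Sigma)."""
    diff = subgroup.mean(axis=0) - mu0
    return n * diff @ Sigma_inv @ diff

# With known parameters, T^2 ~ chi2(p) in control; UCL at alpha = 0.005.
ucl = stats.chi2.ppf(0.995, df=p)
# Two illustrative warning lines for a DWL-type adaptive rule.
w1, w2 = stats.chi2.ppf(0.80, df=p), stats.chi2.ppf(0.95, df=p)

sub = rng.multivariate_normal(mu0 + 0.5, Sigma, size=n)  # shifted process
stat = t2(sub)
zone = ("signal" if stat > ucl else
        "tighten sampling (above WL2)" if stat > w2 else
        "warning (above WL1)" if stat > w1 else "in control")
print(round(stat, 2), zone)
```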

11.
In this paper, we study the asymptotic properties of simulation-extrapolation (SIMEX) based variance estimation, proposed by Wang et al. (J R Stat Soc Series B 71:425–445, 2009). We first investigate the asymptotic normality of the parameter estimator for a general parametric variance function, and of the local linear estimator for a nonparametric variance function, when permutation SIMEX (PSIMEX) is used. The asymptotically optimal bandwidth selection with respect to approximate mean integrated squared error (AMISE) for the nonparametric estimator is also studied. We finally discuss constructing confidence intervals/bands for the parameter/function of interest. Rather than applying the asymptotic results so that a normal approximation can be used, we recommend a nonparametric Monte Carlo algorithm that avoids estimating the asymptotic variance of the estimator. Simulation studies are carried out for illustration.
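A toy illustration of the SIMEX principle itself — add extra measurement error at increasing levels λ and extrapolate back to λ = −1 — applied to estimating Var(X) from error-contaminated observations. This is not the PSIMEX estimator of Wang et al.; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Observe only W = X + U, with known error variance sigma_u^2.
n, sigma_u = 5000, 0.8
x = rng.normal(0.0, 1.5, size=n)       # Var(X) = 2.25
w = x + rng.normal(0.0, sigma_u, n)

lambdas = np.array([0.5, 1.0, 1.5, 2.0])
naive = []
for lam in lambdas:
    # Simulation step: add extra noise of variance lam * sigma_u^2,
    # averaging the naive estimate over B remeasured data sets.
    B = 50
    est = [np.var(w + np.sqrt(lam) * sigma_u * rng.normal(size=n))
           for _ in range(B)]
    naive.append(np.mean(est))

# The naive variance at level lam is Var(X) + (1 + lam)*sigma_u^2:
# linear in lam, so extrapolate the fitted line to lam = -1.
slope, intercept = np.polyfit(lambdas, naive, deg=1)
print("SIMEX estimate :", round(intercept - slope, 3))  # value at lam = -1
print("naive estimate :", round(np.var(w), 3))
print("true Var(X)    :", round(x.var(), 3))
```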

12.
Let X_1, X_2, ..., X_n be a random sample from a normal distribution with unknown mean μ and known variance σ². In many practical situations, μ is known a priori to be restricted to a bounded interval, say [−m, m] for some m > 0. The sample mean X̄ then becomes an inadmissible estimator of μ. It is also not minimax with respect to the squared error loss function. Minimax and other estimators for this problem have been studied by Casella and Strawderman (Ann Stat 9:870–878, 1981), Bickel (Ann Stat 9:1301–1309, 1981) and Gatsonis et al. (Stat Prob Lett 6:21–30, 1987), among others. In this paper, we obtain some new estimators for μ. The case when the variance σ² is unknown is also studied, and various estimators for μ are proposed. The risk performance of all estimators is numerically compared for both cases, σ² known and unknown.
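A Monte Carlo sketch of the classical comparison in this problem, assuming σ = 1 and a single observation: the projected MLE versus the two-point-prior Bayes estimator of Casella and Strawderman. The paper's new estimators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate a bounded normal mean mu in [-m, m] from X ~ N(mu, 1).
m = 1.0

def projected_mle(x):
    return np.clip(x, -m, m)

def two_point_bayes(x):
    # Bayes estimator under the two-point prior on {-m, m}; Casella and
    # Strawderman (1981) show it is minimax for m below roughly 1.06.
    return m * np.tanh(m * x)

mus = np.linspace(-m, m, 21)
reps = 50_000
for est in (projected_mle, two_point_bayes):
    risks = [np.mean((est(rng.normal(mu, 1.0, reps)) - mu) ** 2)
             for mu in mus]
    print(f"{est.__name__:>15}  max risk over grid: {max(risks):.4f}")
```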

13.
Stochastic FDH/DEA estimators for frontier analysis
In this paper we extend the work of Simar (J Prod Anal 28:183–201, 2007), introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, as in stochastic frontier analysis (and even in a more flexible way), and second, it allows modelling multiple-input multiple-output technologies without imposing parametric assumptions on the production relationship, as is done in nonparametric methods like Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Kumbhakar et al. (J Econom 137(1):1–27, 2007) and Park et al. (J Econom 146:185–198, 2008). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves the DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main weaknesses of the original DEA/FDH estimators. The procedure shows good performance in various simulated cases and is also illustrated on some real data sets. Even in the single-output case, our simulated examples show that our stochastic DEA/FDH improves on the Kumbhakar et al. (2007) method by making the resulting frontier smoother, monotonic and, if we wish, concave.
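For orientation, a deterministic input-oriented FDH score under free disposability — the noise-free baseline that the paper robustifies. Data are simulated and names illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical data: n units, 2 inputs, 1 output.
n = 100
X = rng.uniform(1.0, 10.0, size=(n, 2))                  # inputs
y = (X.prod(axis=1) ** 0.3) * rng.uniform(0.5, 1.0, n)   # outputs

def fdh_input_efficiency(o, X, y):
    """Input-oriented FDH score of unit o under free disposability."""
    dominates = y >= y[o]              # units producing at least y_o
    # Smallest proportional input contraction that keeps unit o
    # dominated by some observed unit:
    ratios = (X[dominates] / X[o]).max(axis=1)
    return ratios.min()                # <= 1; unit o dominates itself

scores = np.array([fdh_input_efficiency(o, X, y) for o in range(n)])
print("efficient units:", int((scores >= 1.0).sum()))
print("mean FDH score :", scores.mean().round(3))
```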

14.
This paper compares the macroeconomic performance of three European socialist economies (Hungary, Poland, Yugoslavia) with that of developing and developed countries during the 1970s and 1980s. Using panel data for 89 countries, we measure macroeconomic performance with two panel data production frontier models: the WITHIN model proposed by Cornwell et al. (J Econom 46:185–200, 1990), and the firm-effects model developed by Battese and Coelli (J Prod Anal 3:153–169, 1992). We conclude in favor of the underperformance of socialist countries relative to developed countries, and in most cases also relative to developing countries, which may be explained by the features of the socialist economic system.

15.
The aim of the paper is to determine when the periodic block bootstrap, a procedure introduced by Chan et al. (Technometrics 46(2):215–224, 2004), can be applied to arrays of random variables. Formal consistency is obtained under α-mixing or m-dependence conditions together with the assumption that the length of the period tends to infinity. On the other hand, if the period is constant, inconsistency is shown. The performance of the periodic block bootstrap is also compared in simulations with the moving block bootstrap. It is suggested that for long-period data the first method is more effective and much more stable with respect to the block length.
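A sketch contrasting the two resampling schemes on a seasonal series, assuming the period d is known. The phase-aligned block starts are the defining feature of the periodic scheme; implementation details here are illustrative, in the spirit of Chan et al.:

```python
import numpy as np

rng = np.random.default_rng(13)

# Periodic series: deterministic period-d cycle plus noise.
d, n_cycles = 12, 40
n = d * n_cycles
t = np.arange(n)
x = np.sin(2 * np.pi * t / d) + rng.normal(0, 0.3, n)

def moving_block(x, b, rng):
    """Moving block bootstrap: blocks may start anywhere."""
    starts = rng.integers(0, len(x) - b + 1, size=len(x) // b)
    return np.concatenate([x[s:s + b] for s in starts])

def periodic_block(x, b, d, rng):
    """Periodic block bootstrap: block length is a multiple of the
    period and block starts are phase-aligned (multiples of d),
    preserving the seasonal pattern."""
    assert b % d == 0
    starts = d * rng.integers(0, (len(x) - b) // d + 1, size=len(x) // b)
    return np.concatenate([x[s:s + b] for s in starts])

def seasonal_profile(z):
    return z[:len(z) // d * d].reshape(-1, d).mean(axis=0)

mbb = moving_block(x, b=24, rng=rng)
pbb = periodic_block(x, b=24, d=d, rng=rng)
print("original phase profile :", seasonal_profile(x).round(2))
print("PBB keeps the profile  :", seasonal_profile(pbb).round(2))
print("MBB scrambles the phase:", seasonal_profile(mbb).round(2))
```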

16.
This paper analyses the efficiency drivers of a representative sample of Spanish football clubs by means of the two-stage data envelopment analysis (DEA) procedure proposed by Simar and Wilson (J Econom 136:31–64, 2007). In the first stage, the technical efficiency of the football clubs is estimated using a bootstrapped DEA model in order to establish which of them are the most efficient; the ranking is based on total productivity in the period 1996–2004. In the second stage, the Simar and Wilson procedure is used to relate the bootstrapped DEA scores to potential efficiency drivers through a truncated regression. Policy implications of the main findings are also considered.
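A first-stage sketch only: an input-oriented, constant-returns DEA score computed by linear programming. The bootstrap and the second-stage truncated regression are omitted; data and names are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(21)

# Hypothetical first-stage data: n clubs, 2 inputs, 1 output.
n = 30
X = rng.uniform(1, 10, size=(n, 2))         # inputs  (rows: units)
Y = X.sum(axis=1, keepdims=True) ** 0.8     # outputs

def dea_input_ccr(o, X, Y):
    """Input-oriented CRS DEA score of unit o:
       min theta  s.t.  Y'lam >= Y_o,  X'lam <= theta * X_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    # Outputs at least Y_o:  -sum_j lam_j * Y_jr <= -Y_or
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    # Inputs within theta * X_o:  sum_j lam_j * X_ji - theta*X_oi <= 0
    A_in = np.hstack([-X[o][:, None], X.T])
    b_in = np.zeros(m)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]),
                  b_ub=np.r_[b_out, b_in],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = np.array([dea_input_ccr(o, X, Y) for o in range(n)])
print("efficient clubs:", int(np.isclose(scores, 1.0).sum()))
```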

17.
This paper generalizes Kunert and Martin's (Ann Stat 28:1728–1742, 2000) method for finding optimal designs under a fixed interference model to finding optimal designs under a mixed interference model. The results are based on the properties of information matrices in fixed and mixed models given in Markiewicz (J Stat Plan Inference 59:127–137, 1997). The method is applied to find a design which is optimal for any given variances of the random neighbor effects. Research partially supported by KBN Grant Number 5 P03A 041 21.

18.
In the present investigation, a general set-up for inference from survey data is considered that covers the estimation of the variance of estimators of totals and distribution functions, using known first- and second-order moments of auxiliary information at the estimation stage. The traditional linear regression estimator of the population total due to Hansen et al. (Sample Survey Methods and Theory, vols. 1 and 2, Wiley, New York, 1953) is shown to be unique in its class of estimators and, following Singh (Advanced sampling theory with applications: How Michael 'selected' Amy, vols. 1 and 2, Kluwer, The Netherlands, pp 1–1247, 2003), celebrated its golden jubilee in 2003 for its outstanding performance in the literature. The paper is designed to repair the methodology of Rao (J Off Stat 10(2):153–165, 1994) and hence that of Singh (Ann Inst Stat Math 53(2):404–417, 2001). Although the theoretical results are clear-cut and a simulation study is strictly unnecessary, a small-scale simulation study has been designed to show the performance of the proposed estimators relative to the existing estimators in the literature.

19.
In this article we establish characterizations of the multivariate lack of memory property in terms of the hazard gradient (whenever it exists), the survival function and the cumulative hazard function. Based on one of these characterizations we establish a method of generating bivariate lifetime distributions possessing the bivariate lack of memory property (BLMP) with specified marginals; the marginal distributions have to satisfy certain conditions, which we state. The method generates absolutely continuous bivariate distributions as well as those containing a singular component. The bivariate exponential distributions due to Proschan and Sullo (Reliability and Biometry, pp 423–440, 1974), Freund (J Am Stat Assoc 56:971–977, 1961), Block and Basu (J Am Stat Assoc 69:1031–1037, 1974) and Marshall and Olkin (J Am Stat Assoc 62:30–44, 1967) are generated as particular cases, among others, using the proposed method. Some other distributions generated by the method may be of practical importance. Shock models leading to bivariate distributions possessing the BLMP are given. Some closure properties of a class of univariate failure rate functions that can generate distributions possessing the BLMP, and of the class of bivariate survival functions having the BLMP, are studied.
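The defining functional equation of the BLMP, stated here for reference in terms of the joint survival function (a standard formulation; the Marshall–Olkin survival function, for example, satisfies it):

```latex
\[
\bar F(x + t,\; y + t) \;=\; \bar F(x, y)\,\bar F(t, t)
\qquad \text{for all } x, y, t \ge 0 .
\]
% e.g. the Marshall--Olkin survival function
% \bar F(x, y) = \exp\{-\lambda_1 x - \lambda_2 y - \lambda_{12}\max(x, y)\}
% satisfies this equation, since max(x + t, y + t) = max(x, y) + t.
```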

20.
The question of compositional effects (that is, the effect of collective properties of a pupil body on its individual members), or Aggregated Group-Level Effects (AGLEs) as the author prefers to call them, has been the subject of considerable controversy. Some authors, e.g. Rutter et al. (Fifteen Thousand Hours: Secondary Schools and Their Effects on Children, London: Open Books), Willms (Oxford Review of Education 11(1):33–41; American Sociological Review 51:224–241, 1986) and Bondi (British Educational Research Journal 17(3):203–218), have claimed to find such effects, while Mortimore et al. (School Matters: The Junior Years, Wells: Open Books) and Thomas and Mortimore (Oxford Review of Education 16(2):137–158) did not. Others, for example Hauser (1970), have implied that many apparent AGLEs may be spurious, while Gray et al. (Review of Research in Education 8:158–193) have suggested that, at least in certain circumstances, such apparent effects may arise from inadequate allowance for pre-existing differences. A possible statistical mechanism for this is outlined in the work of Burstein (in R. Dreeben & J. A. Thomas (eds.), The Analysis of Educational Productivity, Volume 1: Issues in Microanalysis, Cambridge, MA: Ballinger, pp 119–190) on the effect of aggregating the data when a variable is omitted from the model. This paper suggests another way in which spurious AGLEs can arise. It shows mathematically that even if there are no omitted variables, measurement error in an explanatory variable can give rise to apparent, but spurious, AGLEs when the data are analysed using a multilevel modelling procedure. Using simulation, it investigates the likely practical consequences and shows that statistically significant spurious effects occur systematically under fairly standard conditions.
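A simulation sketch of the mechanism described above, assuming a simple two-level setup: the outcome depends only on the individual-level variable, yet measurement error makes the aggregated group mean appear to matter. Plain OLS is used for transparency; a multilevel fit exhibits the same pattern:

```python
import numpy as np

rng = np.random.default_rng(17)

# True model: y depends only on individual x; there is NO group effect.
# We observe w = x + measurement error, and (as in two-level analyses)
# also include the group mean of w as a regressor.
n_groups, n_per = 200, 25
n = n_groups * n_per
g = np.repeat(np.arange(n_groups), n_per)

x = rng.normal(size=n) + rng.normal(size=n_groups)[g]  # x clusters by group
y = 1.0 * x + rng.normal(0, 1, n)                      # no group-level term
w = x + rng.normal(0, 0.8, n)                          # measurement error

w_bar = np.bincount(g, weights=w)[g] / n_per           # group mean of w

# OLS of y on [1, w, group mean of w]: the group mean averages away the
# measurement error, so it picks up part of the true x effect.
Z = np.column_stack([np.ones(n), w, w_bar])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("coef on individual w :", round(beta[1], 3))
print("coef on group mean w :", round(beta[2], 3),
      "(spurious 'compositional effect' induced by measurement error)")
```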
