Similar Articles
20 similar articles found (search time: 608 ms).
1.
M. Z. Khan, Metrika (1976) 23(1): 211–219
The problem of optimally allocating a sample among k strata at the second phase of a two-phase sampling procedure, when sampling for a proportion, has been discussed by Newbold [1971] and then by the author [Khan, 1972]. This paper deals with the optimum allocation of the sample among k strata at the second phase of a two-phase sampling procedure when the sampling is for m attributes. The problem is to estimate the proportion of each attribute in the population. Here the (m−1)-dimensional Dirichlet distribution is taken as the prior distribution.
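The core Bayesian step behind such allocation problems is the Dirichlet–multinomial update. Below is a minimal sketch (not Khan's allocation rule itself) of estimating m attribute proportions under a Dirichlet prior; the prior vector and counts are hypothetical.

```python
import numpy as np

# Hypothetical setup: m = 3 attribute classes, symmetric Dirichlet prior.
alpha_prior = np.array([1.0, 1.0, 1.0])
counts = np.array([42, 31, 27])            # second-phase sample counts per class

# The Dirichlet is conjugate to the multinomial: posterior is Dirichlet(alpha + counts).
alpha_post = alpha_prior + counts
post_mean = alpha_post / alpha_post.sum()  # Bayes estimate of the proportions
post_var = post_mean * (1 - post_mean) / (alpha_post.sum() + 1)

print("posterior mean proportions:", post_mean)
print("posterior variances:      ", post_var)
```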

2.
Summary  When elements of a finite population are sampled with varying probability of selection at each draw, Horvitz and Thompson [1952] formulated certain classes of linear estimators to bear on the problem of providing a sample appraisal of the population total. Horvitz and Thompson's T1 class is an ordered one, which was examined by the present author [1967b]. For some sampling procedures a best estimator exists for the T1 class. Subsequently the present author [1967c] applied Murthy's technique [Murthy, 1967] of unordering an ordered estimator and derived a more efficient estimator. The present paper is concerned with applying Murthy's technique to the T1 class itself, and examining the unordered T1 class. Curiously enough, it is noted that the condition of unbiasedness is sufficient to completely specify the unordered T1 class for the sampling procedure considered here. Research sponsored by Marathwada University, Aurangabad, India, under Grant No. Research-12-68-69/3314-16.
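For context, the estimators in the T1 class build on the Horvitz–Thompson idea of inflating each sampled value by its inclusion probability. A minimal sketch with hypothetical data; it shows the basic estimator, not the ordered/unordered class construction.

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of a population total:
    each sampled value is inflated by its inclusion probability."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return np.sum(y / pi)

# Hypothetical sample: 4 units drawn with unequal inclusion probabilities.
y_sample = [12.0, 7.5, 30.2, 4.1]
pi_sample = [0.40, 0.15, 0.60, 0.10]
print(horvitz_thompson_total(y_sample, pi_sample))
```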

3.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub-sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]-type statistics, our proposed tests are based on the corresponding functions of sub-sample implementations of the well-known maximal recursive-estimates and re-scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite-sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
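As a rough illustration of the ratio construction these tests share, the sketch below computes a maximized ratio of post-break to pre-break sub-sample KPSS-type statistics over candidate break points. It is a caricature of the Kim-style statistic, not the authors' maximal recursive-estimates or re-scaled range variants; the trimming fraction and the data are hypothetical.

```python
import numpy as np

def kpss_stat(u):
    """KPSS-type level-stationarity statistic on a (demeaned) segment."""
    e = u - u.mean()
    s = np.cumsum(e)
    return np.sum(s**2) / (len(u)**2 * np.mean(e**2))

def ratio_persistence_stat(x, trim=0.2):
    """Max over candidate break points of the ratio of post-break to
    pre-break sub-sample KPSS-type statistics; large values suggest
    a change from I(0) to I(1)."""
    T = len(x)
    taus = range(int(trim * T), int((1 - trim) * T))
    return max(kpss_stat(x[t:]) / kpss_stat(x[:t]) for t in taus)

rng = np.random.default_rng(1)
i0 = rng.normal(size=200)                        # I(0) throughout
i0_to_i1 = np.r_[rng.normal(size=100),           # I(0), then I(1)
                 np.cumsum(rng.normal(size=100))]
print(ratio_persistence_stat(i0), ratio_persistence_stat(i0_to_i1))
```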

4.
N. D. Shukla, Metrika (1976) 23(1): 127–133
In sample survey methods the use of product estimators was suggested by Murthy [1964] and Srivastava [1966]; they were found to serve a good purpose provided the two variables, viz. the main variable under study and the auxiliary variable, have a very high negative correlation between them. The product estimators suggested by them are biased. In the present paper the author obtains an unbiased product estimator (to the first degree of approximation) with the help of the technique developed by Quenouille [1956] and establishes that this new estimator is better than the other product estimators in the mean square error sense.
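The Quenouille technique referred to is the jackknife: combine the full-sample estimate with leave-one-out estimates so the first-order bias term cancels. A sketch under that reading, with hypothetical data; the paper's exact estimator may differ.

```python
import numpy as np

def product_estimator(y, x, X_bar):
    """Biased product estimator of the mean of y: ybar * xbar / Xbar."""
    return np.mean(y) * np.mean(x) / X_bar

def jackknife_product_estimator(y, x, X_bar):
    """Quenouille correction: n*theta - (n-1)*mean(leave-one-out estimates)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    theta = product_estimator(y, x, X_bar)
    loo = np.array([product_estimator(np.delete(y, i), np.delete(x, i), X_bar)
                    for i in range(n)])
    return n * theta - (n - 1) * loo.mean()

rng = np.random.default_rng(12)
X_bar = 50.0
x = rng.normal(X_bar, 5, 40)
y = 100 - 1.5 * x + rng.normal(0, 2, 40)   # strongly negatively correlated with x
print(product_estimator(y, x, X_bar), jackknife_product_estimator(y, x, X_bar))
```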

5.
In analysing big data for finite population inference, it is critical to adjust for the selection bias in the big data. In this paper, we propose two methods of reducing the selection bias associated with the big data sample. The first method uses a version of inverse sampling by incorporating auxiliary information from external sources, and the second one borrows the idea of data integration by combining the big data sample with an independent probability sample. Two simulation studies show that the proposed methods are unbiased and have better coverage rates than their alternatives. In addition, the proposed methods are easy to implement in practice.
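One simple flavour of the auxiliary-information idea is post-stratification: reweight the big data sample so its cell shares match external population counts. The sketch below shows only that flavour, not the paper's inverse-sampling or data-integration estimators; the cells and counts are hypothetical.

```python
import numpy as np

def poststratified_mean(y, cell, N_cell):
    """Reweight a non-probability (big data) sample so each cell's weight
    share matches known/estimated population cell counts N_cell."""
    y, cell = np.asarray(y, float), np.asarray(cell)
    N = sum(N_cell.values())
    return sum(N_cell[c] * y[cell == c].mean() for c in N_cell) / N

# Hypothetical big data sample that over-represents cell 'a'.
rng = np.random.default_rng(2)
cells = np.array(['a'] * 800 + ['b'] * 200)
y = np.r_[rng.normal(1.0, 1, 800), rng.normal(3.0, 1, 200)]
N_cell = {'a': 5_000, 'b': 5_000}   # auxiliary counts from an external source
print("naive mean:", y.mean(), " adjusted:", poststratified_mean(y, cells, N_cell))
```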

6.
In a recent paper Ghosh and Sarkar [5] have developed a model of input-output systems as spatial configurations. Roy [15] has proposed a more efficient solution method, but computation time still increases factorially, which rules out its use for large matrices. This note shows that the problem they have formulated belongs to a class of discrete programming problems known as placement or assignment problems. Several natural extensions are briefly discussed. More importantly, an efficient algorithm for the quadratic assignment problem is used to compute the optimal ordering of five comparable input-output matrices (US, Norway, Japan, Italy, India). These preliminary empirical results do show rather stable assignment patterns for the industries, and certain clusters of industries are shown to emerge as hypothesized by Ghosh and Sarkar. The author wishes to thank an anonymous referee for some important clarifying remarks on a preliminary version of this paper.
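To make the formulation concrete, the quadratic assignment objective and a naive pairwise-swap local search look as follows. This is illustrative only, not the efficient algorithm used in the note; F and D are hypothetical flow and distance matrices.

```python
import numpy as np

def qap_cost(perm, F, D):
    """Quadratic assignment objective: sum_ij F[i,j] * D[perm[i], perm[j]]."""
    return float(np.sum(F * D[np.ix_(perm, perm)]))

def pairwise_swap_search(F, D, rng):
    """Greedy local search over 2-swaps of the assignment."""
    n = len(F)
    perm = rng.permutation(n)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                cand = perm.copy()
                cand[i], cand[j] = cand[j], cand[i]
                if qap_cost(cand, F, D) < qap_cost(perm, F, D):
                    perm, improved = cand, True
    return perm, qap_cost(perm, F, D)

rng = np.random.default_rng(3)
F = rng.random((6, 6))   # e.g. inter-industry flows
D = rng.random((6, 6))   # e.g. distances between positions in the ordering
print(pairwise_swap_search(F, D, rng))
```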

7.
Successive sampling is a well-known technique that can be used in longitudinal surveys to estimate population parameters and measurements of difference or change in a study variable. The paper discusses the estimation of quantiles for the current occasion based on sampling on two successive occasions, using p auxiliary variables obtained on the previous occasion. A multivariate ratio estimator from the matched portion is used to provide the optimum estimate of a quantile by combining the component estimates with derived optimum weights. Its properties are studied under a large-sample approximation and expressions for the variances are established. The behavior of these asymptotic variances is analyzed on the basis of data from natural populations. A simulation study is also used to measure the precision of the proposed estimator.
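The optimum-weighting idea can be shown in its simplest form: combine a matched-portion estimate and an unmatched-portion estimate of the same quantile with inverse-variance weights, the variance-minimizing choice when the components are uncorrelated. A sketch with hypothetical numbers; the paper's multivariate ratio estimator is more elaborate.

```python
def optimum_combination(est_matched, var_matched, est_unmatched, var_unmatched):
    """Combine two (approximately unbiased) estimates of the same quantile
    with inverse-variance weights; the combined variance is the harmonic
    composition of the component variances."""
    w = (1 / var_matched) / (1 / var_matched + 1 / var_unmatched)
    est = w * est_matched + (1 - w) * est_unmatched
    var = 1 / (1 / var_matched + 1 / var_unmatched)
    return est, var

print(optimum_combination(10.2, 0.5, 11.0, 1.5))  # hypothetical estimates/variances
```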

8.
Counting the number of units is not always practical during the sampling of particulate materials: it is often much easier to sample a fixed volume or fixed mass of particles. Hence, a class of sampling designs is proposed which leads to samples that have approximately a constant mass or a constant volume. For these sampling designs, estimators were derived which are a ratio of arbitrary sample totals. A Taylor expansion was used to obtain a first-order approximation for the expected value and variance in the limit of a large batch-to-sample size ratio. Furthermore, a π-estimator for a ratio of batch totals was found by deriving expressions for the first- and second-order inclusion probabilities. Practical application of the π-estimator is limited because it requires inaccessible batch information. However, when the denominator of the estimated batch ratio is the batch size, the π-estimator becomes equal to a sample total divided by the sample size in the limit of a large sample-to-particle size ratio. As a consequence, the obtained sample ratio becomes an unbiased estimator for the corresponding batch ratio. Retaining unbiasedness, the Horvitz–Thompson estimator for the variance, which also contains inaccessible batch information, is replaced by an estimator containing sample information only. Practical application of this estimator is illustrated for the sampling of slag produced during the production of steel.
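A minimal version of the ratio-of-totals estimator with its first-order Taylor (linearization) variance, written for simple random sampling and omitting the finite-population correction; the paper's π-estimator versions are more general.

```python
import numpy as np

def ratio_estimate_with_taylor_var(y, x):
    """Estimate R = sum(y)/sum(x) and approximate its variance by
    linearizing around the means: Var(Rhat) ~ Var(y - R*x) / (n * xbar^2)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    R = y.sum() / x.sum()
    resid = y - R * x                      # linearized residuals
    var = resid.var(ddof=1) / (n * x.mean()**2)
    return R, var

rng = np.random.default_rng(4)
x = rng.uniform(1, 3, 50)                  # e.g. particle masses (hypothetical)
y = 0.3 * x + rng.normal(0, 0.05, 50)      # e.g. mass of a component of interest
print(ratio_estimate_with_taylor_var(y, x))
```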

9.
T2 charts are used to monitor a process when more than one quality variable associated with the process is being observed. Recent studies have shown that the T2 chart with variable sample size and sampling interval (VSSI) detects a small shift in the process mean vector faster than the traditional T2 chart. The paper considers an economic design of the VSSI T2 chart, in which the expected hourly loss is constructed and regarded as an objective function for optimally determining the design parameters (i.e. the maximum/minimum sample size, the longest/shortest sampling interval, and the warning/action limits) in sampling and charting. Furthermore, the effects of process parameters and cost parameters upon the expected hourly loss and design parameters are examined.
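The statistic such charts monitor is Hotelling's T2 for a subgroup against the in-control mean. A minimal sketch with a known covariance matrix and hypothetical data; the VSSI economic-design optimization itself is beyond a few lines.

```python
import numpy as np

def hotelling_t2(sample, mu0, sigma_inv):
    """Hotelling T^2 for a subgroup of size n against target mean mu0,
    using a known (in-control) covariance inverse."""
    xbar = sample.mean(axis=0)
    d = xbar - mu0
    return len(sample) * float(d @ sigma_inv @ d)

rng = np.random.default_rng(5)
mu0 = np.zeros(2)
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
subgroup = rng.multivariate_normal(mu0 + [0.5, 0.0], sigma, size=5)  # small shift
t2 = hotelling_t2(subgroup, mu0, np.linalg.inv(sigma))
print("T2 =", t2)   # signal if t2 exceeds the chart's action limit
```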

10.
Sequential estimation problems for the mean parameter of an exponential distribution have received much attention over the years. Purely sequential and accelerated sequential estimators and their asymptotic second-order characteristics have been laid out in the existing literature, both for minimum risk point estimation and for bounded-length confidence interval estimation of the mean parameter. Having obtained a data set from such sequentially designed experiments, the paper investigates estimation problems for the associated reliability function. Second-order approximations are provided for the bias and mean squared error of the proposed estimator of the reliability function, first under a general setup. An ad hoc bias-corrected version is also introduced. Then, the proposed estimator is investigated further under some specific sequential sampling strategies already available in the literature. In the end, simulation results are presented comparing the proposed estimators of the reliability function for moderate sample sizes and various sequential sampling strategies.
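For an exponential mean θ, the natural plug-in estimator of the reliability R(t) = exp(−t/θ) is exp(−t/x̄). The sketch below adds a delta-method bias correction of order 1/n; it is in the spirit of, but not necessarily identical to, the paper's second-order corrections.

```python
import numpy as np

def reliability_estimates(x, t):
    """Plug-in estimate of R(t) = exp(-t/theta), plus a first-order
    (delta-method) bias-corrected version: with g(theta) = exp(-t/theta),
    bias ~ Var(xbar)/2 * g''(theta) = (m^2/2n) * exp(-t/m)*(t^2/m^4 - 2t/m^3),
    evaluated at the sample mean m."""
    x = np.asarray(x, float)
    n, m = len(x), x.mean()
    r_hat = np.exp(-t / m)
    bias = (m**2 / (2 * n)) * np.exp(-t / m) * (t**2 / m**4 - 2 * t / m**3)
    return r_hat, r_hat - bias

rng = np.random.default_rng(6)
theta, t = 10.0, 5.0
x = rng.exponential(theta, size=30)
print("true R(t):", np.exp(-t / theta), " estimates:", reliability_estimates(x, t))
```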

11.
The purpose of this paper is to emphasize the importance of sampling and sample size considerations in all qualitative research. Such considerations would help qualitative researchers to select sample sizes and sampling designs that are most compatible with their research purposes. First, we discuss the importance of sampling in qualitative research. Next, we outline 24 designs for selecting a sample in qualitative research. We then discuss the importance of selecting a sample size that yields data that have a realistic chance of reaching data saturation, theoretical saturation, or informational redundancy. Based on the literature, we then provide sample size guidelines for several qualitative research designs. As such, we provide a framework for making sampling and sample size considerations in interpretive research. An earlier version of this article received the 2004 Southwest Educational Research Association (SERA) Outstanding Paper Award.

12.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, with employment of a sampling step that selects geographically indexed clusters of units. A sampling-weighted pseudo-posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co-clustered units inflates the scale of the resulting pseudo-posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions are different between each of the pseudo-maximum likelihood estimate (MLE), the pseudo-posterior and the MLE under simple random sampling. Through insights from survey-sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast post-processing step to Markov chain Monte Carlo draws of the pseudo-posterior distribution. This adjustment projects the pseudo-posterior covariance matrix such that the nominal coverage is approximately achieved. We make an application to the National Survey on Drug Use and Health as a motivating example, and we demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
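The shape of such a post-processing step can be sketched generically: center the MCMC draws and apply a linear map so their covariance matches a target (e.g. a design-consistent sandwich) matrix. The target matrix below is hypothetical and assumed given; this is a generic projection, not the authors' exact construction.

```python
import numpy as np

def project_draws(draws, target_cov):
    """Rescale centered posterior draws with a matrix M chosen so that the
    projected draws have covariance target_cov: M = L_emp^{-T} L_tgt^{T},
    where V_emp = L_emp L_emp' (empirical) and target_cov = L_tgt L_tgt'."""
    mean = draws.mean(axis=0)
    centered = draws - mean
    L_emp = np.linalg.cholesky(np.cov(centered, rowvar=False))
    L_tgt = np.linalg.cholesky(target_cov)
    M = np.linalg.solve(L_emp.T, L_tgt.T)     # L_emp^{-T} @ L_tgt^{T}
    return mean + centered @ M

rng = np.random.default_rng(7)
draws = rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], size=5000)
target = np.array([[2.0, 0.4], [0.4, 1.5]])   # hypothetical sandwich covariance
adj = project_draws(draws, target)
print(np.cov(adj, rowvar=False))              # ~ target
```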

13.
In this paper, we study the degree of business cycle synchronization by means of a small-sample version of Harding and Pagan's [Journal of Econometrics (2006) Vol. 132, pp. 59–79] Generalized Method of Moments test. We show that the asymptotic version of the test gets increasingly distorted in small samples when the number of countries grows large. However, a block-bootstrapped version of the test can remedy the size distortion when the time series length divided by the number of countries, T/n, is sufficiently large. Applying the technique to a number of business cycle proxies of developed economies, we are unable to reject the null hypothesis of a non-zero common multivariate synchronization index for certain economically meaningful subsets of these countries.
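The resampling engine behind such a small-sample correction is typically a moving-block bootstrap, which concatenates randomly chosen overlapping blocks to preserve short-run dependence. A generic sketch with a hypothetical block length and data; the GMM synchronization statistic itself is not shown.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a (T x n) multivariate series by concatenating randomly
    chosen overlapping blocks of length block_len, preserving dependence
    within blocks and the contemporaneous structure across the n series."""
    T = x.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks, axis=0)[:T]

rng = np.random.default_rng(8)
cycles = rng.normal(size=(120, 5))   # e.g. 5 countries' business cycle proxies
resample = moving_block_bootstrap(cycles, block_len=10, rng=rng)
print(resample.shape)
```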

14.
Coordination of probabilistic samples is a challenging theoretical problem faced by statistical institutes. One of their aims is to obtain good estimates for each wave while spreading the response burden across the entire population. There is a collection of existing solutions that try to address these needs. These solutions, which were developed independently, are integrated into a general framework, and their corresponding longitudinal designs are computed. The properties of these longitudinal designs are discussed. It is also noted that there is an antagonism between a good rotation and control over the cross-sectional sampling design. A compromise needs to be reached between the quality of the sample coordination, which appears to be optimal for a systematic longitudinal sampling design, and the freedom of choice of the cross-sectional design. In order to reach such a compromise, an algorithm that uses a new method of longitudinal sampling is proposed.
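A classical coordination device of the kind such frameworks integrate is permanent random numbers (PRN): each unit keeps a fixed uniform number, and each wave selects units by shifted PRN order. The sketch below illustrates PRN rotation only; it is not the paper's new longitudinal sampling method.

```python
import numpy as np

def prn_sample(prn, n, shift):
    """Coordination with permanent random numbers: each unit keeps a fixed
    uniform PRN; a wave selects the n units with the smallest shifted PRNs
    (u + shift) mod 1. Nearby shifts give high overlap between waves
    (positive coordination); larger shifts rotate units out of the sample."""
    return np.argsort((prn + shift) % 1.0)[:n]

rng = np.random.default_rng(9)
prn = rng.random(1000)                         # permanent random numbers
wave1 = set(prn_sample(prn, 100, shift=0.00))
wave2 = set(prn_sample(prn, 100, shift=0.05))  # rotated by 5% of the population
print("overlap between waves:", len(wave1 & wave2))
```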

15.
In this paper, an alternative sampling procedure that is a mixture of simple random sampling and systematic sampling is proposed. It results in uniform inclusion probabilities for all individual units and positive inclusion probabilities for all pairs of units. As a result, the proposed sampling procedure enables us to estimate the population mean unbiasedly using the ordinary sample mean, and to provide an unbiased estimator of its sampling variance. It is also found that the suggested sampling procedure performs well especially when the size of the simple random sample is small.
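A bare-bones version of such a mixture design: with some probability draw a simple random sample, otherwise a systematic sample with a random start. Both components give each unit inclusion probability n/N, so the mixture does too; the paper's specific mixture is chosen so that all pairwise inclusion probabilities are also positive. Here N is assumed to be an integer multiple of n.

```python
import numpy as np

def mixed_design_sample(N, n, p_srs, rng):
    """Mixture of simple random sampling and systematic sampling. Mixing in
    the SRS component gives positive probability to pairs of units that
    systematic sampling alone can never select together."""
    if rng.random() < p_srs:
        return np.sort(rng.choice(N, size=n, replace=False))   # SRS component
    k = N // n                                                 # assumes N = n*k
    start = rng.integers(k)                                    # random start
    return np.arange(start, N, k)[:n]                          # systematic component

rng = np.random.default_rng(10)
print(mixed_design_sample(N=100, n=10, p_srs=0.5, rng=rng))
```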

16.
Marshall-Olkin bivariate semi-Pareto distribution (MO-BSP) and Marshall-Olkin bivariate Pareto distribution (MO-BP) are introduced and studied. AR(1) and AR(k) time series models are developed with minification structure having MO-BSP stationary marginal distribution. Various characterizations are investigated. Acknowledgements. The authors thank the Editor and the referee for their valuable suggestions which led to an improved version of the original paper. The first author is grateful to the University Grants Commission of India for support under the Teacher Fellowship Scheme.

17.
Xue Baodong and Liang Wenbin, Value Engineering (2011) 30(15): 41
This paper compares assay results from manual sampling and mechanical sampling of coal arriving at a power plant. A series of tests demonstrates that manual sampling can yield representative samples. The key technical points of manual sampling are summarized.

18.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one-phase sampling, estimation for two-phase sampling, and estimation for two-stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two-stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
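A single-step example of the machinery: linear (chi-square distance) calibration, where design weights d are adjusted to w = d(1 + x'λ) so that the weighted auxiliary totals reproduce known population totals. The data and totals below are hypothetical; multi-step calibration on complex information repeats such steps with different auxiliary components.

```python
import numpy as np

def linear_calibration_weights(d, X, totals):
    """Chi-square-distance calibration: w_i = d_i * (1 + x_i' lam), with lam
    solving the calibration equations sum_i w_i x_i = totals. This linear
    method has a closed form."""
    d, X = np.asarray(d, float), np.asarray(X, float)
    A = (X * d[:, None]).T @ X                 # sum_i d_i x_i x_i'
    lam = np.linalg.solve(A, totals - d @ X)   # calibration equations
    return d * (1 + X @ lam)

rng = np.random.default_rng(11)
n = 200
X = np.c_[np.ones(n), rng.uniform(20, 60, n)]  # intercept + one auxiliary variable
d = np.full(n, 50.0)                           # design weights (N = 10,000)
totals = np.array([10_000.0, 400_000.0])       # known population totals of X
w = linear_calibration_weights(d, X, totals)
print(w @ X)                                   # reproduces the auxiliary totals
```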

19.
R. Göb, Metrika (1992) 39(1): 139–153
Investigations of acceptance sampling have paid rather little attention to defects inspection. As to sampling models and the corresponding operating characteristic (OC) function of single defects sampling plans, generally only type B (the process model) has been considered: sampling from a production process where the OC is conceived as a function of the sample size n, the acceptance number c, and the process average number of defects per item λ. For modern quality control, both the steadily increasing complexity of products and the need for differentiated cost calculation create a clear demand for defects sampling in its practically most relevant form: lot-by-lot single sampling plans, where the OC (type A, the lot OC) is considered as a function of the lot size N, the sample size n, the acceptance number c, and the number of defects in the lot K. The present paper develops two lot OC functions from suitable assumptions on the arrangement of the total number K of defects over the N elements of the lot. Limiting theorems on these OC functions are used to justify, to some extent, the customary assumption that the Poisson process OC can serve as an approximation for type A. The dependence of the OC functions on the sample size n, the acceptance number c, and the total number of defects in the lot K is described by simple formulae.
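One concrete reading of the two OC types: for type A, assume each of the lot's K defects falls in the sample independently with probability n/N, so the sample defect count is binomial; for type B, defects per item follow a Poisson process with mean λ. The sketch compares the two, illustrating the Poisson approximation; the paper's arrangement assumptions may differ from this simple one.

```python
from scipy.stats import binom, poisson

def oc_type_a(N, n, c, K):
    """Lot OC under the assumption that each of the K defects in the lot
    lands in the sample independently with probability n/N: accept when
    the sample shows at most c defects."""
    return binom.cdf(c, K, n / N)

def oc_type_b(n, c, lam):
    """Process OC: defects per item follow a Poisson process with mean lam,
    so the sample's defect count is Poisson(n * lam)."""
    return poisson.cdf(c, n * lam)

N, n, c, K = 1000, 80, 2, 25
print("type A:", oc_type_a(N, n, c, K))
print("type B (lam = K/N):", oc_type_b(n, c, K / N))   # close for large N
```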
