Similar Articles
Retrieved 20 similar articles (search time: 31 ms)
1.
Non-response is a common source of error in many surveys. Because surveys are often costly instruments, quality-cost trade-offs play a continuing role in their design and analysis. Advances in telephony, computing, and the Internet have had, and continue to have, considerable impact on survey design. Recently, a strong focus on methods for monitoring and tailoring survey data collection has emerged as a new paradigm for efficiently reducing non-response error; paradata and adaptive survey designs are key words in these developments. Prerequisites to evaluating, comparing, monitoring, and improving the quality of survey response are a conceptual framework for representative survey response, indicators to measure deviations from it, and indicators to identify subpopulations that need increased effort. In this paper, we present an overview of representativeness indicators, or R-indicators, that are fit for these purposes. We give several examples and provide guidelines for their use in practice.
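As a rough illustration of the basic R-indicator discussed above, the sketch below fits response propensities with a logistic regression on auxiliary frame variables and computes R = 1 - 2*S(rho_hat). The synthetic data, variable names, and the unweighted standard deviation are illustrative assumptions, not the paper's implementation; indicators for subpopulations (partial indicators) are not shown.

```python
# A minimal sketch (not the authors' code) of the basic R-indicator
# R(rho) = 1 - 2 * S(rho), where S(rho) is the standard deviation of
# estimated response propensities. All variable names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
# Auxiliary frame variables assumed known for respondents and non-respondents
age = rng.uniform(18, 80, n)
urban = rng.integers(0, 2, n)
X = np.column_stack([age, urban])

# Simulated response indicator: older, non-urban units respond more often
true_propensity = 1 / (1 + np.exp(-(-1.0 + 0.03 * age - 0.5 * urban)))
responded = rng.binomial(1, true_propensity)

# Estimate response propensities from the auxiliary variables
model = LogisticRegression().fit(X, responded)
rho_hat = model.predict_proba(X)[:, 1]

# Basic (unweighted) R-indicator; values near 1 indicate representative response
S = rho_hat.std(ddof=1)
R = 1 - 2 * S
print(f"Mean propensity: {rho_hat.mean():.3f}, S(rho): {S:.3f}, R-indicator: {R:.3f}")
```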

2.
The growth of non‐response rates for social science surveys has led to increased concern about the risk of non‐response bias. Unfortunately, the non‐response rate is a poor indicator of when non‐response bias is likely to occur. We consider in this paper a set of alternative indicators. A large‐scale simulation study is used to explore how each of these indicators performs in a variety of circumstances. Although, as expected, none of the indicators fully depict the impact of non‐response in survey estimates, we discuss how they can be used when creating a plausible account of the risks for non‐response bias for a survey. We also describe an interesting characteristic of the fraction of missing information that may be helpful in diagnosing not‐missing‐at‐random mechanisms in certain situations.  相似文献   
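To make the "fraction of missing information" concrete, here is a hedged numerical sketch that estimates the FMI for a simple mean from multiple imputations using Rubin's combining rules. The bootstrap-based imputation model, the MCAR missingness, and all parameter values are illustrative assumptions; the paper's indicators are computed in a far richer simulation setting.

```python
# Illustrative (not from the paper) fraction of missing information (FMI)
# for a mean, from m multiple imputations combined with Rubin's rules.
import numpy as np

rng = np.random.default_rng(1)
n, m = 2_000, 50
y = rng.normal(10, 2, n)
observed = rng.random(n) > 0.3          # ~30% missing completely at random
y_obs = y[observed]

est, var = [], []
for _ in range(m):
    # Approximately proper imputation: bootstrap the observed values,
    # then draw the missing values from the implied normal model
    boot = rng.choice(y_obs, y_obs.size, replace=True)
    mu, sigma = boot.mean(), boot.std(ddof=1)
    y_imp = y.copy()
    y_imp[~observed] = rng.normal(mu, sigma, (~observed).sum())
    est.append(y_imp.mean())
    var.append(y_imp.var(ddof=1) / n)

est, var = np.array(est), np.array(var)
W = var.mean()                           # within-imputation variance
B = est.var(ddof=1)                      # between-imputation variance
T = W + (1 + 1 / m) * B                  # total variance
fmi = (1 + 1 / m) * B / T                # simple large-sample version
print(f"FMI for the mean: {fmi:.3f}")
```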

3.
4.
In this paper, we consider the use of auxiliary data and paradata for dealing with non-response and measurement errors in household surveys. Three over-arching purposes are distinguished: response enhancement, statistical adjustment, and bias exploration. Attention is given to the varying focus at the different phases of statistical production, from collection and processing to analysis, and to how to select and utilize the useful auxiliary data and paradata. Administrative register data provide the richest source of relevant auxiliary information, in addition to data collected in previous surveys and censuses. Because of their importance in dealing effectively with non-sampling errors, one should make every effort to increase their availability in the statistical system and, at the same time, to develop efficient statistical methods that capitalize on the combined data sources.

5.
We study parametric and non-parametric approaches for assessing the accuracy and coverage of a population census based on dual system surveys. The two parametric approaches considered are post-stratification and logistic regression, which have been or will be implemented for the US Census dual system surveys. We show that the parametric model-based approaches are generally biased unless the model is correctly specified. We then study a local post-stratification approach based on a non-parametric kernel estimate of the Census enumeration functions. We illustrate that the non-parametric approach avoids the risk of model mis-specification and is consistent under relatively weak conditions. The performance of these estimators is evaluated numerically via simulation studies and an empirical analysis based on the 2000 US Census post-enumeration survey data.
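For readers unfamiliar with dual-system estimation, the sketch below computes a post-stratified Lincoln-Petersen estimate on synthetic data. The strata, coverage probabilities, and the independence assumption between the two systems are illustrative; none of it is taken from the US Census application.

```python
# A hedged sketch of a post-stratified dual-system (Lincoln-Petersen) estimator.
# Stratum labels and probabilities are illustrative, not Census post-strata.
import numpy as np

rng = np.random.default_rng(2)
strata = ["owner", "renter"]
true_N = {"owner": 60_000, "renter": 40_000}
p_census = {"owner": 0.97, "renter": 0.90}   # enumeration probabilities
p_survey = {"owner": 0.95, "renter": 0.88}   # post-enumeration survey coverage

N_hat = 0.0
for h in strata:
    N = true_N[h]
    in_census = rng.random(N) < p_census[h]
    in_survey = rng.random(N) < p_survey[h]   # independence assumed
    n1 = in_census.sum()                      # census count in stratum h
    n2 = in_survey.sum()                      # survey count in stratum h
    m = (in_census & in_survey).sum()         # matched in both systems
    N_hat += n1 * n2 / m                      # stratum dual-system estimate

print(f"True N: {sum(true_N.values())}, dual-system estimate: {N_hat:,.0f}")
```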

6.
We study the generalized bootstrap technique under general sampling designs. We focus mainly on bootstrap variance estimation but we also investigate the empirical properties of bootstrap confidence intervals obtained using the percentile method. Generalized bootstrap consists of randomly generating bootstrap weights so that the first two (or more) design moments of the sampling error are tracked by the corresponding bootstrap moments. Most bootstrap methods in the literature can be viewed as special cases. We discuss issues such as the choice of the distribution used to generate bootstrap weights, the choice of the number of bootstrap replicates, and the potential occurrence of negative bootstrap weights. We first describe the generalized bootstrap for the linear Horvitz-Thompson estimator and then consider non-linear estimators such as those defined through estimating equations. We also develop two ways of bootstrapping the generalized regression estimator of a population total. We study in greater depth the case of Poisson sampling, which is often used to select samples in Price Index surveys conducted by national statistical agencies around the world. For Poisson sampling, we consider a pseudo-population approach and show that the resulting bootstrap weights capture the first three design moments of the sampling error. A simulation study and an example with real survey data are used to illustrate the theory.
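A minimal sketch of the generalized-bootstrap idea for the Horvitz-Thompson total under Poisson sampling: bootstrap weights w_i(1 + a_i) are generated with E[a_i] = 0 and Var(a_i) = 1 - pi_i, so the bootstrap tracks the first two design moments. The normal adjustment distribution and the synthetic population are our assumptions; the paper's pseudo-population approach, which also matches the third moment, is not reproduced here.

```python
# Illustrative generalized bootstrap variance estimation for the
# Horvitz-Thompson total under Poisson sampling (two design moments only).
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
y_pop = rng.gamma(2.0, 50.0, N)
pi = np.clip(0.02 + 0.0004 * y_pop, 0.01, 0.9)    # size-related inclusion probs

sampled = rng.random(N) < pi                      # Poisson sampling
y, p = y_pop[sampled], pi[sampled]
w = 1 / p
t_ht = np.sum(w * y)                              # Horvitz-Thompson total

B = 2_000
# Adjustments a_i with mean 0 and variance (1 - pi_i); negative weights can occur
a = rng.normal(0.0, np.sqrt(1 - p), size=(B, p.size))
t_boot = ((w * (1 + a)) * y).sum(axis=1)          # bootstrap replicate totals
v_boot = t_boot.var(ddof=1)

v_ht = np.sum((1 - p) * (y / p) ** 2)             # standard variance estimator (Poisson)
print(f"HT total: {t_ht:.1f}; bootstrap variance: {v_boot:.3e}; analytic: {v_ht:.3e}")
```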

7.
We review three alternative approaches to modelling survey non-contact and refusal: multinomial, sequential, and sample selection (bivariate probit) models. We then propose a multilevel extension of the sample selection model to allow for both interviewer effects and dependency between non-contact and refusal rates at the household and interviewer level. All methods are applied and compared in an analysis of household non-response in the United Kingdom, using a data set with unusually rich information on both respondents and non-respondents from six major surveys. After controlling for household characteristics, there is little evidence of residual correlation between the unobserved characteristics affecting non-contact and refusal propensities at either the household or the interviewer level. We also find that the estimated coefficients of the multinomial and sequential models are surprisingly similar, which further investigation via a simulation study suggests is due to non-contact and refusal having largely different predictors.
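As a simple illustration of the first (multinomial) approach only, the sketch below fits a multinomial logit to a synthetic outcome coded response / non-contact / refusal using statsmodels. The covariates and coefficients are invented, and the multilevel sample selection extension is not shown.

```python
# A minimal sketch (synthetic data, not the UK data set) of the multinomial
# approach: model the outcome {response, non-contact, refusal} with a
# multinomial logit on household characteristics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5_000
has_children = rng.integers(0, 2, n)
urban = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([has_children, urban]))

# Simulated outcome: 0 = response, 1 = non-contact, 2 = refusal,
# with largely different predictors for the two non-response types
eta_nc = -2.0 + 1.2 * urban
eta_rf = -1.5 + 0.8 * has_children
denom = 1 + np.exp(eta_nc) + np.exp(eta_rf)
probs = np.column_stack([1 / denom, np.exp(eta_nc) / denom, np.exp(eta_rf) / denom])
outcome = np.array([rng.choice(3, p=p) for p in probs])

fit = sm.MNLogit(outcome, X).fit(disp=False)
print(fit.summary())
```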

8.
We randomly assigned eight different consumption survey designs to obtain evidence on the nature of measurement errors in estimates of household consumption. Regressions using data from the more error-prone designs are compared with results from a 'gold standard' survey. Measurement errors appear to have a mean-reverting negative correlation with true consumption, especially for food and especially for rural households.

9.
This paper compares the responses of consumers who submitted answers to a survey instrument focusing on Internet purchasing patterns, either electronically or using traditional paper response methods. We present the results of a controlled experiment within a larger data collection effort. The same survey instrument was completed by 416 Internet customers of a major office supplies company, with approximately 60% receiving the survey in paper form and 40% receiving the electronic version. In order to evaluate the efficacy of electronic surveys relative to traditional printed surveys, we conduct two levels of analysis. At the macro level, we compare the two groups for similarity in terms of fairly aggregate, coarse data characteristics such as response rates, proportion of missing data, scale means, and inter-item reliability. At the finer-grained micro level, we compare the two groups on aspects of data integrity such as the presence of data runs and measurement errors. This deeper, finer-grained analysis allows an examination of the potential benefits and flaws of electronic data collection. Our findings suggest that electronic surveys are generally comparable to print surveys in most respects, but that there are a few key advantages and challenges that researchers should evaluate. Notably, our sample indicates that electronic surveys have fewer missing responses and can be coded and presented in a more flexible manner (namely, contingent coding, with different respondents receiving different questions depending on their responses to earlier questions) that offers researchers new capabilities.

10.
The ability to design experiments in an appropriate and efficient way is an important skill, but students typically have little opportunity to gain that experience. Most textbooks introduce standard general-purpose designs and then proceed with the analysis of data already collected. In this paper we explore a tool for gaining design experience: computer-based virtual experiments. These are software environments that mimic a real situation of interest and invite the user to collect data to answer a research question. Two prototype environments are described: the first is suitable for a course that deals with screening or response-surface designs, and the second allows experimentation with block and row-column designs. They are part of a collection we developed, called ENV2EXP, which can be used freely over the web. We also describe our experience in using them in several courses over the last few years.

11.
Without accounting for sensitive items in sample surveys, sampled units may not respond (nonignorable nonresponse) or may respond untruthfully. There are several survey designs that address this problem, and we review some of them. In our study, we have binary data from clusters within small areas, obtained from a version of the unrelated-question design, and the sensitive proportion is of interest for each area. A hierarchical Bayesian model is used to capture the variation in the observed binomial counts from the clusters within the small areas and to estimate the sensitive proportions for all areas. Both our example on college cheating and a simulation study show significant reductions in the posterior standard deviations of the sensitive proportions under the small-area model as compared with an analogous individual-area model. The simulation study also demonstrates that the estimates under the small-area model are closer to the truth than the corresponding estimates under the individual-area model. Finally, for small areas, we discuss many extensions to accommodate covariates, finite population sampling, multiple sensitive items, and optional designs.
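For intuition, here is a hedged sketch of the basic single-area moment estimator under an unrelated-question design. The design probability p, the unrelated-question prevalence, and the true sensitive proportion are all illustrative assumptions; the paper's hierarchical Bayesian small-area model is not reproduced.

```python
# Basic unrelated-question estimator for one area (illustrative only).
# With probability p the respondent answers the sensitive question,
# otherwise an innocuous question with known prevalence pi_u.
import numpy as np

rng = np.random.default_rng(4)
n = 1_500
p, pi_u = 0.7, 0.5          # design probability and unrelated-question prevalence
pi_s_true = 0.15            # true sensitive proportion (e.g. cheating)

gets_sensitive = rng.random(n) < p
answers = np.where(gets_sensitive,
                   rng.random(n) < pi_s_true,   # truthful sensitive answer
                   rng.random(n) < pi_u)        # unrelated-question answer

lam_hat = answers.mean()                        # observed "yes" proportion
pi_s_hat = (lam_hat - (1 - p) * pi_u) / p       # moment estimator
se = np.sqrt(lam_hat * (1 - lam_hat) / n) / p   # delta-method standard error
print(f"Estimated sensitive proportion: {pi_s_hat:.3f} (SE {se:.3f})")
```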

12.
Applied microeconomic researchers are beginning to use long-term retrospective survey data in settings where conventional longitudinal survey data are unavailable. However, inaccurate long-term recall could induce non-classical measurement error, for which conventional statistical corrections are less effective. In this article, we use the unique Panel Study of Income Dynamics Validation Study to assess the accuracy of long-term retrospective recall data. We find under-reporting of transitory variation, which creates a non-classical measurement error problem.

13.
When sensitive issues are surveyed, collecting truthful data and obtaining reliable estimates of population parameters is a persistent problem in many fields of applied research, notably in sociological, economic, demographic, ecological, and medical studies. In this context, and starting from the so-called negative survey, we consider the problem of estimating the proportion of population units belonging to the categories of a sensitive variable when the collected data are affected by measurement errors produced by untruthful responses. An extension of the negative survey approach is proposed here that allows respondents to release a true response. The proposal rests on modelling the released data as a mixture of truthful and untruthful responses, which allows researchers to estimate the category proportions as well as the probability of receiving a true response via the EM algorithm. We describe the estimation procedure and carry out a simulation study to assess the performance of the EM estimates against benchmark values and against the estimates obtained under the traditional data-collection approach based on direct questioning, which ignores the presence of misreporting due to untruthful responding. The simulation findings provide evidence on the accuracy of the estimates and illustrate the improvements that our approach can produce in public surveys, particularly in election opinion polls, when the hidden-vote problem is present.
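The sketch below conveys the flavour of the EM step for a mixture of truthful and negative-survey responses, with the truthful-response probability t treated as known for simplicity (the paper also estimates it). The uniform untruthful-reporting mechanism and all parameter values are illustrative assumptions, not the authors' model.

```python
# Illustrative EM for recovering category proportions from a mixture of
# truthful reports and negative-survey (report a *different* category) reports.
import numpy as np

rng = np.random.default_rng(9)
k, n, t = 4, 10_000, 0.6
pi_true = np.array([0.40, 0.30, 0.20, 0.10])

# Generate released responses: truthful with probability t, otherwise a
# uniformly chosen different category
true_cat = rng.choice(k, n, p=pi_true)
untruthful = rng.random(n) > t
offset = rng.integers(1, k, n)                       # shift to another category
released = np.where(untruthful, (true_cat + offset) % k, true_cat)
counts = np.bincount(released, minlength=k)

pi = np.full(k, 1 / k)                               # EM starting value
for _ in range(500):
    # E-step: joint[i, j] = P(true = i, released = j) under current pi
    joint = (1 - t) / (k - 1) * np.outer(pi, np.ones(k))
    np.fill_diagonal(joint, t * pi)
    post = joint / joint.sum(axis=0, keepdims=True)  # P(true = i | released = j)
    # M-step: expected proportion of true categories
    pi = post @ counts / n

print("true:", pi_true, "\nEM  :", np.round(pi, 3))
```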

14.
Social and economic studies are often implemented as complex survey designs. For example, the multistage, unequal-probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain-level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units, for example through a sampling step that selects geographically indexed clusters of units. A sampling-weighted pseudo-posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co-clustered units inflates the scale of the resulting pseudo-posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions differ among the pseudo-maximum likelihood estimate (MLE), the pseudo-posterior, and the MLE under simple random sampling. Drawing on insights from survey-sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast post-processing step to Markov chain Monte Carlo draws from the pseudo-posterior distribution. This adjustment projects the pseudo-posterior covariance matrix such that the nominal coverage is approximately achieved. We use the National Survey on Drug Use and Health as a motivating application and demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.
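A hedged numerical sketch of the post-processing idea: centre the pseudo-posterior draws and project them so that their covariance matches a design-consistent target covariance. Here V_design is simply assumed to be given (for example, from replication variance estimation); the paper derives its correction from the sandwich form of the asymptotic covariance, which is not reproduced.

```python
# Illustrative projection of pseudo-posterior MCMC draws so that their
# covariance matches a given design-based covariance estimate V_design.
import numpy as np

rng = np.random.default_rng(5)
S, d = 4_000, 2
mean = np.array([1.0, -0.5])
V_pseudo = np.array([[0.04, 0.01], [0.01, 0.09]])        # too narrow under clustering
draws = rng.multivariate_normal(mean, V_pseudo, size=S)  # stand-in for MCMC draws

V_design = np.array([[0.10, 0.03], [0.03, 0.20]])        # assumed design-based target

L1 = np.linalg.cholesky(np.cov(draws, rowvar=False))
L2 = np.linalg.cholesky(V_design)
A = np.linalg.solve(L1.T, L2.T)                          # A = L1^{-T} L2^{T}

center = draws.mean(axis=0)
draws_adj = center + (draws - center) @ A                # projected draws
print(np.cov(draws_adj, rowvar=False))                   # ~ V_design
```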

15.
We consider the possibility that demographic variables are measured with error, which arises because household surveys measure demographic structures at a point in time, whereas household composition evolves throughout the survey period. We construct and estimate sharp bounds on household size and find that the degree of these measurement errors is non-trivial. These errors have the potential to resolve the Deaton-Paxson paradox, but fail to do so.

16.
In this paper, we present a practical methodology for variance estimation for multi-dimensional measures of poverty and deprivation of households and individuals, derived from sample surveys with complex designs and fairly large sample sizes. The measures considered are based on a fuzzy representation of individuals' propensity to deprivation in monetary and diverse non-monetary dimensions. We believe this to be the first original contribution on estimating standard errors for such fuzzy poverty measures. Our second objective is to describe and numerically illustrate the computational procedures and difficulties involved in producing reliable and robust estimates of sampling error for such complex statistics. We attempt to identify some of these problems and provide solutions in the context of actual situations. A detailed application based on European Union Statistics on Income and Living Conditions data for 19 NUTS2 regions in Spain is provided.

17.
Many developments have occurred in the practice of survey sampling and survey methodology in the past 60 years or so. These developments have been partly driven by the emergence of computers and the continuous growth in computer power over the years and partly by the increasingly sophisticated demands from the users of survey data. The paper reviews these developments with a main emphasis on survey sampling issues for the design and analysis of social surveys. Design-based inference based on probability samples was the predominant approach in the early years, but over time, that predominance has been eroded by the need to employ model-dependent methods to deal with missing data and to satisfy analysts' demands for survey estimates that cannot be met with design-based methods. With the continuous decline in response rates that has occurred in recent years, much current research has focused on the use of non-probability samples and data collected from administrative records and web surveys.

18.
In order to increase data quality, some household surveys visit respondent households several times to estimate a single measure of consumption. For example, in Ghanaian Living Standards Measurement surveys, households are visited up to 10 times over a period of one month. I find strong evidence for conditioning effects as a result of this approach: in the Ghanaian data the estimated level of consumption is a function of the number of prior visits, with consumption being highest in the earlier survey visits. Telescoping (perceiving events as being more recent than they are) or seasonality (first-of-the-month effects) cannot explain the observed pattern. To study whether earlier or later survey visits are of higher quality, I employ a strategy based on Benford's law. The results suggest that the consumption data from earlier survey visits are of higher quality than data from later visits. The findings have implications for the value of additional visits in household surveys, and they also shed light on possible measurement problems in high-frequency panels. They add to a recent literature on measurement errors in consumption surveys (Beegle et al., 2012; Gibson et al., 2015) and complement findings by Zwane et al. (2011) regarding the effect of surveys on subsequent behaviour.
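A generic first-digit (Benford) check of the kind alluded to above can be sketched as follows. The synthetic "early" and "late" series are illustrative, and the statistic is a plain chi-square distance from Benford's distribution, not necessarily the author's exact procedure.

```python
# Illustrative Benford-type data-quality check: compare the first-digit
# distribution of reported amounts with Benford's law via a chi-square statistic.
import numpy as np

def first_digit(x):
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    return (x / 10 ** np.floor(np.log10(x))).astype(int)

def benford_chi2(values):
    digits = first_digit(values)
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10)) * digits.size
    return ((observed - expected) ** 2 / expected).sum()

rng = np.random.default_rng(6)
early_visits = rng.lognormal(4.0, 1.0, 5_000)            # roughly Benford-like amounts
late_visits = np.round(rng.uniform(10, 100, 5_000), 0)   # digit pattern far from Benford
print("chi2 early:", round(benford_chi2(early_visits), 1))
print("chi2 late :", round(benford_chi2(late_visits), 1))
```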

19.
The World Wide Web (WWW) is increasingly being used as a tool and platform for survey research. Two types of electronic or online surveys available for data collection, the email survey and the Web-based survey, constitute the focus of this paper. We address a multitude of issues researchers should consider before and during the use of this method of data collection: the advantages and liabilities of this form of survey research, sampling problems, questionnaire design considerations, suggestions for approaching potential respondents, response rates, and aspects of data processing. Where relevant, the methodological issues involved are illustrated with examples from our own research practice. This methods review shows that most challenges are resolved by taking into account the principles that guide the conduct of conventional surveys.

20.
With cointegration tests often being oversized under time-varying error variance, it is possible, if not likely, to confuse error-variance non-stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual-unit test statistics for no cointegration that are robust to variance non-stationarity. The sign of a fitted departure from long-run equilibrium is used as an instrument when estimating an error-correction model. The resulting IV-based test is shown to follow a chi-square limiting null distribution irrespective of the variance pattern of the data-generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T-neighbourhoods of the null. The standard limiting null distribution motivates using the single-unit tests in a multiple-testing approach for cointegration in multi-country data sets, combining P-values from individual units. Simulations suggest good performance of the single-unit and multiple-testing procedures under various plausible designs of cross-sectional correlation and cross-unit cointegration in the data. An application to the equilibrium relationship between short- and long-term interest rates illustrates the dramatic differences between the results of robust and non-robust tests.
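Our reading of the abstract suggests a test along the following lines: estimate the long-run regression, use the sign of the lagged equilibrium error as an instrument for the error-correction coefficient, and form a Wald statistic compared against chi-square(1). The sketch omits lag augmentation and deterministic terms and uses synthetic data, so it is an assumption-laden illustration rather than the author's procedure.

```python
# Illustrative IV-type statistic for no cointegration in a single unit:
# instrument the lagged equilibrium error with its sign in a bare-bones ECM.
import numpy as np

rng = np.random.default_rng(7)
T = 400
x = np.cumsum(rng.normal(size=T))                   # I(1) regressor
y = 1.0 + 0.8 * x + np.cumsum(rng.normal(size=T))   # no cointegration under H0

# Long-run regression and fitted departures from equilibrium
X = np.column_stack([np.ones(T), x])
u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

dy = np.diff(y)                                     # Delta y_t
u_lag = u[:-1]                                      # u_{t-1}
z = np.sign(u_lag)                                  # instrument

alpha_iv = (z @ dy) / (z @ u_lag)                   # just-identified IV estimate
resid = dy - alpha_iv * u_lag
var_iv = (z ** 2 * resid ** 2).sum() / (z @ u_lag) ** 2   # robust variance
wald = alpha_iv ** 2 / var_iv                       # compare with chi2(1)
print(f"alpha_IV = {alpha_iv:.3f}, Wald = {wald:.2f}")
```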
