Similar Documents
19 similar documents found (search took 750 ms)
1.
Evaluating the completeness of census counts has become an integral part of the quinquennial or decennial population census. The evaluation usually takes the form of a post-enumeration quality survey built on the dual-system estimator. Taking account of enumeration quality and population movement, the three population registration lists yield seven population-level indicators. Because the quality survey is carried out by sampling, estimators of these indicators must be constructed from the sample. The dual-system estimator formed under a complex sampling design has no ready-made variance formula, so its variance is usually approximated with a stratified jackknife variance estimator, which requires computing replicate weights for every first-stage sampling unit. This paper gives a combined theoretical and practical account of each step in constructing the dual-system estimator, probes the deeper theoretical issues involved as a contribution to basic theory, and also discusses the use of synthetic estimators based on the dual-system estimator for estimating regional population counts.
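The core construction in this abstract, a dual-system estimate with a jackknife for its variance, can be sketched minimally in Python. This is an illustration under simplifying assumptions, not the paper's full complex-sampling construction: `psus` is invented PSU-level data, and the jackknife shown is the plain delete-one form rather than the stratified replicate-weight version the abstract describes.

```python
def dual_system_estimate(n_census, n_survey, n_matched):
    """Lincoln-Petersen style dual-system estimate of true population size:
    N_hat = (census count * survey count) / matched count."""
    if n_matched == 0:
        raise ValueError("no matched records; estimator is undefined")
    return n_census * n_survey / n_matched

def jackknife_variance(psu_counts):
    """Delete-one jackknife variance of the dual-system estimate, treating
    each (census, survey, matched) tuple as one first-stage sampling unit."""
    k = len(psu_counts)
    totals = [sum(col) for col in zip(*psu_counts)]
    replicates = []
    for unit in psu_counts:
        reduced = [t - u for t, u in zip(totals, unit)]  # drop one PSU
        replicates.append(dual_system_estimate(*reduced))
    mean_rep = sum(replicates) / k
    return (k - 1) / k * sum((r - mean_rep) ** 2 for r in replicates)

# Hypothetical PSU-level counts: (census, post-enumeration survey, matched)
psus = [(900, 870, 850), (1100, 1060, 1030), (950, 930, 900)]
totals = [sum(col) for col in zip(*psus)]
n_hat = dual_system_estimate(*totals)   # point estimate of population size
var_hat = jackknife_variance(psus)      # jackknife variance approximation
```

The replicate-weight machinery of a real post-enumeration survey adds stratification and weighting on top of this skeleton, but the delete-one-recompute pattern is the same.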

2.
The dual-system estimator is currently the mainstream method for estimating a target population's true size and the net error of a census. Constructing it requires that all individuals in the population have the same enumeration probability; to achieve this, the practice common to most countries is post-stratification of the sample. Replacing post-stratification with a logistic regression model is a frontier issue in census quality assessment. This paper systematically explicates the logistic-regression-based dual-system estimator and its variance estimator, and argues that the logistic regression model can accommodate more stratification variables and therefore has promising applications.
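As a baseline for what the logistic regression model would replace, post-stratification amounts to applying the dual-system formula within each homogeneous cell and summing. A minimal sketch with invented cell counts (the cells would typically be age-by-sex-by-region groupings):

```python
def dual_system_estimate(n_census, n_survey, n_matched):
    """Within-cell dual-system (Lincoln-Petersen) estimate."""
    if n_matched == 0:
        raise ValueError("empty match cell; collapse strata before estimating")
    return n_census * n_survey / n_matched

def post_stratified_dse(strata):
    """Sum of within-stratum dual-system estimates.
    strata: iterable of (census, survey, matched) counts per post-stratum."""
    return sum(dual_system_estimate(*cell) for cell in strata)

# Hypothetical post-strata with differing capture rates
cells = [(500, 480, 460), (700, 690, 670), (300, 280, 260)]
n_hat = post_stratified_dse(cells)
```

A logistic-regression version replaces the discrete cells with individually modelled enumeration probabilities, which is what lets it absorb more covariates than cross-classification allows.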

3.
《西部财会》2010,(11):19-19
Ensuring that the data from the Sixth National Population Census are true, reliable, accurate and complete is the core requirement of the census and an important yardstick of its success. To safeguard data quality, the Regulations on National Population Censuses stipulate, first, that census agencies and census staff exercise their powers of investigation, reporting and supervision independently according to law, and that no organisation or individual may interfere; local people's governments at all levels, departments, organisations and their officials may not alter on their own initiative the data lawfully collected by census agencies and census staff, ...

4.
侯艳华 《活力》2012,(2):95-95
The population census is a major survey of national conditions and national strength: it supplies the true, accurate, complete and timely population statistics needed to draw up national economic and social development plans scientifically and to arrange the material and cultural life of the people. One key to getting census data right is delineating the enumeration districts properly: the enumeration-area maps must be drawn so that districts achieve full, seamless coverage, and buildings must be marked clearly and accurately with no duplication and no omission. Only then is there a solid foundation for the census to pin down and locate its respondents.

5.
朱梅红 《数据》2002,(3):43-44
Once the survey objectives have been defined precisely, the survey design stage can formally begin. The main questions to settle at this stage are: choosing the survey mode, defining the population, determining the survey frame, and identifying the sources of error and the factors that influence them. 1. Survey modes: census and sample survey. The two commonly used survey modes are the census and the sample survey. A census enumerates every unit in the population. When basic statistics on national conditions and strength are needed, or a full picture of some socioeconomic phenomenon must be grasped, a census is appropriate, such as a population census or a census of industrial enterprise equipment.

7.
周晓娜 《数据》2010,(6):26-27
2010 is a census year in the United States. The US census began in 1790 and is conducted every ten years as required by the Constitution; since 1930 the reference date has been fixed at April 1. The 2010 census is organised by the Census Bureau under the Department of Commerce, with a budget of roughly 14 billion dollars. Its scope covers every community in the 50 states: all residents living in the United States, regardless of nationality or race, are within the enumeration, and more than 120 million households are expected to receive questionnaires. The census slogan is "It's in our hands."

8.
段成荣 《数据》2010,(9):48-48
The population census is the most basic survey activity for obtaining population data and gauging national conditions and strength. Known as the "national conditions census" or "national strength survey", it is taken very seriously by countries around the world.

9.
蔡军 《数据》2012,(10):44-45
Compared with the periodic economic and agricultural censuses, the decennial population census has the broadest coverage: its targets include every natural person within China's borders at the standard reference time, as well as Chinese citizens abroad who have not settled there. In both scale and difficulty, then, the population census can fairly be called the "greatest of the censuses". At the same time, Beijing's floating population has grown substantially in recent years, ...

10.
Ma Jiantang, deputy head of the State Council's leading group for the Sixth National Population Census and Director of the National Bureau of Statistics, stressed while inspecting census work in Jiangsu Province that the census has entered its final run-up: spirits must be rallied, responsibility strengthened and every effort made to secure census funding, build the census workforce, construct the basic population information resources and mobilise census publicity, doing all the preparatory work solidly, meticulously and well so as to lay a firm foundation for high-quality census enumeration.

11.
This article considers the asymptotic estimation theory for the proportion in a randomized response survey using uncertain prior information (UPI) about the true proportion parameter, which is assumed to be available on the basis of some sort of realistic conjecture. Three estimators, namely the unrestricted estimator, the shrinkage restricted estimator and an estimator based on a preliminary test, are proposed. Their asymptotic mean squared errors are derived and compared. The relative dominance picture of the estimators is presented.
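The logic of the three competing estimators can be illustrated for a simple proportion. This is a loose sketch rather than the paper's randomized-response formulation: the 50/50 shrinkage weight and the z-based pre-test rule are illustrative choices, not the forms derived in the article.

```python
import math

def estimators(p_hat, p0, n, z_crit=1.96, shrink=0.5):
    """Unrestricted, shrinkage-restricted and preliminary-test estimators
    of a proportion given a prior guess p0 (illustrative forms only)."""
    unrestricted = p_hat
    # shrinkage-restricted: pull the sample proportion toward the prior guess
    shrunk = shrink * p0 + (1 - shrink) * p_hat
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    z = (p_hat - p0) / se if se > 0 else float("inf")
    # preliminary test: keep the shrunken value only if p0 is not rejected
    pretest = shrunk if abs(z) <= z_crit else unrestricted
    return unrestricted, shrunk, pretest
```

The article's dominance comparisons amount to asking, in asymptotic mean squared error terms, when each of these three rules wins as the true proportion moves away from the conjectured value.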

12.
This paper proposes a computationally simple GMM for the estimation of mixed regressive spatial autoregressive models. The proposed method explores the advantage of the method of elimination and substitution in linear algebra. The modified GMM approach reduces the joint (nonlinear) estimation of a complete vector of parameters into estimation of separate components. For the mixed regressive spatial autoregressive model, the nonlinear estimation is reduced to the estimation of the (single) spatial effect parameter. We identify situations under which the resulting estimator can be efficient relative to the joint GMM estimator where all the parameters are jointly estimated.

13.
Linking administrative, survey and census files to enhance dimensions such as time and breadth or depth of detail is now common. Because a unique person identifier is often not available, records belonging to two different units (e.g. people) may be incorrectly linked. Estimating the proportion of links that are correct, called Precision, is difficult because, even after clerical review, there will remain uncertainty about whether a link is in fact correct or incorrect. Measures of Precision are useful when deciding whether or not it is worthwhile linking two files, when comparing alternative linking strategies and as a quality measure for estimates based on the linked file. This paper proposes an estimator of Precision for a linked file that has been created by either deterministic (or rules‐based) or probabilistic (where evidence for a link being a match is weighted against the evidence that it is not a match) linkage, both of which are widely used in practice. This paper shows that the proposed estimators perform well.
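A naive design-based version of the idea, estimating Precision from a clerically reviewed random sample of links while assuming the review itself is error-free (the paper's point is precisely that it is not), might look like this. All inputs are hypothetical.

```python
import math

def estimate_precision(review_outcomes, total_links, z=1.96):
    """Estimate linkage Precision from a clerically reviewed random sample.
    review_outcomes: booleans, True = link judged a correct match.
    Returns (precision estimate, normal-approx CI half-width,
    implied number of correct links in the whole file)."""
    m = len(review_outcomes)
    p_hat = sum(review_outcomes) / m
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / m)
    expected_correct = p_hat * total_links
    return p_hat, half_width, expected_correct
```

The estimator proposed in the paper improves on this by allowing the clerical judgements themselves to be uncertain, which widens the interval relative to the binomial sketch above.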

14.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses the maximum likelihood method to estimate parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in independent variables. No estimator is uniquely best. Hence, in actual estimation the choice of estimator should depend on the sample size and type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the estimator used, given the type of measurement errors that characterize the data, would often help interpreting the estimates. This paper shows how such a study can be performed.
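A toy version of the kind of Monte Carlo study the paper advocates, showing how classical measurement error in a regressor attenuates an OLS slope, can be set up as follows. The labor-supply estimators compared in the paper are of course far richer; every parameter value here is illustrative.

```python
import random

def ols_slope(xs, ys):
    """Slope of a simple OLS regression of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def monte_carlo_attenuation(beta=2.0, n=500, reps=200, noise_sd=1.0, seed=1):
    """Average OLS slope when the regressor is observed with classical
    measurement error; theory predicts attenuation by var(x)/(var(x)+var(u))."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [beta * xi + rng.gauss(0, 1) for xi in x]
        x_obs = [xi + rng.gauss(0, noise_sd) for xi in x]  # mismeasured x
        slopes.append(ols_slope(x_obs, y))
    return sum(slopes) / reps
```

With unit-variance x and unit-variance noise, the true slope of 2.0 is attenuated toward roughly 1.0, and the simulation average lands near that value; replacing OLS with the estimator actually used on the data is what turns this skeleton into the diagnostic the paper describes.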

15.
This paper develops a concrete formula for the asymptotic distribution of two-step, possibly non-smooth semiparametric M-estimators under general misspecification. Our regularity conditions are relatively straightforward to verify and also weaker than those available in the literature. The first-stage nonparametric estimation may depend on finite dimensional parameters. We characterize: (1) conditions under which the first-stage estimation of nonparametric components do not affect the asymptotic distribution, (2) conditions under which the asymptotic distribution is affected by the derivatives of the first-stage nonparametric estimator with respect to the finite-dimensional parameters, and (3) conditions under which one can allow non-smooth objective functions. Our framework is illustrated by applying it to three examples: (1) profiled estimation of a single index quantile regression model, (2) semiparametric least squares estimation under model misspecification, and (3) a smoothed matching estimator.

16.
Small area estimation is a widely used indirect estimation technique for micro‐level geographic profiling. Three unit level small area estimation techniques—the ELL or World Bank method, empirical best prediction (EBP) and M‐quantile (MQ)—can estimate micro‐level Foster, Greer, & Thorbecke (FGT) indicators: poverty incidence, gap and severity using both unit level survey and census data. However, they use different assumptions. The effects of using model‐based unit level census data reconstructed from cross‐tabulations and having no cluster level contextual variables for models are discussed, as are effects of small area and cluster level heterogeneity. A simulation‐based comparison of ELL, EBP and MQ uses a model‐based reconstruction of 2000/2001 data from Bangladesh and compares bias and mean square error. A three‐level ELL method is applied for comparison with the standard two‐level ELL that lacks a small area level component. An important finding is that the larger number of small areas for which ELL has been able to produce sufficiently accurate estimates in comparison with EBP and MQ has been driven more by the type of census data available or utilised than by the model per se.
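The FGT indicators that all three methods target are simple functions of unit-level income. A sketch with hypothetical incomes and an invented poverty line:

```python
def fgt(incomes, poverty_line, alpha):
    """Foster-Greer-Thorbecke poverty index FGT_alpha:
    alpha=0 gives incidence (headcount ratio), alpha=1 the poverty gap,
    alpha=2 poverty severity (squared-gap)."""
    n = len(incomes)
    z = poverty_line
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / n

incomes = [5, 10, 20, 40]        # hypothetical household incomes
incidence = fgt(incomes, 20, 0)  # share of households below the line
gap = fgt(incomes, 20, 1)        # average normalised shortfall
severity = fgt(incomes, 20, 2)   # weights deeper poverty more heavily
```

The small area methods differ not in this formula but in how they predict the unit-level incomes for census units that were never surveyed.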

17.
Deep and persistent disadvantage is an important, but statistically rare, phenomenon in the population, and sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. Survey samples are typically designed to produce estimates of population characteristics of planned areas. The sample sizes are calculated so that the survey estimator for each of the planned areas is of a desired level of precision. However, in many instances, estimators are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This has led to the development of a class of indirect estimators that make use of information from related areas through modelling. A model is used to link similar areas to enhance the estimation of unplanned areas; in other words, they borrow strength from the other areas. Doing so improves the precision of estimated characteristics in the small area, especially in areas with smaller sample sizes. Social science researchers have increasingly employed small area estimation to provide localised estimates of population characteristics from surveys. We explore how to extend this approach within the context of deep and persistent disadvantage in Australia. We find that because of the unique circumstances of the Australian population distribution, direct estimates of disadvantage have substantial variation, but by applying small area estimation, there are significant improvements in precision of estimates.
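The "borrowing strength" idea can be illustrated with a composite estimator that shrinks a noisy direct estimate toward a synthetic model-based one using a Fay-Herriot style weight. This is a generic sketch of the principle, not the specific models fitted in the Australian application; all numbers below are hypothetical.

```python
def composite_estimate(direct, synthetic, var_direct, model_var):
    """Composite small area estimate: weight the direct survey estimate
    against a synthetic (model) estimate. gamma = model_var /
    (model_var + var_direct), so noisier direct estimates (large
    var_direct) get pulled harder toward the synthetic value."""
    gamma = model_var / (model_var + var_direct)
    return gamma * direct + (1 - gamma) * synthetic

# A small area with an unreliable direct estimate leans on the model:
area_estimate = composite_estimate(direct=10.0, synthetic=8.0,
                                   var_direct=4.0, model_var=4.0)
```

With equal variances the two sources are averaged; as the area's sample shrinks and `var_direct` grows, the estimate slides toward the synthetic value, which is exactly the precision gain the abstract reports.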

18.
There are three approaches for the estimation of the distribution function D(r) of distance to the nearest neighbour of a stationary point process: the border method, the Hanisch method and the Kaplan-Meier approach. The corresponding estimators and some modifications are compared with respect to bias and mean squared error (mse). Simulations for Poisson, cluster and hard-core processes show that the classical border estimator has good properties; still better is the Hanisch estimator. Typically, mse depends on r, having small values for small and large r and a maximum in between. The mse is not reduced if the exact intensity λ (if known) or intensity estimators from larger windows are built in the estimators of D(r); in contrast, the intensity estimator should have the same precision as that of λ D(r). In the case of replicated estimation from more than one window the best way of pooling the subwindow estimates is averaging by weights which are proportional to squared point numbers.
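The border (minus-sampling) method mentioned first can be sketched directly: only points lying at least r from the window boundary contribute, which removes edge bias at the cost of discarding data near the edge. A pure-Python illustration for a rectangular window, with invented points:

```python
import math

def border_estimate_D(points, r, width=1.0, height=1.0):
    """Border-method estimate of D(r): among points at distance >= r from
    the window boundary, the fraction whose nearest neighbour is within r."""
    def nn_dist(p):
        # nearest-neighbour distance within the full observed pattern
        return min(math.dist(p, q) for q in points if q != p)

    eligible = [p for p in points
                if min(p[0], width - p[0], p[1], height - p[1]) >= r]
    if not eligible:
        return float("nan")  # no interior points left at this r
    return sum(nn_dist(p) <= r for p in eligible) / len(eligible)
```

As r grows the eligible interior shrinks, which is why the border estimator's mse rises for intermediate r; the Hanisch and Kaplan-Meier estimators recover some of that discarded information.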

19.
The difficult estimation problem associated with the two-parameter negative binomial distribution is discussed. The order statistic is shown to be minimal sufficient but not complete. It is proven that there is at least one maximum likelihood estimator of the parameter k when the second sample moment is greater than the sample mean. Contours and three-dimensional graphs of the natural logarithm of the likelihood function provide further insight into the estimation problem.
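The existence condition (overdispersion, i.e. the sample second moment exceeding the mean) and the behaviour of the log-likelihood in k can be explored numerically. A sketch that profiles the likelihood over k with the mean fixed at the sample mean and grid-searches; the grid range is an arbitrary illustrative choice, and a real analysis would refine it or use a proper optimiser.

```python
import math

def nb_loglik(data, k, m):
    """Negative binomial log-likelihood with dispersion k > 0 and mean m > 0:
    P(X=x) = Gamma(x+k)/(Gamma(k) x!) * (k/(k+m))^k * (m/(k+m))^x."""
    ll = 0.0
    for x in data:
        ll += (math.lgamma(x + k) - math.lgamma(k) - math.lgamma(x + 1)
               + k * math.log(k / (k + m)) + x * math.log(m / (k + m)))
    return ll

def profile_mle_k(data, grid=None):
    """Grid-search MLE of k with the mean profiled at the sample mean;
    only attempted when the sample variance exceeds the mean."""
    m = sum(data) / len(data)
    var = sum((x - m) ** 2 for x in data) / len(data)
    if var <= m:
        raise ValueError("sample variance <= mean: MLE of k may not exist")
    if grid is None:
        grid = [0.1 * i for i in range(1, 501)]  # k in (0, 50], illustrative
    return max(grid, key=lambda k: nb_loglik(data, k, m))
```

Plotting `nb_loglik` over such a grid reproduces, in one dimension, the likelihood contours the paper uses to illuminate why estimating k is hard: the surface is very flat in k for weakly overdispersed samples.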

