Similar Literature
20 similar documents found (search time: 31 ms)
1.
The purpose and methodology of the Norwegian census have changed considerably over the last 35 years. While the census was previously the main source of socio-demographic information, today it is just one of several sources. After an identification number for each individual was introduced and used in various administrative registers, the dominant role of the census changed dramatically. For some years it has been the policy of Statistics Norway to collaborate with various governmental agencies in order to use administrative registers in statistics production. This policy has been supported politically, and a new Statistics Act has been useful in these efforts. The purpose of this paper is to present the strategy and methodology used to produce statistics in general, and census statistics in particular, based on a combined use of administrative registers and directly collected data. Experiences from Norwegian censuses since 1960 are presented.

2.
This paper reviews some applications of combining data sets, such as combining census or administrative data with survey data, constructing expanded data sets through linkage, combining large-scale commercial databases with survey data, and harnessing designed data collection to make use of non-probability samples. The aim is to highlight their commonalities and differences and to formulate some general principles for data set combination.

3.
Risk-utility formulations for problems of statistical disclosure limitation are now common. We argue that these approaches are powerful guides for official statistics agencies in thinking about disclosure limitation problems, but that they fall short in essential ways of providing a sound basis for acting on those problems. We illustrate this position in three specific contexts: transparency, tabular data and survey weights, with shorter consideration of two key emerging issues: longitudinal data and the use of administrative data to augment surveys.

4.
So far, statistics has mainly relied on information collected from censuses and sample surveys, which are used to produce statistics about selected characteristics of the population. However, because of cost cuts and increasing non-response in sample surveys, statisticians have started to search for new sources of information, such as registers, Internet data sources (IDSs, i.e. web portals) or big data. Administrative sources are already used for purposes of official statistics, while the suitability of the latter two sources is currently being discussed in the literature. Unfortunately, only a few papers devoted to statistical theory point out methodological problems related to the use of IDSs, particularly in the context of survey methodology. The unknown generation mechanism and the complexity of such data are often neglected in view of their size. Hence, before IDSs can be used for statistical purposes, especially for official statistics, they need to be assessed in terms of such fundamental issues as representativeness, non-sampling errors and bias. The paper attempts to fill the first of these gaps by proposing a two-step procedure to measure the representativeness of IDSs. The procedure is exemplified using data on the secondary real estate market in Poland.

5.
Applied microeconomic researchers are beginning to use long‐term retrospective survey data in settings where conventional longitudinal survey data are unavailable. However, inaccurate long‐term recall could induce non‐classical measurement error, for which conventional statistical corrections are less effective. In this article, we use the unique Panel Study of Income Dynamics Validation Study to assess the accuracy of long‐term retrospective recall data. We find underreporting of transitory variation which creates a non‐classical measurement error problem.

6.
Many National Statistical Institutes (NSIs), especially in Europe, are moving from single-source statistics to multi-source statistics. By combining data sources, NSIs can produce more detailed and more timely statistics and respond more quickly to events in society. By combining survey data with already available administrative data and Big Data, NSIs can save data collection and processing costs and reduce the burden on respondents. However, multi-source statistics come with new problems that need to be overcome before the resulting output quality is sufficiently high and before those statistics can be produced efficiently. What complicates the production of multi-source statistics is that they come in many different varieties, as data sets can be combined in many different ways. Given the rapidly increasing importance of producing multi-source statistics in Official Statistics, there has been considerable research activity in this area over the last few years, and some frameworks have been developed for multi-source statistics. Useful as these frameworks are, they generally do not give guidance as to which method can be applied in a given situation arising in practice. In this paper, we aim to fill that gap, structure the field of multi-source statistics and its problems, and point to suitable methods for those problems.

7.
The analysis of long-term social and political developments in Western countries is often difficult because of a lack of sufficient survey data. Official election and census statistics are almost always available over long periods, yet the use of these data for individual-level inferences runs the risk of the 'ecological fallacy'. In this paper we propose a method to go beyond the fallacy: the Duncan-Davis technique for area-classified data. The method is discussed and used to assess the amount of religious voting among Dutch Catholics in the 1971 general election. While the technique is only moderately helpful in this case, it is expected to be far more useful for the analysis of older elections.
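The Duncan-Davis technique places deterministic bounds on an individual-level rate using only area-level marginals, without any distributional assumptions. A minimal sketch of the bound computation (the shares below are illustrative, not the 1971 Dutch data):

```python
def duncan_davis_bounds(x, t):
    """Duncan-Davis (method-of-bounds) limits for an ecological 2x2 table.

    x: population share of the group of interest in an area (e.g. Catholics);
    t: share of the outcome in the same area (e.g. confessional vote).
    Returns (lower, upper) bounds on the outcome rate within the group.
    """
    if x <= 0:
        raise ValueError("group share must be positive")
    lower = max(0.0, (t - (1.0 - x)) / x)  # outcome left after non-group members
    upper = min(1.0, t / x)                # all outcome votes from the group
    return lower, upper

# An area that is 40% Catholic with a 35% confessional vote: at most
# 87.5% of Catholics (and possibly none) voted confessional.
lo, hi = duncan_davis_bounds(0.40, 0.35)
```

The bounds become informative when averaged over many areas with varying marginals, which is why the technique is expected to work better for older, more segregated elections.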

8.
Small area estimation is a widely used indirect estimation technique for micro-level geographic profiling. Three unit-level small area estimation techniques, the ELL or World Bank method, empirical best prediction (EBP) and M-quantile (MQ), can estimate micro-level Foster-Greer-Thorbecke (FGT) indicators (poverty incidence, gap and severity) using both unit-level survey and census data, but they rely on different assumptions. The effects of using model-based unit-level census data reconstructed from cross-tabulations and of having no cluster-level contextual variables in the models are discussed, as are the effects of small area and cluster-level heterogeneity. A simulation-based comparison of ELL, EBP and MQ uses a model-based reconstruction of 2000/2001 data from Bangladesh and compares bias and mean square error. A three-level ELL method is applied for comparison with the standard two-level ELL, which lacks a small area level component. An important finding is that the larger number of small areas for which ELL has been able to produce sufficiently accurate estimates, in comparison with EBP and MQ, has been driven more by the type of census data available or utilised than by the model per se.
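The three FGT indicators named above share one formula, with the parameter alpha selecting incidence (alpha = 0), gap (alpha = 1) or severity (alpha = 2). A self-contained sketch with invented incomes:

```python
def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke index: mean of ((z - y)/z)**alpha over the
    poor (y < z), with the non-poor contributing zero."""
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / len(incomes)

incomes = [5, 8, 12, 20, 30]   # illustrative unit-level incomes
z = 10                         # poverty line
p0 = fgt(incomes, z, 0)        # incidence (headcount ratio)
p1 = fgt(incomes, z, 1)        # poverty gap
p2 = fgt(incomes, z, 2)        # poverty severity
```

ELL, EBP and MQ differ not in this formula but in how they model and predict the unobserved census-level incomes that are plugged into it.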

9.
A national census provides important information on a country's population that is used in government planning and to underpin the national statistical system. Therefore, the quality of such information is paramount but is not as simple as the crude accuracy of population totals. Furthermore, changes in the pace and nature of modern life, such as the growing geographical mobility of the population, increasingly pose challenges to census practice and data quality. More recently, even the need for a census has been questioned on grounds of financial austerity and widespread availability of alternative population information sources. This article reviews how the modern census originated and how it evolved to confront these challenges, driven by indicators of quality and needs of users, and provides reflections on the future of the census within the national statistical infrastructure. To illustrate our discussions, we use case studies from a diverse range of national contexts. We demonstrate the implications that a country's needs, circumstances and experiences have on the census approach and practice while identifying the fundamental demographic assumptions.

10.
Changes in circumstances put pressure on Statistics Netherlands (SN) to redesign the way its statistics are produced. Key developments are: the changing needs of data‐users, growing competition, pressure to reduce the survey burden on enterprises, emerging new technologies and methodologies and, first and foremost, the need for more efficiency because of budget cuts. This paper describes how SN, and especially its business statistics, can adapt to these new circumstances. We envisage an optimum situation as one with a single standardised production line for all statistics and a central data repository at its core. This single production line is supported by generic and standardised tools, metadata and workflow management. However, it is clear that such an optimum situation cannot be realised in just a few years. It should be seen as the point on the horizon. Therefore, we also describe the first transformation steps from the product‐based stovepipe‐oriented statistical process of the past to a more integrated process of the future. A similar modernisation process exists in the area of social statistics. In the near future both systems of business and social statistics are expected to connect at pivotal points and eventually converge on one overall business architecture for SN. Discussions about such an overall business architecture for SN have already been started and the first core projects have been set up.

11.
Macro‐integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro‐integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro‐integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non‐negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.

12.
Survey Estimates by Calibration on Complex Auxiliary Information
In the last decade, calibration estimation has developed into an important field of research in survey sampling. Calibration is now an important methodological instrument in the production of statistics. Several national statistical agencies have developed software designed to compute calibrated weights based on auxiliary information available in population registers and other sources. This paper reviews some recent progress and offers some new perspectives. Calibration estimation can be used to advantage in a range of different survey conditions. This paper examines several situations, including estimation for domains in one‐phase sampling, estimation for two‐phase sampling, and estimation for two‐stage sampling with integrated weighting. Typical of those situations is complex auxiliary information, a term that we use for information made up of several components. An example occurs when a two‐stage sample survey has information both for units and for clusters of units, or when estimation for domains relies on information from different parts of the population. Complex auxiliary information opens up more than one way of computing the final calibrated weights to be used in estimation. They may be computed in a single step or in two or more successive steps. Depending on the approach, the resulting estimates do differ to some degree. All significant parts of the total information should be reflected in the final weights. The effectiveness of the complex information is mirrored by the variance of the resulting calibration estimator. Its exact variance is not presentable in simple form. Close approximation is possible via the corresponding linearized statistic. We define and use automated linearization as a shortcut in finding the linearized statistic. Its variance is easy to state, to interpret and to estimate. The variance components are expressed in terms of residuals, similar to those of standard regression theory. Visual inspection of the residuals reveals how the different components of the complex auxiliary information interact and work together toward reducing the variance.
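In the simplest single-step case, with one auxiliary variable and a chi-square distance, calibration has a closed form: the design weights are adjusted as w_i = d_i(1 + lam * x_i), with lam chosen so the calibrated total hits the known one. A minimal sketch (the weights and register total are invented):

```python
def linear_calibration(d, x, total_x):
    """Linear (chi-square distance) calibration with a single auxiliary
    variable: returns weights w_i = d_i * (1 + lam * x_i) such that
    sum(w_i * x_i) equals the known total total_x exactly."""
    sx = sum(di * xi for di, xi in zip(d, x))     # Horvitz-Thompson estimate
    sxx = sum(di * xi * xi for di, xi in zip(d, x))
    lam = (total_x - sx) / sxx
    return [di * (1.0 + lam * xi) for di, xi in zip(d, x)]

d = [10.0, 10.0, 10.0]      # design weights
x = [1.0, 2.0, 3.0]         # auxiliary variable, e.g. from a register
w = linear_calibration(d, x, total_x=66.0)  # known population total of x
```

With complex auxiliary information the same idea is applied with a vector x_i and a matrix inverse in place of the scalar division, computed in one step or in successive steps as the abstract describes.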

13.
This paper describes the compilation of the use table for imported goods and the valuation matrix of trade margins for Belgium in 1995. It introduces the methodological novelty of integrating the compilation of both tables and systematically exploiting the fact that large import and export flows do not generate trade margins. This is notably the case for direct imports for intermediate consumption or investment by non-traders, and for direct exports by producers. To identify these trade flows, extensive use was made of Intrastat and Extrastat data. The results are compared with those of a proportional distribution of imports and trade margins, the approach many statistical offices resort to for lack of survey data on the destination of trade margins and imports. We demonstrate that the integrated approach can improve the quality of both the import matrix and the valuation matrix of trade margins, while using only existing data sources.

14.
Application of Residential-Source Pollutant Generation and Discharge Coefficients in the Pollution Source Census
The handbook of residential-source pollutant generation and discharge coefficients (《第一次全国污染源普查生活源产排污系数手册》), compiled by the Office of the First National Pollution Source Census, provides the basis for calculating the amounts of pollutants generated and discharged by residential sources in the first national pollution source census. This article uses worked examples to analyse how these coefficients are applied in the pollution source census.

15.
Non-response is a common source of error in many surveys. Because surveys are often costly instruments, quality-cost trade-offs play a continuing role in the design and analysis of surveys. Advances in telephone, computer, and Internet technology have had, and continue to have, a considerable impact on survey design. Recently, a strong focus on methods for survey data collection monitoring and tailoring has emerged as a new paradigm to efficiently reduce non-response error. Paradata and adaptive survey designs are key words in these new developments. Prerequisites to evaluating, comparing, monitoring, and improving the quality of survey response are a conceptual framework for representative survey response, indicators to measure deviations thereof, and indicators to identify subpopulations that need increased effort. In this paper, we present an overview of representativeness indicators, or R-indicators, that are fit for these purposes. We give several examples and provide guidelines for their use in practice.
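The basic R-indicator is a simple transform of the variability of response propensities: R = 1 - 2S(rho), where S is the standard deviation of the propensities, so equal propensities give R = 1 and maximal selectivity drives R towards 0. A minimal sketch, assuming the propensities have already been estimated (in practice they come from a response model fitted on register covariates):

```python
import statistics

def r_indicator(propensities):
    """R-indicator: 1 - 2 * population standard deviation of the response
    propensities; 1 means a fully representative response."""
    return 1.0 - 2.0 * statistics.pstdev(propensities)

r_uniform = r_indicator([0.5, 0.5, 0.5, 0.5])  # identical propensities
r_mixed = r_indicator([0.3, 0.7])              # unequal propensities
```

The factor 2 rescales the worst case (half the population responding with propensity 0, half with propensity 1, giving S = 0.5) to exactly R = 0.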

16.
Modeling stock price development as a geometric Brownian motion or, more generally, as a stochastic exponential of a diffusion, requires the use of specific statistical methods. For instance, the observations seldom reach us in the form of a continuous record, and we are led to infer about diffusion coefficients from discrete-time data. Moreover, the classical assumption of constant volatility often has to be dropped. Instead, a range of stochastic volatility models is formed by the limiting transition from known discrete-time volatility models towards their continuous-time counterparts. These are the main topics of the present survey. It closes with a quick look beyond the usual Gaussian world of continuous-time modeling, allowing a Lévy process to be the driving process.
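For the geometric Brownian motion case, inference from a discrete record is direct: log-returns over spacing dt are i.i.d. normal with mean (mu - sigma^2/2) * dt and variance sigma^2 * dt, so moment estimates of mu and sigma follow immediately. A sketch on simulated data (the parameter values are illustrative):

```python
import math
import random
import statistics

def gbm_estimate(prices, dt):
    """Estimate drift mu and volatility sigma of a geometric Brownian
    motion from prices observed at regular spacing dt."""
    r = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    sigma2 = statistics.pvariance(r) / dt          # volatility first
    mu = statistics.mean(r) / dt + sigma2 / 2.0    # then drift
    return mu, math.sqrt(sigma2)

# Simulate 40 years of daily data with mu = 0.10, sigma = 0.20.
random.seed(1)
mu, sigma, dt, s = 0.10, 0.20, 1.0 / 252, 100.0
path = [s]
for _ in range(252 * 40):
    s *= math.exp((mu - sigma ** 2 / 2) * dt
                  + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    path.append(s)
mu_hat, sigma_hat = gbm_estimate(path, dt)
```

Note the asymmetry: the volatility estimate sharpens as the sampling frequency increases, while the drift estimate improves only with the total time span, which is one reason the modeling effort in this literature concentrates on volatility.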

17.
This article investigates the quality of register data in the context of a standardized quality framework. The special focus of this work lies on the assessment of census data and how to deal with uncertainty that arises from multiple sources (registers). To take the uncertainty associated with support and conflict between several registers into account, Dempster–Shafer's theory of evidence is applied. This ‘fuzzy’ approach allows us to investigate the quality of databases with multiple underlying sources for a single attribute and to provide both quality measures and plausibility intervals.
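Dempster's rule of combination merges the evidence from two registers by multiplying the masses of compatible focal sets and renormalising away the conflicting mass. A minimal sketch on a two-element frame (the registers and their mass assignments are invented):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over a common frame of discernment."""
    combined = {}
    conflict = 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb   # mass landing on incompatible sets
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two registers giving evidence on whether a dwelling is occupied.
FRAME = frozenset({"occupied", "vacant"})
m_reg1 = {frozenset({"occupied"}): 0.7, FRAME: 0.3}
m_reg2 = {frozenset({"occupied"}): 0.6, frozenset({"vacant"}): 0.2, FRAME: 0.2}
m = dempster_combine(m_reg1, m_reg2)
```

The mass left on the full frame after combination is what produces the plausibility intervals mentioned in the abstract: belief in "occupied" is its own mass, while its plausibility also includes the mass on the frame.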

18.
The use of auxiliary variables to improve the efficiency of estimators is a well-known strategy in survey sampling. Typically, the auxiliary variables used are population totals of appropriate measurements that are known exactly from registers or administrative sources. Increasingly, however, these totals are estimated from surveys and are then used to calibrate estimators and improve their efficiency. We consider different types of survey structures and develop design-based estimators that are calibrated on known as well as estimated totals of auxiliary variables. The optimality properties of these estimators are studied. These estimators can be viewed as extensions of the Montanari generalised regression estimator adapted to these more complex situations. The paper studies interesting special cases to develop insights and guidelines for properly managing the survey-estimated auxiliary totals.

19.
In this paper, we consider the use of auxiliary data and paradata for dealing with non-response and measurement errors in household surveys. Three over-arching purposes are distinguished: response enhancement, statistical adjustment, and bias exploration. Attention is given to the varying focus at the different phases of statistical production, from collection and processing to analysis, and to how to select and utilize the useful auxiliary data and paradata. Administrative register data provide the richest source of relevant auxiliary information, in addition to data collected in previous surveys and censuses. Given their importance in dealing effectively with non-sampling errors, one should make every effort to increase their availability in the statistical system and, at the same time, to develop efficient statistical methods that capitalize on the combined data sources.

20.
Non-specialists might have the feeling that building statistics on businesses is a very simple task: it seems one just has to "add facts". But for survey statisticians, business statistics are extremely complex: great heterogeneity of the universe, definition of the statistical units, difficulty of classifying businesses, quality of the register, variety of accounting standards, sample co-ordination, or reduction of survey burden. This paper argues that the issues raised by business survey methodology are conceptual and not only practical. It describes different aspects of statistical processing and tries to analyse the special features of business statistics.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号