Similar Documents
 20 similar documents found
1.
Changes in circumstances put pressure on Statistics Netherlands (SN) to redesign the way its statistics are produced. Key developments are: the changing needs of data-users, growing competition, pressure to reduce the survey burden on enterprises, emerging new technologies and methodologies and, first and foremost, the need for more efficiency because of budget cuts. This paper describes how SN, and especially its business statistics, can adapt to these new circumstances. We envisage an optimum situation as one with a single standardised production line for all statistics and a central data repository at its core. This single production line is supported by generic and standardised tools, metadata and workflow management. However, it is clear that such an optimum situation cannot be realised in just a few years; it should be seen as a point on the horizon. Therefore, we also describe the first transformation steps from the product-based, stovepipe-oriented statistical process of the past to a more integrated process of the future. A similar modernisation process exists in the area of social statistics. In the near future the systems of business and social statistics are expected to connect at pivotal points and eventually converge on one overall business architecture for SN. Discussions about such an overall business architecture have already started and the first core projects have been set up.

2.
In data integration contexts, two statistical agencies seek to merge their separate databases into one file. The agencies also may seek to disseminate data to the public based on the integrated file. These goals may be complicated by the agencies' need to protect the confidentiality of database subjects, which could be at risk during the integration or dissemination stage. This article proposes several approaches based on multiple imputation for disclosure limitation, usually called synthetic data, that could be used to facilitate data integration and dissemination while protecting data confidentiality. It reviews existing methods for obtaining inferences from synthetic data and points out where new methods are needed to implement the data integration proposals.
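As a minimal sketch of the synthetic data idea, assuming a normal linear model for a single sensitive variable (the model, variable names and toy data are illustrative, not the article's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(X, y, m=5):
    """Draw m synthetic copies of a sensitive variable y from an
    approximate posterior predictive distribution under a normal
    linear model y ~ N(X beta, sigma^2 I) with a flat prior."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    copies = []
    for _ in range(m):
        # Draw sigma^2 from its scaled inverse-chi-square posterior,
        # then beta given sigma^2, then new y values given both.
        sigma2 = resid @ resid / rng.chisquare(n - p)
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        copies.append(X @ beta + rng.normal(0.0, np.sqrt(sigma2), n))
    return copies

# Toy example: income (sensitive) regressed on two background variables.
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([50.0, 5.0, -2.0]) + rng.normal(0, 3.0, 200)
synthetic_incomes = synthesize(X, y)
```

Valid inferences from the m copies then require the synthetic-data combining rules that the article reviews.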

3.
Statistical agencies are keen to devise ways to provide research access to data while protecting confidentiality. Although methods of statistical disclosure risk assessment are now well established in the statistical science literature, the integration of these methods by agencies into a general scientific basis for their practice still proves difficult. This paper seeks to review and clarify the role of statistical science in the conceptual foundations of disclosure risk assessment in an agency's decision making. Disclosure risk is broken down into disclosure potential, a measure of the ability to achieve true disclosure, and disclosure harm. It is argued that statistical science is most suited to assessing the former. A framework for this assessment is presented. The paper argues that the intruder's decision making and behaviour may be separated from this framework, provided appropriate account is taken of the nature of potential intruder attacks in the definition of disclosure potential.
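As one concrete, simplified proxy for disclosure potential, agencies often look at how many records are unique on externally knowable key variables; this sketch is illustrative only and is not the paper's formal framework:

```python
import pandas as pd

def uniqueness_rate(df: pd.DataFrame, keys: list[str]) -> float:
    """Fraction of records that are unique on the key variables --
    a crude proxy for an intruder's ability to single a record out,
    ignoring measurement error and the intruder's actual knowledge."""
    cell_sizes = df.groupby(keys)[keys[0]].transform("size")
    return float((cell_sizes == 1).mean())

df = pd.DataFrame({
    "age_band": ["30-34", "30-34", "65-69", "85+"],
    "region":   ["North", "North", "South", "South"],
    "sex":      ["F", "F", "M", "F"],
})
print(uniqueness_rate(df, ["age_band", "region", "sex"]))  # 0.5
```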

4.
The Practice of and Reflections on University Library Information Services for Small and Medium-Sized Enterprises   (Cited 10 times: 0 self-citations, 10 by others)
Drawing on our university library's experience of using its own resource advantages to provide information services to local small and medium-sized enterprises, this article identifies the shortcomings of that effort. To remedy these shortcomings and make better use of the library's strengths, we provided information services in cooperation with other local information institutions, and on that basis we offer some reflections on joint service provision.

5.
This paper provides a review of common statistical disclosure control (SDC) methods implemented at statistical agencies for standard tabular outputs containing whole population counts from a census (either enumerated or based on a register). These methods include record swapping on the microdata prior to its tabulation and rounding of entries in the tables after they are produced. The approach for assessing SDC methods is based on a disclosure risk–data utility framework and the need to find a balance between managing disclosure risk while maximizing the amount of information that can be released to users and ensuring high quality outputs. To carry out the analysis, quantitative measures of disclosure risk and data utility are defined and methods compared. Conclusions from the analysis show that record swapping as a sole SDC method leaves high probabilities of disclosure risk. Targeted record swapping lowers the disclosure risk, but there is more distortion of distributions. Small cell adjustments (rounding) give protection to census tables by eliminating small cells, but only one set of variables and geographies can be disseminated in order to avoid disclosure by differencing nested tables. Full random rounding offers more protection against disclosure by differencing, but margins are typically rounded separately from the internal cells and tables are not additive. Rounding procedures protect against the perception of disclosure risk compared to record swapping, since no small cells appear in the tables. Combining rounding with record swapping raises the level of protection but increases the loss of utility to census tabular outputs. For some statistical analyses, the combination of record swapping and rounding balances to some degree the opposing effects that the methods have on the utility of the tables.
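A minimal sketch of unbiased random rounding, one of the rounding schemes compared; base 3 and the toy counts are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_round(counts, base=3):
    """Unbiased random rounding: a count with remainder r is rounded
    up with probability r/base and down otherwise, so the expected
    value of each rounded cell equals the original count. Margins
    rounded independently will generally not equal the sum of rounded
    internal cells -- the additivity loss noted in the abstract."""
    counts = np.asarray(counts)
    floor = (counts // base) * base
    r = counts - floor
    up = rng.random(counts.shape) < r / base
    return floor + up * base

table = np.array([12, 1, 7, 0, 26])
print(random_round(table))  # e.g. [12  0  6  0 27]
```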

6.
This paper presents a summary of the current state of research on reducing the risk of disclosure related to what may be called "non-traditional" outputs for statistical agencies. Whereas traditional outputs include frequency tables, magnitude tables and public use microdata files, non-traditional outputs include outputs associated with user-defined exploratory data analysis and statistical modelling offered through a remote analysis system. In remote analysis, a system accepts a query from an analyst, runs it on data held in a secure environment, and then returns the results to the analyst. There is considerable current interest in fully automated remote analysis systems, because these have the potential to enable agencies to respond to growing researcher demand for more and more detailed data. In practice, a range of protective measures is most effective in remote analysis, and the choice of this range depends heavily on the context, including the regulatory environment, the dataset itself, and the purpose of the access. This paper provides a summary of known attack methods on remote analysis system outputs, focussing on exploratory data analysis and linear regression. The paper also summarizes the associated suggested protective measures designed to prevent disclosures and thwart attacks in fully automated remote analysis systems. Some commentary on the attacks and measures is provided.
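The simplest such attack is differencing: two permitted aggregate queries whose target sets differ by a single record disclose that record exactly. A minimal sketch with invented data and queries:

```python
import pandas as pd

df = pd.DataFrame({
    "age":    [34, 41, 29, 56, 62],
    "income": [48_000, 52_000, 39_000, 75_000, 61_000],
})

# Two innocuous-looking queries an unprotected remote server might answer:
total_all = df["income"].sum()                        # everyone
total_under_60 = df[df["age"] < 60]["income"].sum()   # everyone but one person

# Their difference discloses the single excluded person's income.
print(total_all - total_under_60)  # 61000
```

Protective measures such as output perturbation or restrictions on overlapping query sets are aimed at breaking exactly this arithmetic.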

7.
Research on Inbound and Outbound Scheduling for Automated Storage and Retrieval Systems Based on an Expert System   (Cited 11 times: 0 self-citations, 11 by others)
By summarising the job-scheduling and storage-location assignment principles of automated storage and retrieval systems (AS/RS), we construct the knowledge base of an expert system. On this basis, an inference mechanism is built that matches the characteristics of the knowledge base and the features of inbound and outbound scheduling. Finally, a simulated numerical example on a computer is used to examine the feasibility of applying an expert system to AS/RS scheduling.
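To illustrate the flavour of such a knowledge base, one if-then rule might look like the sketch below; the rules, thresholds and data are invented for illustration and are not taken from the paper:

```python
def assign_location(item, free_locations):
    """Toy storage-assignment rule in the spirit of an expert-system
    knowledge base: fast-moving items go to locations closest to the
    I/O point; heavy items are kept on low levels.
    Each location is a tuple (distance_to_io, level)."""
    candidates = free_locations
    if item["turnover"] == "high":
        candidates = sorted(candidates, key=lambda loc: loc[0])  # near I/O first
    if item["weight_kg"] > 50:
        candidates = [loc for loc in candidates if loc[1] <= 2]  # low levels only
    return candidates[0] if candidates else None

locations = [(5, 1), (2, 4), (1, 2), (8, 1)]
print(assign_location({"turnover": "high", "weight_kg": 80}, locations))  # (1, 2)
```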

8.
The central question addressed in this paper is ‘Why have organizational strategies emerged in the public sector?’ Two broad answers are suggested. First, ‘strategies’ profile the organization through identifying aims, outputs and outcomes. Public services must now provide such transparency in order to secure on-going funding from government bodies. Once ‘strategies’ are being produced, they also offer an organizational vision that potential additional funding agencies can buy into (with both commitment and money). And public services are short of resources. Second, ‘strategies’ signal greater devolved responsibility in the public sector for both acquiring resources and achieving results. They enable the inclusion of managerial priorities and values in setting the direction of public services. And politicians desire more control over the professionals that dominate public services whilst, simultaneously, wanting to make them more responsible for outcomes. This article explores the growth of strategic planning in a particular area of the public sector – the national parks. Strategies as ‘dormant documents’ and strategies as ‘funding pitches’ are discussed. It is suggested that, in the public sector, strategies should be the object of strategy.

9.
Vast amounts of data that could be used in the development and evaluation of policy for the benefit of society are collected by statistical agencies. It is therefore no surprise that there is very strong demand from analysts, within business, government, universities and other organisations, to access such data. When allowing access to micro-data, a statistical agency is obliged, often legally, to ensure that it is unlikely to result in the disclosure of information about a particular person or organisation. Managing the risk of disclosure is referred to as statistical disclosure control (SDC). This paper describes an approach to SDC for output from analysis using generalised linear models, including estimates of regression parameters and their variances, diagnostic statistics and plots. The Australian Bureau of Statistics has implemented the approach in a remote analysis system, which returns analysis output from remotely submitted queries. A framework for measuring disclosure risk associated with a remote server is proposed. The disclosure risk and utility of the approach are measured in two real-life case studies and in simulation.
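One illustrative protective measure in this setting is to perturb regression output before release; the sketch below adds noise scaled to each coefficient's standard error. This is a hypothetical scheme for illustration, not the method actually implemented by the Australian Bureau of Statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

def release_coefficients(X, y, noise_scale=0.1):
    """Fit OLS, then perturb each coefficient with noise scaled to a
    fraction of its standard error, so the distortion is small
    relative to sampling uncertainty but masks exact values."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    se = np.sqrt(sigma2 * np.diag(XtX_inv))
    return beta + rng.normal(0.0, noise_scale * se)

X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 1.0, 100)
print(release_coefficients(X, y))
```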

10.
11.
By means of an integration of decision theory and probabilistic models, we explore and develop methods for improving data privacy. Our work encompasses disclosure control tools in statistical databases and privacy requirements prioritization; in particular, we propose a Bayesian approach to on-line auditing in statistical databases, and pairwise comparison matrices for privacy requirements prioritization. The first approach is illustrated with examples from statistical analysis of census and medical data, where no salary (respectively, no medical information) that could be linked to a specific employee (respectively, patient) may be released; the second approach is illustrated with examples such as an e-voting system and an e-banking service that must satisfy privacy requirements in addition to functional and security ones. Several fields in the social sciences, economics and engineering will benefit from the advances in this research area: e-voting, e-government, e-commerce, e-banking, e-health, cloud computing and risk management are a few examples of applications for the findings of this research.
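On the prioritization side, a standard way to turn a pairwise comparison matrix into priority weights is the principal-eigenvector method; a minimal sketch with invented judgements over three requirements:

```python
import numpy as np

def priorities(A):
    """Priority weights from a reciprocal pairwise comparison matrix
    via its principal eigenvector, normalised to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(A)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    v = np.abs(v)
    return v / v.sum()

# Privacy vs. security vs. functional requirements (illustrative judgements):
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
print(priorities(A).round(3))  # approx. [0.648 0.23  0.122]
```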

12.
Macro-integration is the process of combining data from several sources at an aggregate level. We review a Bayesian approach to macro-integration with special emphasis on the inclusion of inequality constraints. In particular, an approximate method of dealing with inequality constraints within the linear macro-integration framework is proposed. This method is based on a normal approximation to the truncated multivariate normal distribution. The framework is then applied to the integration of international trade statistics and transport statistics. By combining these data sources, transit flows can be derived as differences between specific transport and trade flows. Two methods of imposing the inequality restrictions that transit flows must be non-negative are compared. Moreover, the figures are improved by imposing the equality constraints that aggregates of incoming and outgoing transit flows must be equal.
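In the equality-constrained part of such a linear framework, the adjustment has a closed form: the reconciled estimate minimises a variance-weighted distance to the initial figures subject to Ax = b. A minimal sketch with invented flows:

```python
import numpy as np

def reconcile(x0, V, A, b):
    """Adjust initial estimates x0 (covariance V) to satisfy the
    linear accounting constraints A x = b exactly:
        x = x0 + V A' (A V A')^{-1} (b - A x0)."""
    K = V @ A.T @ np.linalg.inv(A @ V @ A.T)
    return x0 + K @ (b - A @ x0)

# Toy example: aggregate incoming transit (t_in) and outgoing transit
# (t_out), measured separately, must be equal.
x0 = np.array([104.0, 98.0])   # initial t_in, t_out estimates
V = np.diag([4.0, 1.0])        # t_out measured more precisely
A = np.array([[1.0, -1.0]])    # constraint: t_in - t_out = 0
print(reconcile(x0, V, A, np.array([0.0])))  # [99.2 99.2]
```

The inequality restrictions (non-negative transit flows) are then handled approximately, as the abstract describes, via a normal approximation to the truncated multivariate normal.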

13.
Focusing on a specific government agency, the Internal Revenue Service, and using publicly available data for the years 1956–1982, this article investigates two issues. First, because government agencies produce services which, in general, are not sold in markets, the appropriate measure and weighting of outputs is not obvious. Thus, it is of interest to know how sensitive an index of labor productivity is to the way in which output is measured. Second, in computing labor productivity, an index number formula must be selected. Dispersion among the index numbers is not uncommon, making the choice among formulas an important issue. Results indicate that the choice of both the output measure and the index formula is important. The refereeing process of this paper was handled through F. Førsund.
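The dispersion at issue arises from the choice among standard formulas; a minimal sketch comparing Laspeyres, Paasche and Fisher quantity indices on invented two-output data:

```python
import numpy as np

def output_indices(q0, q1, p0, p1):
    """Laspeyres (base-period weights), Paasche (current weights)
    and Fisher (geometric mean) quantity indices."""
    laspeyres = (p0 @ q1) / (p0 @ q0)
    paasche = (p1 @ q1) / (p1 @ q0)
    return laspeyres, paasche, np.sqrt(laspeyres * paasche)

q0 = np.array([100.0, 50.0])   # base-period output quantities
q1 = np.array([110.0, 70.0])   # current-period quantities
p0 = np.array([2.0, 6.0])      # base-period unit values (weights)
p1 = np.array([2.5, 5.0])      # current-period unit values
print(output_indices(q0, q1, p0, p1))  # (1.28, 1.25, ~1.265)
```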

14.
Statistics Canada's multi-factor productivity accounts are integrated into the Canadian system of national accounts. Their originality rests, in part, on the application of the standard productivity formula to alternative but related sets of outputs and inputs in a bottom-up approach that covers the whole business sector. The concept of vertical integration plays a central role in establishing relationships between alternative indices, including the relationships between static and dynamic indices. In the static framework, the stock of capital is exogenous. In the dynamic framework, capital goods become endogenously produced inputs. Establishments are seen as exchanging capital services across time periods. Time becomes a primary input of production, the productivity of which is associated with technical knowledge. A new measure of capital services and an extended definition of economic efficiency are finally introduced, which resolve some paradoxical results obtained with the conventional measure.
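As a reference point, the "standard productivity formula" in growth-accounting form is output growth minus cost-share-weighted input growth; the notation below is the usual one and is assumed here, not taken from the paper:

```latex
\Delta \ln \mathrm{MFP}
  \;=\; \Delta \ln Y
  \;-\; s_K \,\Delta \ln K
  \;-\; s_L \,\Delta \ln L
  \;-\; s_M \,\Delta \ln M
```

where Y is gross output, K, L and M are capital, labour and intermediate inputs, and s_K, s_L, s_M are their cost shares. In a bottom-up, vertically integrated treatment, the same formula is applied to related output-input sets (for example, value added rather than gross output), with intermediate inputs netted out accordingly.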

15.
The maintenance of semantic consistency between numerous heterogeneous electronic product catalogues (EPC) that are distributed, autonomous, interdependent and emergent on the Internet is an unsolved issue for the existing heterogeneous EPC integration approaches. This article attempts to solve this issue by conceptually designing an interoperable EPC (IEPC) system through a proposed novel collaborative conceptualisation approach. This approach introduces collaboration into the heterogeneous EPC integration. It implies much potential for future e-marketplace research. It theoretically answers why real-world EPCs are so complex, how these complex EPCs can be explained and articulated in a product map theory for heterogeneous EPC integration, how a semantic consistency maintenance model can be created to satisfy the three heterogeneous EPC integration conditions and implemented by adopting a collaborative integration strategy on a collaborative concept exchange model, and how this collaborative integration strategy can be realised on a collaboration mechanism. This approach has been validated through a theoretical justification and its applicability has been demonstrated in two prototypical e-business applications.

16.
This study seeks to outline activities of training and placement agencies in India aimed at employment of persons with a disability. We contend that persons with a disability are an underutilized human resource and that utilizing their abilities should be a key part of an inclusive approach to talent management. As there is little empirical research on this subject, our approach is exploratory and we seek to create a platform for further studies. A key finding of the study is the preference of agencies to engage in non-traditional and ad hoc approaches to build and showcase underutilized talent of those with a disability. Based on present findings and the contextual approach to talent management, a more comprehensive agenda for future research areas in inclusive talent management is outlined.

17.
Efficiency evaluation of a Decision Making Unit (DMU) involves two issues: 1) selection of an appropriate reference plan against which to evaluate the DMU, and 2) measurement of performance slack. In the literature, these issues are mixed in one and the same operation, but we argue that separating them has theoretical as well as practical advantages. We provide an axiomatic characterization of the implicit Farrell selection. This approach ignores important aspects of the technology by focussing on proportional variations in inputs (or outputs). We propose a new approach where potential improvements are used to guide the selection of reference plans. A characterization of this approach is provided and an associated translation invariant, strictly monotonic and continuous efficiency index is suggested.
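For concreteness, the Farrell input efficiency measure underlying the implicit selection can be written in standard notation (assumed here), with L(y) the input requirement set:

```latex
E_F(x, y) \;=\; \min \{\, \theta > 0 \;:\; \theta x \in L(y) \,\}
```

The implied reference plan is \theta^* x: the observed input mix contracted proportionally until it reaches the frontier, which is exactly why this selection ignores non-proportional improvement potential.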

18.
Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMU) in a given period by comparing the outputs with the inputs. In many applications, inputs and outputs of DMUs are monitored over time. There might be a time lag between the consumption of inputs and the production of outputs. We develop an approach that aims to capture the time lag between the outputs and the inputs in assigning the efficiency values to DMUs. We propose using weight restrictions in conjunction with the model. Our computational results on randomly generated problems demonstrate that the developed approach works well under a large variety of experimental conditions. We also apply our approach on a real data set to evaluate research institutions.
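A minimal sketch of an input-oriented, constant-returns (CCR) DEA score in which inputs at time t are paired with outputs observed one period later; the lag length, data and formulation details are assumptions for illustration, not the paper's weighted-restriction model:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, k):
    """Input-oriented CCR envelopment score for DMU k:
    min theta  s.t.  sum_j lam_j x_j <= theta x_k,
                     sum_j lam_j y_j >= y_k,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    n, m = inputs.shape
    _, s = outputs.shape
    c = np.concatenate([[1.0], np.zeros(n)])
    # Input rows: -theta x_k + sum_j lam_j x_j <= 0
    A_in = np.hstack([-inputs[k].reshape(m, 1), inputs.T])
    # Output rows: -sum_j lam_j y_j <= -y_k
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -outputs[k]]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Inputs at time t, outputs at time t+1 (one-period lag assumption).
x_t = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # 3 DMUs, 2 inputs
y_t1 = np.array([[10.0], [12.0], [9.0]])               # 1 lagged output
print([round(ccr_efficiency(x_t, y_t1, k), 3) for k in range(3)])  # [1.0, 1.0, 0.6]
```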

19.
Microeconomic reform has put the relationship between government and the third sector in Australia on a course of inevitable change. This article examines the key government reports that have been influential in shaping the microeconomic reform agenda in Australia. The objectives driving this agenda are greater accountability, increased effectiveness, improved efficiency and preventing the capture of programme funding by sectional interests. In the process, a new relationship between government and the third sector based on mechanisms to separate funding and service provision is becoming apparent. It is characterized by an increasingly contractual nature, frequent review, contestability, performance management requirements and an emphasis on competition rather than collaboration. As a consequence, there has been a shift in strategic control from third sector agencies to the funding agencies within government. A case study of the Department of Human Services in Victoria illustrates the implications of this shift for third sector agencies and reveals the future directions of government-third sector relations in Australia.

20.
This comment assesses how age, period and cohort (APC) effects are modelled with panel data in the social sciences. It considers variations on a 2-level multilevel model which has been used to show apparent evidence for simultaneous APC effects. We show that such an interpretation is often misleading, and that the formulation and interpretation of these models requires a better understanding of APC effects and the exact collinearity present between them. This interpretation must draw on theory to justify the claims that are made. By comparing two papers which over-interpret such a model, and another that in our view interprets it appropriately, we outline best practice for researchers aiming to use panel datasets to find APC effects, with an understanding that it is impossible for any statistical model to find and separate all three effects.
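The exact collinearity referred to is the accounting identity among the three quantities; with all three measured in the same units:

```latex
\text{Period} \;=\; \text{Age} + \text{Cohort}
```

Any one term is a perfect linear function of the other two, so no statistical model can separate all three linear effects without an external, theory-driven constraint.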
