Similar articles
20 similar articles found (search time: 15 ms)
1.
This paper provides a review of common statistical disclosure control (SDC) methods implemented at statistical agencies for standard tabular outputs containing whole-population counts from a census (either enumerated or based on a register). These methods include record swapping on the microdata prior to tabulation and rounding of table entries after the tables are produced. The approach for assessing SDC methods is based on a disclosure risk–data utility framework and the need to balance managing disclosure risk against maximizing the amount of information released to users while ensuring high-quality outputs. To carry out the analysis, quantitative measures of disclosure risk and data utility are defined and the methods compared. The analysis shows that record swapping as a sole SDC method leaves high probabilities of disclosure. Targeted record swapping lowers the disclosure risk, but distorts distributions more. Small-cell adjustments (rounding) protect census tables by eliminating small cells, but only one set of variables and geographies can be disseminated if disclosure by differencing nested tables is to be avoided. Full random rounding offers more protection against disclosure by differencing, but margins are typically rounded separately from the internal cells, so tables are not additive. Compared with record swapping, rounding procedures also protect against the perception of disclosure risk, since no small cells appear in the tables. Combining rounding with record swapping raises the level of protection but increases the loss of utility in census tabular outputs. For some statistical analyses, the combination of record swapping and rounding balances, to some degree, the opposing effects the two methods have on the utility of the tables.
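The unbiased random rounding mentioned above can be sketched in a few lines. This is a generic textbook version (rounding to base 3, with the residual rounded up with probability r/3), not any agency's production implementation:

```python
import random

def random_round(count, base=3):
    """Randomly round a cell count to a multiple of `base`.

    The residual r = count % base is rounded up with probability
    r/base and down otherwise, so the expected value of the rounded
    cell equals the original count (unbiased random rounding).
    """
    r = count % base
    if r == 0:
        return count
    if random.random() < r / base:
        return count + (base - r)   # round up
    return count - r                # round down

# Rounding a row of census counts; margins rounded separately
# will generally break additivity, as the abstract notes.
cells = [1, 2, 7, 11]
rounded = [random_round(c) for c in cells]
assert all(v % 3 == 0 for v in rounded)
```

Because each cell is perturbed independently, a rounded margin need not equal the sum of its rounded internal cells, which is exactly the additivity loss discussed above.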

2.
Statistical offices are concerned with protecting confidential information when publishing data in statistical tables. One method to avoid disclosure is cell suppression, in which the values of the sensitive cells in a table are withheld from publication. To prevent the values of the sensitive cells from being recovered from the table totals, additional (complementary) suppressions are necessary. Minimizing the loss of information caused by these additional suppressions is a difficult optimization problem. We present and compare the performance of several heuristics for cell suppression in general three-dimensional tables.
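The need for complementary suppressions can be illustrated with a toy greedy heuristic for a two-dimensional table. This is far simpler than the three-dimensional heuristics the paper evaluates; it only shows why a single suppressed cell in a published row or column leaks its value:

```python
def protect(table, suppressed):
    """Greedy complementary suppression for a 2-D table.

    `suppressed` is a set of (i, j) primary suppressions. Any row or
    column containing exactly one suppressed cell leaks that value via
    its published total, so we keep adding the smallest extra cell in
    such a line until every line has zero or at least two suppressions.
    """
    suppressed = set(suppressed)
    n, m = len(table), len(table[0])
    changed = True
    while changed:
        changed = False
        lines = [[(i, j) for j in range(m)] for i in range(n)] + \
                [[(i, j) for i in range(n)] for j in range(m)]
        for line in lines:
            hit = [c for c in line if c in suppressed]
            if len(hit) == 1:
                # choose the smallest unsuppressed cell as complement
                extra = min((c for c in line if c not in suppressed),
                            key=lambda c: table[c[0]][c[1]])
                suppressed.add(extra)
                changed = True
    return suppressed

t = [[5, 9, 4],
     [3, 8, 6],
     [7, 2, 1]]
s = protect(t, {(0, 0)})            # one sensitive cell
assert (0, 0) in s and len(s) >= 4  # complements in its row and column
```

Minimizing the total value (or count) of the extra suppressions globally, rather than greedily line by line, is what makes the real problem hard.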

3.
4.
Two definitions of statistical disclosure, identification disclosure and prediction disclosure, are compared. Identification disclosure implies prediction disclosure, but not vice versa. It is argued, however, that if sampling takes place, then cases where prediction disclosure occurs without identification disclosure either have very small probability or present no disclosure problems beyond those normally met in the release of aggregate statistics. Finally, the estimation of population uniqueness using the Poisson-Gamma model is considered.
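The quantity driving identification risk in this setting is the probability that a record that is unique in the sample is also unique in the population. A Monte Carlo sketch (the population model and the 10% sampling fraction are illustrative, not the paper's Poisson-Gamma estimator):

```python
import random
from collections import Counter

random.seed(1)

# Simulate a population of key values (think cross-classified
# demographics) and draw a 10% sample; estimate the chance that a
# sample-unique key is also unique in the population.
population = [random.randint(0, 500) for _ in range(5000)]
pop_freq = Counter(population)

sample = random.sample(population, 500)
sample_freq = Counter(sample)

sample_uniques = [k for k, c in sample_freq.items() if c == 1]
share_pop_unique = (sum(pop_freq[k] == 1 for k in sample_uniques)
                    / len(sample_uniques))
# With an average population cell size of ~10, almost no sample-unique
# key is population-unique, so share_pop_unique is close to 0.
```

This is the point of the abstract: under sampling, sample uniqueness is common while population uniqueness is rare, so prediction disclosure without identification disclosure is the typical case.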

5.
Behavioral finance inherits the standard research methods of finance and has made an outstanding contribution by bringing experimental research into the field. It takes cognitive science as its methodological guide in order to expand its scientific toolkit. On the one hand, advances in cognitive science provide important and effective research tools for behavioral finance; on the other hand, because progress in behavioral finance depends to a certain extent on the results of modern cognitive science, the development of cognitive science, together with the application and limitations of its results, has a direct impact on the development of behavioral finance. This is why the new research methods of behavioral finance require in-depth study and why their limitations must be clearly understood.

6.
Using the standardization of test statistics, this paper studies the theory and methods of single-sided specification-limit variables sampling inspection for the mean μ when the variance σ² is unknown. For the case of known variance σ², a rigorous theory was established previously and was used to construct Table 1 of GB 8054. For the unknown-variance case, however, the basis on which Table 2 of GB 8054 was constructed is not rigorous, so its sampling plans are only approximate. Using the t distribution and the noncentral t distribution, an exact sampling-inspection theory is derived, providing a method for revising Table 2 of the national standard GB 8054.
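The variables plan with unknown σ accepts a lot when x̄ + k·s ≤ U for a single upper specification limit U. Its operating characteristic, which the exact theory derives from the noncentral t distribution, can be checked by simulation; the constants n, k and U below are purely illustrative, not values from GB 8054:

```python
import random
import statistics

random.seed(2)

def accept(sample, upper_limit, k):
    """Variables sampling plan with unknown sigma: accept the lot
    when xbar + k * s <= U (single upper specification limit)."""
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    return xbar + k * s <= upper_limit

# Monte Carlo estimate of the acceptance probability at one quality
# level (mu = 0, sigma = 1, U = 2, n = 10, k = 1.5); the exact value
# would come from the noncentral t distribution discussed above.
n, k, U = 10, 1.5, 2.0
reps = 2000
hits = sum(accept([random.gauss(0, 1) for _ in range(n)], U, k)
           for _ in range(reps))
p_accept = hits / reps
assert 0.5 < p_accept < 1.0   # fraction defective ~2.3%, mostly accepted
```

Repeating this over a grid of true means traces out the OC curve that the exact theory computes in closed form.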

7.
We propose new lattice-based algorithms for option and bond pricing that rely on computationally simple trees, i.e., trees whose number of nodes grows at most linearly in the number of time intervals. Contrary to commonly used methods, the target diffusion is approximated directly, without having to transform the original process into a constant-volatility process. The discrete approximating process converges to the target continuous process, and the proposed algorithms are shown to be efficient and accurate for pricing purposes.
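The "computationally simple tree" property is the one the classic Cox-Ross-Rubinstein recombining binomial tree already has: n steps produce only n+1 terminal nodes. The sketch below prices a European call with such a tree for the constant-volatility baseline case; the paper's contribution is to build trees with the same linear growth for more general diffusions, which this example does not attempt:

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial tree for a European call.

    The recombining tree has n + 1 terminal nodes, so the node
    count grows linearly in the number of time steps.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at nodes j = 0..n
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction to the root
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

price = crr_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
assert abs(price - 10.45) < 0.1   # near the Black-Scholes value ~10.4506
```

Without recombination the tree would have 2^n nodes after n steps, which is why linear node growth matters for pricing in practice.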

8.
Industrial statisticians frequently face problems in practice where adjustment of a manufacturing process is necessary. In this paper, a view of the origins of and recent work in the area of statistical process adjustment (SPA) is provided. Some topics open for further research are also discussed, including new problems in semiconductor manufacturing process control. The goal of the paper is to present the SPA field as a research area with its own identity and content, and to promote further interest in its development and application in industry.

9.
10.
A voting rule is said to be stable if, whenever there exists a fixed-size subset of candidates such that no outside candidate is majority-preferred to any candidate in the subset, the rule elects such a subset. Such a set is called a Weak Condorcet Committee (WCC). Four stable rules have been proposed in the literature. In this paper, we propose two new stable rules. Since nothing is known about the properties of the stable rules, we evaluate all the identified stable rules against some appealing properties of voting rules. We show that they all satisfy the Pareto criterion but are not monotonic. Moreover, we show that every stable rule fails the reinforcement requirement.
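The WCC definition above translates directly into a pairwise-majority check. A minimal sketch with a toy preference profile (the profile and candidate names are invented for illustration):

```python
from itertools import combinations

def is_wcc(committee, candidates, prefs):
    """A committee is a Weak Condorcet Committee if no outside
    candidate beats any member in a pairwise majority contest.
    `prefs` is a list of strict rankings (best candidate first)."""
    def majority_prefers(a, b):
        wins = sum(r.index(a) < r.index(b) for r in prefs)
        return wins > len(prefs) / 2
    return not any(majority_prefers(out, mem)
                   for out in candidates if out not in committee
                   for mem in committee)

# 3 voters over candidates a, b, c, d; committees of size 2.
prefs = [['a', 'b', 'c', 'd'],
         ['a', 'b', 'd', 'c'],
         ['b', 'a', 'c', 'd']]
cands = ['a', 'b', 'c', 'd']
wccs = [set(c) for c in combinations(cands, 2) if is_wcc(c, cands, prefs)]
assert wccs == [{'a', 'b'}]   # the unique size-2 WCC for this profile
```

A stable rule must output {a, b} on this profile, since a WCC of the target size exists; when no WCC exists, stability places no constraint, which is what makes designing such rules nontrivial.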

11.
Statistical agencies have the responsibility to design data release strategies which will not violate pledges of nondisclosure either through intent or neglect. In addition to ethical and legal concerns, statistical offices must be mindful that violating pledges of confidentiality may undermine an agency's ability to collect data due to loss of public trust. Statistical organizations also have the obligation to make information available to a variety of individuals and institutions to allow for informed discussion from differing perspectives on a range of issues. However, it is through fine, accurate detail on a file that risks of disclosure arise. In this report we discuss strategies for controlling risk in the release of public use microdata files. The equivalence class structure of a microdata file is defined and we show how the classic entropy function can be employed on the equivalence class structure to provide a measure of relative risk.
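The entropy-on-equivalence-classes idea can be sketched directly: records sharing the same combination of key variables form one class, and the Shannon entropy of the class-size distribution serves as a relative risk indicator. This is a simple proxy in the spirit of the report, not necessarily its exact measure:

```python
import math
from collections import Counter

def class_entropy(records):
    """Shannon entropy (bits) of the equivalence-class structure:
    records sharing the same key-variable combination form a class.
    Many small (singleton) classes push entropy up and signal
    re-identification risk; a few large classes keep it low."""
    sizes = Counter(records).values()
    n = sum(sizes)
    return -sum((s / n) * math.log2(s / n) for s in sizes)

# keys = (age band, sex, region); values are illustrative
safe  = [('30-39', 'F', 'N')] * 50 + [('30-39', 'M', 'N')] * 50
risky = [(str(i), 'F', 'N') for i in range(100)]   # every record unique

assert class_entropy(safe) == 1.0                  # two classes of 50
assert class_entropy(risky) > class_entropy(safe)  # 100 singletons
```

Coarsening key variables (wider age bands, larger regions) merges classes and lowers the measure, which is how an agency would use it to compare candidate release designs.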

12.
The Basel Capital Accord (Pillar 3) states that disclosure of information (transparency) is essential to financial stability. This study analyzes, through inflation reports, the disclosure of information from the Central Bank of Brazil concerning the credit market. We consider credit risk and capital buffers as measures of financial stability in this analysis. Furthermore, in order to measure the perception of the monetary authority on the credit market, we built two indices based on the central bank's communication on credit development. We performed a panel data analysis based on a sample of 125 banks for the period from June 1999 to September 2014 (7000 observations). The findings suggest that central bank communication regarding expectations concerning the credit market contributes to financial stability. Therefore, this kind of communication by central banks (about credit development) may constitute an important macroprudential tool to improve financial stability.

13.
In the context of a probabilistic voting model with dichotomous choice, we investigate the consequences of choosing among voting rules according to the maximin criterion. A voting rule is the minimum number of voters who must vote favorably on a change from the status quo for it to be adopted. We characterize the voting rules that satisfy the maximin criterion as a function of the distribution of voters' probabilities of favoring change from the status quo. We prove that there are at most two maximin voting rules, at least one of which is Pareto efficient and often differs from the simple majority rule. If a committee is formed only by "conservative voters" (i.e., voters who are more likely to prefer the status quo to change), then the maximin criterion recommends voting rules that require no more voters supporting change than the simple majority rule. If there are only "radical voters", then this criterion recommends voting rules that require no less than half of the total number of votes.
Received: June 2003; Accepted: September 2004; JEL Classification: D71

14.
This paper considers three ratio estimators of the population mean that use the known correlation coefficient between the study and auxiliary variables in simple random sampling when some sample observations are missing. The suggested estimators are compared with the estimators of Singh and Horn (Metrika 51:267–276, 2000), Singh and Deo (Stat Pap 44:555–579, 2003) and Kadilar and Cingi (Commun Stat Theory Methods 37:2226–2236, 2008), as well as with other imputation estimators based on the mean or a ratio. The suggested estimators are found to be approximately unbiased for the population mean, and they perform well compared with the other estimators considered in this study.
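The benchmark the paper's estimators are compared against can be illustrated with the classical ratio estimator plus ratio imputation of the missing study values. This is the generic textbook scheme, not one of the specific estimators proposed or cited in the paper; the small data set is invented:

```python
import statistics

def ratio_estimate(y, x, x_mean_pop):
    """Ratio estimator of the population mean of y using a fully
    observed auxiliary variable x with known population mean.
    Missing y-values (None) are filled by ratio imputation:
    y_i := (ybar_obs / xbar_obs) * x_i."""
    obs = [(yi, xi) for yi, xi in zip(y, x) if yi is not None]
    ybar = statistics.mean(yi for yi, _ in obs)
    xbar = statistics.mean(xi for _, xi in obs)
    r = ybar / xbar
    y_full = [yi if yi is not None else r * xi for yi, xi in zip(y, x)]
    # classical ratio adjustment toward the known population x-mean
    return statistics.mean(y_full) * x_mean_pop / statistics.mean(x)

y = [2.0, None, 6.0, 4.0, None]     # study variable, two responses missing
x = [1.0, 2.0, 3.0, 2.0, 4.0]       # auxiliary variable, fully observed
est = ratio_estimate(y, x, x_mean_pop=2.5)
assert abs(est - 5.0) < 1e-9        # r = 2, imputed y = [2, 4, 6, 4, 8]
```

The gain over mean imputation comes from exploiting the correlation between y and x, which is precisely the quantity the paper's estimators take as known.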

15.
A Preliminary Study of the Tipping System for Tour Guides   (cited 2 times: 0 self-citations, 2 citations by others)
齐童, 狄麟麟. 《城市问题》 (Urban Problems), 2006, (3): 98-101
This paper introduces the origin and development of tipping and analyzes how the practice has developed in China. It argues that introducing a tipping system would benefit tourists, tour guides and travel agencies, concludes that such a system is feasible in the tourism industry, and discusses the problems likely to arise during its implementation together with measures to resolve them.

16.
This article considers a new concept of social optimum for an economy populated by agents with heterogeneous discount factors. It is based upon an approach that constrains decision rules to be temporally consistent: these are stationary and determined solely by the state variable. For agents who differ only in their discount factors and have equal weights in the planner's objective, the temporally consistent optimal solution produces identical consumption for the agents at all time periods. In the long run, the capital stock is determined by a modified golden rule that corresponds to an average-like summation of all discount factors. The general argument is illustrated by various two-agent examples that allow for an explicit determination of the temporally consistent decision rules. Interestingly, this temporally consistent solution can be simply recovered from the characterization of a social planner's problem with variable discounting and can also be decentralized as a competitive equilibrium through the use of various instruments.
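The modified golden rule pins down steady-state capital from f'(k) = 1/β - 1 + δ. A numeric sketch with Cobb-Douglas technology; note that the arithmetic mean used to aggregate the two discount factors below is an illustrative stand-in, since the paper only characterizes the aggregate as an "average-like summation", not this particular formula:

```python
def golden_rule_k(beta, alpha=0.3, delta=0.1):
    """Steady-state capital from the modified golden rule
    f'(k) = 1/beta - 1 + delta with f(k) = k**alpha, i.e. solve
    alpha * k**(alpha - 1) = 1/beta - 1 + delta for k."""
    rhs = 1 / beta - 1 + delta
    return (alpha / rhs) ** (1 / (1 - alpha))

# Two agents with different discount factors; the aggregate discount
# factor (arithmetic mean here, purely illustrative) sets the
# long-run capital stock between the two single-agent levels.
betas = [0.92, 0.98]
beta_agg = sum(betas) / len(betas)
k_star = golden_rule_k(beta_agg)
assert golden_rule_k(0.92) < k_star < golden_rule_k(0.98)
```

More patient economies (higher β) sustain more capital, so any average-like aggregate places the heterogeneous-agent steady state between the homogeneous extremes.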

17.
While giving subjective responses, a small fraction of respondents overstate or understate their true status. The extent of over- (or under-) evaluation and the proportions of such respondents differ from population to population. These biases alter the significance level and power of commonly used tests. In this paper the authors determine the effects of these biases on the commonly used Z test, the F test of ANOVA, and the χ2 test for testing the equality of multinomial distributions.
This paper was presented at the International Conference on Recent Advances in Survey Sampling, in honor of Professor J.N.K. Rao, held at Ottawa in July 2002.
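The effect on the Z test's significance level is easy to see by simulation: a small contaminating fraction of overstated responses shifts the sample mean and inflates the rejection rate under the null. The contamination numbers below (10% of respondents overstating by two standard deviations) are purely illustrative, not the paper's calculations:

```python
import math
import random
import statistics

random.seed(4)

def z_test_reject(sample, mu0=0.0, sigma=1.0, crit=1.96):
    """Two-sided Z test of H0: mu = mu0 with known sigma."""
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return abs(z) > crit

def reject_rate(bias_frac, shift, reps=2000, n=50):
    """Fraction of H0-true samples rejected when a fraction
    `bias_frac` of respondents overstate their value by `shift`."""
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(0, 1) +
                  (shift if random.random() < bias_frac else 0.0)
                  for _ in range(n)]
        hits += z_test_reject(sample)
    return hits / reps

assert reject_rate(0.0, 0.0) < 0.10    # nominal size, about 5%
assert reject_rate(0.10, 2.0) > 0.20   # response bias inflates the size
```

The same mechanism distorts the F and χ2 tests studied in the paper: the contamination changes both the location and the dispersion that the test statistics assume.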

18.
This paper reviews the organizational set-up of a national statistical office, its staffing levels, and the subjects it covers. Two groups of employees of a statistical office are considered with respect to the teaching of statistics: those already in employment and those expected to be employed by a statistical office. The Statistical Training Programme for Africa (STPA), under which the present study was undertaken, and its work to improve and strengthen statistical training for current and prospective employees of statistical offices are described, including selected aspects of the programme. Teaching programmes for those currently in employment, aimed at improving their work performance, are also described, along with the achievements and problems of the programme. In conclusion, a new framework for revitalising the teaching of statistics in Africa in the 1990s is outlined.

19.
In the research project on data anonymity, the possibilities and difficulties of restoring the identity of respondents whose data have been anonymized were tested in realistic simulations. In this paper the results of applying (a) a matching procedure and (b) a method based on discriminant analysis are reported. In the experiments carried out, empirical data from a handbook of German scientists and scholars and the German microcensus were used. A check of the results by an independent data trustee demonstrated that a real intruder faces greater difficulty in achieving an identification than is frequently assumed.
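The matching procedure's core idea is key-based record linkage: join an external register to the anonymized file on shared quasi-identifiers and claim an identification whenever the match is unique. A minimal sketch; the field names and records are invented for illustration:

```python
# Toy re-identification experiment: link a named external register to
# an "anonymized" file on shared quasi-identifiers.
register = [
    {'name': 'A. Weber', 'birth_year': 1948, 'field': 'physics',   'city': 'Bonn'},
    {'name': 'B. Klein', 'birth_year': 1952, 'field': 'chemistry', 'city': 'Mainz'},
]
anonymized = [
    {'id': 101, 'birth_year': 1948, 'field': 'physics', 'city': 'Bonn'},
    {'id': 102, 'birth_year': 1952, 'field': 'biology', 'city': 'Mainz'},
]
KEY = ('birth_year', 'field', 'city')

def links(register, anonymized):
    """Claim an identification only when exactly one anonymized
    record agrees with the register record on every key variable."""
    out = []
    for r in register:
        hits = [a for a in anonymized
                if all(a[k] == r[k] for k in KEY)]
        if len(hits) == 1:
            out.append((r['name'], hits[0]['id']))
    return out

assert links(register, anonymized) == [('A. Weber', 101)]
```

B. Klein fails to link because the two sources record the field differently, which illustrates the paper's finding: real-world recording discrepancies make identification harder than is frequently assumed.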

20.
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased toward such densities. Using our novel likelihood-based scoring rules avoids this problem.
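A censored likelihood score for the left tail can be sketched as follows: observations inside the region of interest contribute their log density, while observations outside contribute only the log of the forecast's mass on the complement, so a forecast cannot gain by simply piling mass into the tail. The Gaussian forecasts and the 5% threshold below are illustrative, not the paper's empirical setup:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def censored_log_score(y, mu, sd, threshold):
    """Censored likelihood score for the left-tail region y < threshold
    under an N(mu, sd) density forecast. Outside the region only the
    forecast's total mass on the complement is scored."""
    if y < threshold:
        return math.log(normal_pdf(y, mu, sd))
    return math.log(1 - normal_cdf(threshold, mu, sd))

# Score two competing forecasts of the 5% left tail on draws from the
# true N(0, 1) density; the correct forecast should score higher.
random.seed(3)
t = -1.64
data = [random.gauss(0, 1) for _ in range(4000)]
score_true = sum(censored_log_score(y, 0.0, 1.0, t) for y in data)
score_shifted = sum(censored_log_score(y, -1.0, 1.0, t) for y in data)
assert score_true > score_shifted   # the tail-heavy N(-1, 1) does not win
```

Under a plain weighted-likelihood score the shifted forecast, with far more mass below the threshold, would tend to win; the censoring term is what removes that bias.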

