Similar Literature
20 similar documents retrieved.
1.
We present some general results on Fisher information (FI) contained in upper (or lower) record values and associated record times generated from a sequence of i.i.d. continuous variables. For the record data obtained from a random sample of fixed size, we establish an interesting relationship between its FI content and the FI in the data consisting of sequential maxima. We also consider the record data from an inverse sampling plan (Samaniego and Whitaker, 1986). We apply the general results to evaluate the FI in upper as well as lower record data from the exponential distribution for both sampling plans. Further, we discuss the implications of our results for statistical inference from these record data. Received: December 2001 Acknowledgements. This research was supported by Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT) grants 7990089 and 1010222 of Chile. We would like to thank the Department of Statistics at the University of Concepción for its hospitality during the stay of H. N. Nagaraja in Chile in March of 2000, when the initial work was done. We are grateful to the two referees for various comments that led to improvements in the paper.
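As an illustrative special case (a standard calculation, not drawn from the paper itself): for a scale-parameter exponential, the FI carried by the first n upper records can be computed directly because the record spacings are i.i.d. by memorylessness. Writing $I_X(\theta)=\mathbb{E}\big[\big(\tfrac{\partial}{\partial\theta}\log f(X;\theta)\big)^2\big]$ and taking $f(x;\theta)=\theta^{-1}e^{-x/\theta}$, a single observation carries $I_X(\theta)=1/\theta^2$. The upper records satisfy $R_i=\sum_{j\le i}D_j$ with spacings $D_j$ i.i.d. $\mathrm{Exp}(\theta)$, and FI is invariant under this one-to-one, parameter-free transformation, so
$$ I_{R_1,\dots,R_n}(\theta) = I_{D_1,\dots,D_n}(\theta) = \frac{n}{\theta^2}, $$
i.e. n exponential records are exactly as informative about the scale as n i.i.d. observations. The paper's general results cover broader settings (record times, inverse sampling) than this special case.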

2.
J. Ahmadi & N. R. Arghami, Metrika, 2001, 53(3): 195-206
In this article, we establish some general results comparing the Fisher information contained in n record values with that contained in n iid observations from the original distribution. Some common distributions are classified according to this criterion. We also propose some methods of estimation based on record values. The results may be of interest in some life testing problems. Received: September 1999

3.
Open Social Innovation (OSI) involves the collaboration of multiple stakeholders to generate ideas, and develop and scale solutions to make progress on societal challenges. In an OSI project, stakeholders share data and information, utilize it to better understand a problem, and combine data with digital technologies to create digitally-enabled solutions. Consequently, data governance is essential for orchestrating an OSI project to facilitate the coordination of innovation. Because OSI brings multiple stakeholders together, and each stakeholder participates voluntarily, data governance in OSI has a distributed nature. In this essay we put forward a framework consisting of three dimensions allowing an inquiry into the effectiveness of such distributed data governance: (1) openness (i.e., freely sharing data and information), (2) accountability (i.e., willingness to be held responsible and provide justifications for one's conduct) and (3) power (i.e., resourceful actors' ability to impact other stakeholders' actions). We apply this framework to reflect on the OSI project #WirVsVirus ("We versus virus" in English), to illustrate the challenges in organizing effective distributed data governance, and derive implications for research and practice.

4.
We obtain an upper bound for a measure of the performance of the least squares predictor of the jth record of a sequence of continuous i.i.d. random variables as a function of the ith record. We also show that this bound is attainable, up to location and scale parameters, by exponential distributions. This work was supported in part by Ministerio de Educación y Ciencia through Grants PB96-1416-CO2-02 and HA 1997-0123.
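For intuition, in the exponential case the least squares (minimum mean-squared-error) predictor has a simple closed form; the following is a standard fact about exponential records, not necessarily the paper's exact performance measure. With upper records $R_1<R_2<\dots$ from $\mathrm{Exp}(\theta)$, the record spacings are i.i.d. $\mathrm{Exp}(\theta)$, so for $j>i$
$$ \hat R_j = \mathbb{E}[R_j \mid R_i] = R_i + (j-i)\,\theta, \qquad \mathbb{E}\big[(R_j-\hat R_j)^2 \mid R_i\big] = (j-i)\,\theta^2 . $$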

5.
Much research studies US inflation history using a trend-cycle model with unobserved components, where the trend may be viewed as the Fed's evolving inflation target or long-horizon expected inflation. We provide a novel way to measure the slowly evolving trend and the cycle (or inflation gap), by combining inflation predictions from the Survey of Professional Forecasters (SPF) with realized inflation. The SPF forecasts may be treated either as rational expectations (RE) or as updating according to a sticky information (SI) law of motion. We estimate RE and SI state-space models with stochastic volatility on samples of consumer price index and gross national product/gross domestic product deflator inflation and the associated SPF inflation predictions using a particle Metropolis–Markov chain Monte Carlo sampler. The trend converges to 2% and its volatility declines over time—two tendencies largely complete by the late 1990s.
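A minimal sketch of the kind of trend–cycle (unobserved components) model the abstract describes; the notation is generic and the authors' specification may differ:
$$ \pi_t = \tau_t + c_t, \qquad \tau_t = \tau_{t-1} + \eta_t,\ \ \eta_t \sim N(0,\sigma_{\eta,t}^2), \qquad c_t \sim N(0,\sigma_{c,t}^2), $$
with stochastic volatility, e.g. $\log\sigma_{\eta,t}^2 = \log\sigma_{\eta,t-1}^2 + \nu_t$. Under rational expectations a long-horizon SPF forecast equals (up to noise) the trend $\tau_t$; under sticky information each forecaster updates to that value only with some probability per period, which changes the measurement equation linking the SPF data to $\tau_t$.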

6.
朱创录 & 钟东, Value Engineering (价值工程), 2011, 30(33): 137-138
To improve the sharing of hospital electronic medical record (EMR) information, we use Web Services to build an EMR exchange system based on the Clinical Document Architecture (CDA) standard. CDA documents provide standardized medical records that can be exchanged among healthcare institutions, with the complete medical information defined in XML. Web Services technology allows the HTTP protocol to integrate systems across different platforms in a distributed architecture, overcoming the limitations and drawbacks of the closed architectures of existing medical information environments, and yields a portable, readable, and interoperable EMR exchange system that preserves record integrity and improves the quality of the medical system.
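A minimal sketch of the kind of payload such a system exchanges. The urn:hl7-org:v3 namespace and the ClinicalDocument root follow the HL7 CDA standard, but the nested elements below are simplified placeholders chosen for illustration and do not constitute a conformant CDA document.

```python
# Assemble a simplified CDA-style XML payload that a Web Services endpoint
# could exchange between hospital systems. Element names other than the
# ClinicalDocument root are illustrative placeholders, not the full CDA schema.
import xml.etree.ElementTree as ET

HL7_NS = "urn:hl7-org:v3"

def build_cda_stub(patient_id: str, title: str, narrative: str) -> bytes:
    ET.register_namespace("", HL7_NS)                      # default namespace
    doc = ET.Element(f"{{{HL7_NS}}}ClinicalDocument")
    ET.SubElement(doc, f"{{{HL7_NS}}}title").text = title
    record_target = ET.SubElement(doc, f"{{{HL7_NS}}}recordTarget")
    patient_role = ET.SubElement(record_target, f"{{{HL7_NS}}}patientRole")
    ET.SubElement(patient_role, f"{{{HL7_NS}}}id", extension=patient_id)
    component = ET.SubElement(doc, f"{{{HL7_NS}}}component")
    ET.SubElement(component, f"{{{HL7_NS}}}text").text = narrative
    return ET.tostring(doc, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    # The resulting bytes could be sent through any SOAP/REST Web Services client.
    print(build_cda_stub("P-0001", "Discharge Summary",
                         "Patient stable at discharge.").decode("utf-8"))
```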

7.
The dynamic behavior of the term structure of interest rates is difficult to replicate with models, and even models with a proven track record of empirical performance have underperformed since the early 2000s. On the other hand, survey expectations can accurately predict yields, but they are typically not available for all maturities and/or forecast horizons. We show how survey expectations can be exploited to improve the accuracy of yield curve forecasts given by a base model. We do so by employing a flexible exponential tilting method that anchors the model forecasts to the survey expectations, and we develop a test to guide the choice of the anchoring points. The method implicitly incorporates into yield curve forecasts any information that survey participants have access to—such as information about the current state of the economy or forward-looking information contained in monetary policy announcements—without the need to explicitly model it. We document that anchoring delivers large and significant gains in forecast accuracy relative to the class of models that are widely adopted by financial and policy institutions for forecasting the term structure of interest rates.
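A generic statement of the exponential-tilting idea (notation ours; the authors' implementation and anchoring test may differ): given a base-model forecast density $f(y)$ for future yields and survey-based moment conditions $\mathbb{E}_{\tilde f}[g(y)]=\bar g$ (e.g. the survey expectation of selected maturities and horizons), the tilted density closest to $f$ in Kullback–Leibler divergence while satisfying the constraints is
$$ \tilde f(y) \propto f(y)\,\exp\{\gamma' g(y)\}, $$
with $\gamma$ chosen so that the moment conditions hold; forecasts are then taken under $\tilde f$ instead of $f$.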

8.
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record from being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be distinguished from at least k−1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge-cutting algorithm is designed to divide the complete graph into multiple trees/components. Components with more than 2k−1 vertices are subsequently split to guarantee that each resulting component has between k and 2k−1 vertices. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). The empirical experiments show that our approach results in substantial improvements over the baseline heuristic algorithms, as well as over the bottom-up approach with the same approximation bound O(k). Compared to the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50 the adopted top-down strategy achieves similar performance in terms of information loss while spending much less computing time. This demonstrates that our approach is a good choice for the k-anonymity problem when both data utility and runtime need to be considered, especially when k is smaller than 50 and the record set is large enough for runtime to matter.
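A greatly simplified sketch of the general idea (equivalence classes of size between k and 2k−1 whose quasi-identifiers are generalised to a common value). This is a greedy sort-and-cut heuristic written for illustration only; it is not the paper's graph-based edge-cutting algorithm and carries no O(k) guarantee. It assumes numeric quasi-identifiers and at least k records.

```python
# Greedy k-anonymisation sketch: sort records so similar ones are adjacent,
# cut them into groups of size in [k, 2k-1], then generalise each group's
# quasi-identifiers to a shared range so its records become indistinguishable.

def k_anonymise(records, quasi_ids, k):
    ordered = sorted(records, key=lambda r: tuple(r[q] for q in quasi_ids))
    groups, i = [], 0
    while i < len(ordered):
        remaining = len(ordered) - i
        size = remaining if remaining < 2 * k else k   # group sizes stay in [k, 2k-1]
        groups.append(ordered[i:i + size])
        i += size
    anonymised = []
    for group in groups:
        # Range of each quasi-identifier within the group (the equivalence class).
        spans = {q: (min(r[q] for r in group), max(r[q] for r in group))
                 for q in quasi_ids}
        for r in group:
            g = dict(r)
            for q, (lo, hi) in spans.items():
                g[q] = str(lo) if lo == hi else f"[{lo}-{hi}]"
            anonymised.append(g)
    return anonymised

# Example: 2-anonymise the age/zip quasi-identifiers, leaving other fields intact.
rows = [{"age": 23, "zip": 47677, "disease": "flu"},
        {"age": 27, "zip": 47602, "disease": "cold"},
        {"age": 52, "zip": 47905, "disease": "flu"},
        {"age": 55, "zip": 47909, "disease": "asthma"}]
print(k_anonymise(rows, ["age", "zip"], k=2))
```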

9.
刘兴淮, 温丛剑 & 徐燕梅, Value Engineering (价值工程), 2011, 30(30): 297-298
This paper describes how part of the medical record front-sheet data is obtained from the HIS (hospital information system) and how new problems arising during use are resolved. It also presents the design of a new data interface for the medical record front sheet that links the systems without errors, thereby improving medical quality and work efficiency and enabling scientifically sound network sharing of the data.

10.
Coordination – or the information exchange among physicians and hospital staff – is necessary for desirable patient outcomes in healthcare delivery. However, coordination is difficult because healthcare delivery processes are information intensive, complex and require interactions of hospitals with autonomous physicians working in multiple operational systems (i.e. multiple hospitals). We examine how three important variables distinctive of the healthcare operations context – use of IT for dissemination of test results (ITDR) (i.e. electronic health records systems) by physicians and hospital staff, social interaction ties among them, and physician employment – influence information exchange and patient perceptions of their care. Drawing from the literature on process inter-dependencies and coordination, vertical integration and social exchange, we develop and test research hypotheses linking ITDR, social interaction ties and physician employment to the information exchange relationship, and the information exchange relationship to provider–patient communication. Using a paired sample of primary survey data and secondary archival data from CMS HCAHPS for 173 hospitals in the USA, we find that a stronger information exchange relationship drives provider–patient communication, and stronger social interaction ties drive the information exchange relationship. Social interaction ties fully mediate the relationship between ITDR and the information exchange relationship. Physician employment amplifies the link between ITDR and social interaction ties, but does not have an effect on the link between ITDR and information exchange. We do not find a direct relationship between ITDR and either the information exchange relationship or provider–patient communication.

11.
Statistical offices are responsible for publishing accurate statistical information about many different aspects of society. This task is complicated considerably by the fact that data collected by statistical offices generally contain errors. These errors have to be corrected before reliable statistical information can be published. This correction process is referred to as statistical data editing. Traditionally, data editing was mainly an interactive activity with the aim of correcting all data in every detail. For that reason the data editing process was both expensive and time-consuming. To improve its efficiency, the editing process can be partly automated. One often divides the statistical data editing process into the error localisation step and the imputation step. In this article we restrict ourselves to discussing the former step, and provide an assessment, based on personal experience, of several selected algorithms for automatically solving the error localisation problem for numerical (continuous) data. Our article can be seen as an extension of the overview article by Liepins, Garfinkel & Kunnathur (1982). All algorithms we discuss are based on the (generalised) Fellegi–Holt paradigm, which says that the data of a record should be made to satisfy all edits by changing as few (weighted) fields as possible. The error localisation problem may have several optimal solutions for a record. In contrast to what is common in the literature, most of the algorithms we describe aim to find all optimal solutions rather than just one. As numerical data mostly occur in business surveys, the described algorithms are mainly suitable for business surveys and less so for social surveys. For four of the algorithms we compare their complexity and their computing times on six realistic data sets.
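One common way to write the (generalised) Fellegi–Holt error-localisation problem for numerical data with linear edits is as a mixed-integer program (notation ours; the surveyed algorithms solve the problem in different ways and typically aim to enumerate all optima): given a record $x^0\in\mathbb{R}^p$, field weights $w_j>0$ and edits $Ax\le b$,
$$ \min_{\delta\in\{0,1\}^p,\;x}\ \sum_{j=1}^{p} w_j\delta_j \quad \text{s.t.}\quad Ax\le b, \qquad |x_j-x_j^0|\le M\delta_j \ \ (j=1,\dots,p), $$
where $M$ is a sufficiently large constant. Fields with $\delta_j=0$ keep their reported values, and an optimal $\delta$ marks a minimum-weight set of fields to change so that the record can satisfy all edits.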

12.
梁敏, Value Engineering (价值工程), 2011, 30(30): 135-135
To meet the medical record statistics system's requirements for front-sheet data, this paper describes the interface design and the analysis approach used to extract these data from the EMR system, enabling the electronic medical record system and the medical record statistics system to jointly build and share patients' medical data.

13.
Using proprietary data that rate corporate social responsibility (CSR) disclosures of firms in 21 countries, this study examines how the strength of nation-level institutions affects the extent of CSR disclosures. We then examine the valuation implications of CSR disclosures and consider how the relation between CSR disclosures and firm value varies across countries. In contrast to prior studies, we separate CSR disclosures into an expected and an unexpected portion, where the unexpected portion is a proxy for the incremental information contained in CSR disclosures. We observe a positive relation between unexpected CSR disclosure and firm value measured by Tobin's Q. We also find that, while countries with strong nation-level institutions promote more CSR disclosures, the valuation of a unit increase in unexpected CSR disclosures is higher when nation-level institutions are weak.
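A sketch of the two-stage specification the abstract implies (variable names are placeholders, not the authors'): first regress disclosure on its determinants, $CSR_{it}=\alpha+\beta' Z_{it}+u_{it}$, taking the fitted value as the expected portion and the residual $UnexpCSR_{it}=CSR_{it}-\widehat{CSR}_{it}$ as the unexpected portion; then estimate the valuation equation
$$ Q_{it} = \gamma_0 + \gamma_1\,UnexpCSR_{it} + \gamma_2' X_{it} + \varepsilon_{it}, $$
where $Q_{it}$ is Tobin's Q and $X_{it}$ are controls. The reported positive relation corresponds to $\gamma_1>0$, with the coefficient larger where nation-level institutions are weak.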

14.
We propose a simple and flexible framework that allows for a comprehensive analysis of tail interdependence in high dimensions. We use co-exceedances to capture the structure of the dependence in the tails and, relying on the concept of multi-information, define the coefficient of tail interdependence. Within this framework, we develop statistical tests of (i) independence in the tails, (ii) goodness-of-fit of the tail interdependence structure of a hypothesized model with the one observed in the data, and (iii) dependence symmetry between any two tails. We present an analysis of tail interdependence among 250 constituents of the S&P 250 index.
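One plausible formalisation of the ingredients named in the abstract (the authors' exact definitions may differ): define co-exceedance indicators $Z_i=\mathbf{1}\{X_i\le q_i(\alpha)\}$ for a tail probability $\alpha$, and measure their joint dependence by the multi-information (total correlation)
$$ M(Z_1,\dots,Z_d) = \sum_{i=1}^{d} H(Z_i) - H(Z_1,\dots,Z_d) = \mathrm{KL}\!\Big(P_{Z_1,\dots,Z_d}\,\Big\|\,\prod_i P_{Z_i}\Big), $$
which is zero exactly when the tails are independent. A coefficient of tail interdependence can then be obtained by normalising $M$, and tests of the kind listed in (i)–(iii) compare the observed co-exceedance distribution with the one implied by independence, by a hypothesized model, or by the opposite tail.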

15.
This paper provides a review of common statistical disclosure control (SDC) methods implemented at statistical agencies for standard tabular outputs containing whole population counts from a census (either enumerated or based on a register). These methods include record swapping on the microdata prior to its tabulation and rounding of entries in the tables after they are produced. The approach for assessing SDC methods is based on a disclosure risk–data utility framework and the need to balance managing disclosure risk against maximizing the amount of information that can be released to users while ensuring high-quality outputs. To carry out the analysis, quantitative measures of disclosure risk and data utility are defined and the methods compared. Conclusions from the analysis show that record swapping as a sole SDC method leaves high probabilities of disclosure risk. Targeted record swapping lowers the disclosure risk, but there is more distortion of distributions. Small cell adjustments (rounding) give protection to census tables by eliminating small cells, but only one set of variables and geographies can be disseminated in order to avoid disclosure by differencing nested tables. Full random rounding offers more protection against disclosure by differencing, but margins are typically rounded separately from the internal cells and tables are not additive. Rounding procedures protect against the perception of disclosure risk better than record swapping, since no small cells appear in the tables. Combining rounding with record swapping raises the level of protection but increases the loss of utility of the census tabular outputs. For some statistical analyses, the combination of record swapping and rounding balances, to some degree, the opposing effects that the two methods have on the utility of the tables.
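A generic illustration of unbiased ("full") random rounding to base b for frequency tables; real census implementations add controls this sketch omits, and margins are typically rounded separately from internal cells, which is why the rounded tables need not add up.

```python
# Unbiased random rounding to a chosen base (commonly 3 or 5): each cell is
# rounded down or up to a multiple of the base with probabilities chosen so
# that the expected value of the rounded cell equals the original count.
import random

def random_round(count: int, base: int = 3) -> int:
    """Round a non-negative cell count to a multiple of `base`, rounding up
    with probability (count mod base) / base so the expectation is preserved."""
    remainder = count % base
    if remainder == 0:
        return count
    round_up = random.random() < remainder / base
    return count - remainder + (base if round_up else 0)

# Example: internal cells of a small table rounded to base 3 -- small cells
# of 1 or 2 become either 0 or 3, so no small cells remain in the output.
table = [[1, 4, 7], [2, 0, 5]]
rounded = [[random_round(c) for c in row] for row in table]
```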

16.
Accounting for the uncertainty in real-time perceptions of the state of the economy is believed to be critical for monetary policy analysis. We investigate this claim through the lens of a New Keynesian model with optimal discretionary policy and partial information. Structural parameters are estimated using a data set that includes real-time and ex post revised observations spanning 1965–2010. In comparison to a standard complete information model, our estimates reveal that under partial information: (i) the Federal Reserve demonstrates a significant concern for stabilizing the output gap after 1979, (ii) the model's fit with revised data improves, and (iii) the tension between optimal and observed policy is smaller.
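For context, the standard building blocks of such a model are as follows (a generic textbook form, not the estimated specification): under discretion the central bank minimises the period loss $L_t=\pi_t^2+\lambda x_t^2$ subject to a New Keynesian Phillips curve $\pi_t=\beta\,\mathbb{E}_t\pi_{t+1}+\kappa x_t+u_t$, where $x_t$ is the output gap; partial information means the bank conditions on noisy real-time measurements such as $x_t^{\mathrm{obs}}=x_t+v_t$ rather than the true states. Finding (i) then corresponds to a substantially positive estimated weight $\lambda$ in the post-1979 sample.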

17.
This research considers cross-national diffusion of international human resource management (IHRM) ideas and practices by applying an emergent frame of sociological conceptualisation – ‘social institutionalism’ (SI). We look at cultural filters to patterns of diffusion, assimilation and adoption of IHRM, using Romania as a case study. The paper considers the former Communist system of employment relations, suggesting that through institutionalisation former ways of thinking have a residual influence on definitions and practice of people management in post-Communist Eastern Europe. The paper provides a new perspective on HRM by discussing the value of SI as a general model for understanding cross-cultural receptivity to HR ideas, sensitising the HR practitioner and academic to institutionalised culture as a historical legacy influencing absorption of international management ideas.

18.
The paper provides evidence on the extent to which inflation expectations generated by a standard Christiano et al. (2005)/Smets and Wouters (2003)-type DSGE model are in line with what is observed in the data. We consider three variants of this model that differ in terms of the behavior of, and the public's information on, the central bank's inflation target, allegedly a key determinant of inflation expectations. We find that (i) time-variation in the inflation target is needed to capture the evolution of expectations during the post-Volcker period; (ii) the variant where agents have Imperfect Information is strongly rejected by the data; (iii) inflation expectations appear to contain information that is not present in the other series used in estimation, and (iv) none of the models fully captures the dynamics of this variable.

19.
Computerised record linkage methods help us combine multiple data sets from different sources when a single data set with all the necessary information is unavailable or when data collection on additional variables is time consuming and extremely costly. Linkage errors are inevitable in the linked data set because error-free unique identifiers are unavailable. Even a small number of linkage errors can lead to substantial bias and increased variability in estimating the parameters of a statistical model. In this paper, we propose a unified theory for statistical analysis with linked data. Our proposed method, unlike those available for secondary data analysis of linked data, exploits record linkage process data as an alternative to taking a costly sample to evaluate error rates from the record linkage procedure. A jackknife method is introduced to estimate the bias, covariance matrix and mean squared error of our proposed estimators. Simulation results are presented to evaluate the performance of the proposed estimators that account for linkage errors.
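A generic delete-one jackknife, illustrating the resampling idea the abstract refers to. The paper's jackknife is tailored to estimators that adjust for linkage errors; this sketch only shows the standard mechanics for an arbitrary scalar statistic.

```python
# Delete-one jackknife for the bias and variance of an estimator: recompute the
# statistic on each leave-one-out sample, then combine the replicates.
import statistics

def jackknife(data, estimator):
    n = len(data)
    theta_hat = estimator(data)
    replicates = [estimator(data[:i] + data[i + 1:]) for i in range(n)]  # leave-one-out
    theta_bar = sum(replicates) / n
    bias = (n - 1) * (theta_bar - theta_hat)
    variance = (n - 1) / n * sum((r - theta_bar) ** 2 for r in replicates)
    return theta_hat - bias, bias, variance   # bias-corrected estimate, bias, variance

# Example with a toy sample and the sample mean (whose jackknife bias is zero).
corrected, bias, var = jackknife([2.0, 4.0, 6.0, 8.0], statistics.fmean)
```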

20.
Web 2.0 has brought innovations in digital government, namely government 2.0. Social media, as one part of Web 2.0, could potentially support fuller participation and public interaction, and it enjoys a very high level of acceptance by individual users and government agencies around the world. Web 2.0 and social media usage in the public sector still needs to be tested from the perspective of not only the government but also the community as the recipient of services. Therefore, this study aims to answer the following research questions: How effective has government 2.0 implementation been in Indonesia? Is there a correlation between e-government management and government 2.0 implementation? We adopted the sophistication index (SI) by Bonson et al. (2012) [1] to answer the first research question. The SI examines the presence of Web 2.0 features and social media applications on government institutions' websites. To answer the second research question, we conducted parametric statistical tests to assess how e-government implementation, measured by the Indonesian E-Government Rating (PEGI) score, has influenced the effectiveness of government 2.0 use by government institutions in Indonesia. We observed the websites and social media accounts of 116 Indonesian government institutions. According to the evaluation of Web 2.0 and social media use, the average SI score is 42%. These results indicate that, in general, government institutions in Indonesia have used Web 2.0 and social media features, although the adoption rate has not been equal. The correlations between the PEGI scores and SI values also suggest a positive relationship between the effectiveness of e-government implementation and the effectiveness of government 2.0 implementation. Government institutions that have been effective in implementing e-government have thus also been effective in implementing government 2.0.
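An illustrative sketch of the computation described above. The feature checklist and scoring rule here are simplified assumptions, not Bonson et al.'s exact index, and the inputs are toy values rather than the study's 116 institutions.

```python
# Score a sophistication index (SI) as the share of tracked Web 2.0 / social
# media features found on an institution's website, then correlate SI with
# PEGI scores across institutions.
from statistics import correlation   # Pearson correlation (Python 3.10+)

FEATURES = ["rss", "blog", "facebook", "twitter", "youtube", "comments"]

def sophistication_index(present: dict) -> float:
    """Fraction of tracked features the institution's website exhibits."""
    return sum(bool(present.get(f)) for f in FEATURES) / len(FEATURES)

# Toy inputs purely for illustration.
institutions = [
    {"present": {"facebook": True, "twitter": True}, "pegi": 2.4},
    {"present": {"facebook": True, "twitter": True, "rss": True, "youtube": True}, "pegi": 3.1},
    {"present": {"twitter": True}, "pegi": 1.9},
]
si = [sophistication_index(inst["present"]) for inst in institutions]
pegi = [inst["pegi"] for inst in institutions]
print(correlation(si, pegi))   # a positive value mirrors the reported relationship
```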
