Similar Documents
19 similar documents found.
1.
Watermark embedding based on the discrete cosine transform (DCT) suffers from rounding errors introduced by the floating-point-to-integer conversion during the transform. This paper applies a differential evolution algorithm to correct such rounding errors and thereby improves the performance of DCT-based fragile digital watermarking. The algorithm is introduced and described, and its performance is evaluated. Finally, detailed comparisons are made of the normalized cross-correlation similarity of the extracted watermark, the peak signal-to-noise ratio of the watermarked image, and the quality and convergence rate of the algorithm's results.
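A minimal sketch of the rounding-error problem described above, using an assumed 8x8 block, embedding position and quantization step: one fragile-watermark bit is quantized into a DCT coefficient, rounding the inverse-DCT pixels to integers shifts that coefficient, and a simple greedy pixel search (a crude stand-in for the paper's differential evolution) shrinks the shift.

```python
# Illustrative sketch only; parameters are assumptions, not the paper's settings.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
POS, DELTA = (3, 4), 12.0                      # embedding position and quantization step

def embed_bit(block, bit):
    """Quantise one DCT coefficient so its offset inside the step codes the bit."""
    c = dctn(block.astype(float), norm="ortho")
    base = DELTA * np.round(c[POS] / DELTA)
    c[POS] = base + (DELTA / 4 if bit else -DELTA / 4)
    return idctn(c, norm="ortho"), c[POS]

def coeff(pixels):
    return dctn(np.asarray(pixels, dtype=float), norm="ortho")[POS]

block = rng.integers(0, 256, (8, 8))
marked, target = embed_bit(block, bit=1)
stored = np.clip(np.round(marked), 0, 255)      # float -> integer pixels introduces the error
print("rounding-induced coefficient drift:", abs(coeff(stored) - target))

# Greedy repair: accept a random +/-1 pixel change only if it reduces the drift.
repaired = stored.copy()
for _ in range(3000):
    i, j = rng.integers(0, 8, size=2)
    cand = repaired.copy()
    cand[i, j] = np.clip(cand[i, j] + rng.choice([-1.0, 1.0]), 0, 255)
    if abs(coeff(cand) - target) < abs(coeff(repaired) - target):
        repaired = cand
print("drift after greedy repair         :", abs(coeff(repaired) - target))
```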

2.
Watermark embedding based on the discrete cosine transform (DCT) suffers from rounding errors introduced by the floating-point-to-integer conversion during the transform. This paper applies a differential evolution algorithm to correct such rounding errors and thereby improves the performance of DCT-based fragile digital watermarking. The algorithm is introduced and described, and its performance is evaluated. Finally, detailed comparisons are made of the normalized cross-correlation similarity of the extracted watermark, the peak signal-to-noise ratio of the watermarked image, and the quality and convergence rate of the algorithm's results.

3.
This paper proposes an algorithm for embedding a watermark in digital audio signals. The embedded watermark is no longer the traditional pseudo-random sequence but a visually recognizable binary image. The algorithm first reduces the two-dimensional watermark image to a one-dimensional sequence, then applies a segmented discrete cosine transform to the digital audio signal, and finally adjusts the embedding strength dynamically according to the characteristics of the audio carrier itself, embedding the watermark adaptively by modifying the mid-frequency DCT coefficients of the audio signal. Experimental results show that the algorithm is highly robust against attacks such as resampling, requantization, and lossy compression.
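A rough sketch of the general approach, with an assumed segment length, coefficient index and strength factor (none of these come from the abstract): the binary watermark is flattened to a bit sequence and each bit is written into one mid-frequency DCT coefficient of an audio segment, with the magnitude tied to the segment's energy.

```python
import numpy as np
from scipy.fft import dct, idct

SEG, MID = 1024, 300                       # assumed segment length and mid-frequency index

def embed(audio, wm_image, alpha=0.02):
    bits = wm_image.astype(np.uint8).ravel()          # 2-D binary image -> 1-D sequence
    out = audio.astype(float).copy()
    for k, bit in enumerate(bits):
        seg = out[k * SEG:(k + 1) * SEG]
        c = dct(seg, norm="ortho")
        strength = alpha * np.sqrt(np.mean(seg ** 2) * SEG)   # adapt to carrier energy
        c[MID] = strength if bit else -strength               # code the bit in the sign
        out[k * SEG:(k + 1) * SEG] = idct(c, norm="ortho")
    return out

def extract(audio, shape):
    n = shape[0] * shape[1]
    bits = [int(dct(audio[k * SEG:(k + 1) * SEG], norm="ortho")[MID] > 0)
            for k in range(n)]
    return np.array(bits, dtype=np.uint8).reshape(shape)

rng = np.random.default_rng(1)
wm = rng.integers(0, 2, (8, 8))                       # stand-in binary "logo"
audio = rng.normal(0, 0.1, SEG * wm.size)             # stand-in audio carrier
marked = embed(audio, wm)
print("watermark recovered exactly:", np.array_equal(extract(marked, wm.shape), wm))
```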

4.
《价值工程》2013,(24):174-175
Rounding problems arise when measurement data are truncated to significant figures, and a sound rounding rule reduces the resulting error. Because measurement data typically come in large volumes, this paper also develops a program for batch processing.
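For illustration, a tiny batch-rounding helper using the round-half-to-even rule, one common choice of "reasonable rounding rule" for measurement data; the function name and digit count are made up for the example.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_measurements(values, digits=2):
    """Round a batch of measured values to the given number of decimal places."""
    q = Decimal(1).scaleb(-digits)                  # e.g. digits=2 -> Decimal('0.01')
    return [float(Decimal(str(v)).quantize(q, rounding=ROUND_HALF_EVEN))
            for v in values]

data = [2.345, 2.335, 2.346, 2.3449, 1.005]
print(round_measurements(data))        # exact halves go to the even neighbour
```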

5.
《价值工程》2013,(4):194-195
In MPEG-4 video encoding, motion estimation and the discrete cosine transform (DCT) account for the largest share of the computation. Block matching is the common approach to motion estimation: given a matching criterion between two sub-blocks, one looks for a search strategy that minimizes the computational cost. Repeated evaluation of the same search points is the main weakness of the traditional diamond search algorithm. This paper improves the diamond search algorithm; with the same number of steps, the improved algorithm reduces the number of search points by nearly 50% and greatly reduces the computational load.
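A simplified sketch of diamond search over a synthetic frame pair, with a dictionary cache so that no candidate point is evaluated twice; the block size, test image and stopping rule are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, x, y, dx, dy, b=16):
    h, w = ref.shape
    if not (0 <= y + dy <= h - b and 0 <= x + dx <= w - b):
        return np.inf                                    # candidate outside the frame
    return np.abs(cur[y:y + b, x:x + b].astype(int)
                  - ref[y + dy:y + dy + b, x + dx:x + dx + b].astype(int)).sum()

def diamond_search(cur, ref, x, y, b=16):
    cache = {}                                           # (dx, dy) -> SAD, evaluated once
    def cost(dx, dy):
        if (dx, dy) not in cache:
            cache[(dx, dy)] = sad(cur, ref, x, y, dx, dy, b)
        return cache[(dx, dy)]

    cx, cy = 0, 0
    while True:                                          # large diamond until the centre wins
        best = min(((cost(cx + dx, cy + dy), cx + dx, cy + dy) for dx, dy in LDSP),
                   key=lambda t: t[0])
        if (best[1], best[2]) == (cx, cy):
            break
        cx, cy = best[1], best[2]
    best = min(((cost(cx + dx, cy + dy), cx + dx, cy + dy) for dx, dy in SDSP),
               key=lambda t: t[0])
    return (best[1], best[2]), len(cache)                # motion vector, distinct points tried

yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 50 * np.sin(xx / 9.0) + 50 * np.cos(yy / 11.0)).astype(np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))           # frame content shifted by a few pixels
mv, evaluated = diamond_search(cur, ref, x=24, y=24)
print("estimated motion vector (dx, dy):", mv, "| distinct points evaluated:", evaluated)
```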

6.
Based on the discrete element method, the factors influencing the granary (silo) effect are analyzed from two aspects, the friction coefficient between the wall and the particles and the particle size, and recommendations are given for the choice of parameter values in discrete element simulations of the silo effect.

7.
This paper studies digital watermarking for color images and presents an adaptive watermark embedding algorithm based on chaotic sequences. The algorithm first embeds a highly robust blind watermark for copyright protection, and then embeds an extremely weak fragile watermark to verify whether the image has been modified.
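A brief sketch of how a chaotic (logistic-map) sequence keyed by an initial value could pick the embedding positions of the fragile layer; the key, position count and LSB pattern are invented for the example, and the robust copyright watermark is omitted.

```python
import numpy as np

def logistic_positions(key, n, size, mu=3.99):
    """Derive n distinct embedding positions from a logistic-map sequence."""
    x, seen, order = key, set(), []
    while len(order) < n:
        x = mu * x * (1 - x)                      # chaotic iteration, x stays in (0, 1)
        p = int(x * size) % size
        if p not in seen:
            seen.add(p)
            order.append(p)
    return np.array(order)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, 64 * 64, dtype=np.uint8)     # flattened colour channel (stand-in)
pos = logistic_positions(key=0.3456, n=256, size=img.size)

# Fragile layer: force the LSB at the key-dependent positions to a known pattern.
marked = img.copy()
marked[pos] = (marked[pos] & 0xFE) | (pos % 2).astype(np.uint8)

def intact(channel):
    return bool(np.all((channel[pos] & 1) == (pos % 2)))

tampered = marked.copy()
tampered[pos[0]] ^= 1                              # modify one watermarked pixel
print("clean channel passes fragile check   :", intact(marked))
print("tampered channel passes fragile check:", intact(tampered))
```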

8.
This paper first explains that digital watermarking is a new technology for information hiding and information security protection in the digital age. It then introduces the principles of watermark embedding and recovery, and on that basis surveys several typical image watermarking algorithms, including algorithms with different embedding strengths, algorithms with different embedding domains, algorithms based on image correlation and the human visual system (HVS), and algorithms based on irreversibility.

9.
陈燕 《价值工程》2013,(36):230-231
Compared with spatial-domain watermarking, DCT-domain watermarking algorithms offer better stability, larger capacity, and better imperceptibility, and with the help of a human perceptual model they allow watermarking systems with good fidelity to be designed. This paper presents a systematic study of the DM-QIM (dither modulation quantization index modulation) watermarking scheme, introduces the principle and model of this digital watermarking algorithm, explores the watermark embedding and extraction schemes, and finally analyzes and summarizes the experimental results.
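A compact sketch of dither-modulation QIM embedding and minimum-distance extraction on stand-in DCT coefficients; the step size, dither values and noise level are assumptions, not the paper's settings.

```python
import numpy as np

DELTA = 8.0
DITHER = np.array([-DELTA / 4, DELTA / 4])          # dither for bit 0 and bit 1

def qim_embed(coeffs, bits):
    d = DITHER[bits]
    return DELTA * np.round((coeffs - d) / DELTA) + d

def qim_extract(coeffs):
    # Requantise with both dithered quantisers and keep the closer one.
    rec = np.stack([DELTA * np.round((coeffs - d) / DELTA) + d for d in DITHER])
    return np.argmin(np.abs(rec - coeffs), axis=0)

rng = np.random.default_rng(4)
coeffs = rng.normal(0, 20, 64)                      # stand-in mid-frequency DCT coefficients
bits = rng.integers(0, 2, 64)
marked = qim_embed(coeffs, bits)
noisy = marked + rng.normal(0, 0.8, 64)             # mild attack, small relative to DELTA
print("bit error rate:", np.mean(qim_extract(noisy) != bits))
```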

10.
任竞颖 《价值工程》2011,30(31):96-97
A texture segmentation method based on an improved wavelet transform and fuzzy kernel clustering is proposed. The method first uses an improved discrete wavelet transform to extract texture features, and then applies fuzzy kernel clustering to each pixel in the feature space to segment the textures. Experimental results show that the proposed algorithm gives good segmentation results.

11.
To resist common video watermarking attacks and video compression attacks, this paper proposes a video watermarking algorithm in the temporal wavelet domain. The algorithm selects eight consecutive frames in each scene for watermark embedding, applies a three-level temporal wavelet transform to the eight frames to obtain one low-frequency frame, and adaptively embeds the watermark into the mid-to-low-frequency DCT coefficients of that low-frequency frame, thereby effectively ensuring the watermark's resistance to attacks. Experimental results show that the video watermarking system is imperceptible while remaining highly robust and secure.
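An illustrative sketch of the pipeline under simplifying assumptions (small frames, a fixed quantization step, a plain additive-noise attack, no adaptive strength): eight frames are reduced to one low-frequency frame by a three-level temporal Haar transform, watermark bits are quantized into low-frequency DCT coefficients of that frame, and the temporal transform is inverted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def haar_forward(frames):                 # frames: (8, H, W) -> low-frequency frame + details
    a, details = frames.astype(float), []
    for _ in range(3):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))
        a = (even + odd) / np.sqrt(2)
    return a[0], details

def haar_inverse(low, details):
    a = low[None, ...]
    for d in reversed(details):
        up = np.empty((2 * a.shape[0],) + a.shape[1:])
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def embed(frames, bits, delta=24.0):
    low, details = haar_forward(frames)
    c = dctn(low, norm="ortho")
    for k, bit in enumerate(bits):        # low-frequency band, skipping the DC term
        u, v = 1 + k // 8, 1 + k % 8
        c[u, v] = delta * np.round(c[u, v] / delta) + (delta / 4 if bit else -delta / 4)
    return haar_inverse(idctn(c, norm="ortho"), details)

def extract(frames, n_bits, delta=24.0):
    c = dctn(haar_forward(frames)[0], norm="ortho")
    out = []
    for k in range(n_bits):
        u, v = 1 + k // 8, 1 + k % 8
        out.append(int(c[u, v] - delta * np.floor(c[u, v] / delta) < delta / 2))
    return np.array(out)

rng = np.random.default_rng(5)
frames = rng.integers(0, 256, (8, 32, 32)).astype(float)
bits = rng.integers(0, 2, 16)
marked = embed(frames, bits)
attacked = marked + rng.normal(0, 1.0, marked.shape)        # mild noise attack
print("all bits recovered:", np.array_equal(extract(attacked, 16), bits))
```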

12.
As more and more wireless sensor nodes and networks are employed to acquire and transmit the state information of power equipment in the smart grid, viable security solutions are urgently needed to ensure secure smart grid communications. Conventional information security solutions, such as encryption/decryption, digital signatures and so forth, are no longer applicable to wireless sensor networks in the smart grid, where bulk messages need to be exchanged continuously, because these cryptographic solutions would consume a large portion of the extremely limited resources on sensor nodes. In this article, a security solution based on digital watermarking is adopted to achieve secure communications for wireless sensor networks in the smart grid through data and entity authentication at a low operational cost. Our solution consists of a secure digital watermarking framework and two digital watermarking algorithms based on alternating electric current and on a time window, respectively. Both watermarking algorithms are composed of watermark generation, embedding and detection. Simulation experiments are provided to verify the correctness and practicability of our watermarking algorithms. Additionally, a new cloud-based architecture for the information integration of the smart grid is proposed on the basis of our security solution.
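A minimal sketch of a time-window style watermark for sensor readings, illustrating the general idea of authenticating the data by a mark embedded in the readings themselves rather than the article's two algorithms; the shared key, window length, keyed digest and LSB embedding are all assumptions made for the example.

```python
import hmac, hashlib
import numpy as np

KEY = b"shared-node-key"        # hypothetical pre-shared key
WINDOW = 16                     # readings per time window (assumed)

def window_digest_bits(high_parts, window_id, n_bits):
    msg = window_id.to_bytes(4, "big") + high_parts.tobytes()
    digest = hmac.new(KEY, msg, hashlib.sha256).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n_bits]

def embed(readings, window_id):
    r = readings.astype(np.uint16)
    bits = window_digest_bits(r >> 1, window_id, r.size)    # digest over the high bits only
    return (r & ~np.uint16(1)) | bits.astype(np.uint16)     # write the digest into the LSBs

def verify(readings, window_id):
    r = readings.astype(np.uint16)
    expected = window_digest_bits(r >> 1, window_id, r.size)
    return bool(np.array_equal(r & 1, expected))

rng = np.random.default_rng(6)
raw = rng.integers(0, 1024, WINDOW, dtype=np.uint16)        # e.g. 10-bit ADC samples
sent = embed(raw, window_id=42)
print("authentic window verifies:", verify(sent, window_id=42))

forged = sent.copy()
forged[3] += 8                                              # attacker alters one reading
print("tampered window verifies :", verify(forged, window_id=42))
```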

13.
To resist common video watermarking attacks and video compression attacks, this paper proposes a video watermarking algorithm in the temporal wavelet domain. The algorithm selects eight consecutive frames in each scene for watermark embedding, applies a three-level temporal wavelet transform to the eight frames to obtain one low-frequency frame, and adaptively embeds the watermark into the mid-to-low-frequency DCT coefficients of that low-frequency frame, thereby effectively ensuring the watermark's resistance to attacks. Experimental results show that the video watermarking system is imperceptible while remaining highly robust and secure.

14.
Many empirical applications of regression discontinuity (RD) models use a running variable that is rounded and hence discrete, e.g. age in years, or birth weight in ounces. This paper shows that standard RD estimation using a rounded discrete running variable leads to inconsistent estimates of treatment effects, even when the true functional form relating the outcome and the running variable is known and is correctly specified. This paper provides simple formulas to correct for this discretization bias. The proposed approach does not require instrumental variables, but instead uses information regarding the distribution of rounding errors, which is easily obtained and often close to uniform. Bounds can be obtained without knowing the distribution of the rounding error. The proposed approach is applied to estimate the effect of Medicare on insurance coverage in the USA, and to investigate the retirement-consumption puzzle in China, utilizing the Chinese mandatory retirement policy.
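A small simulation sketch of the discretization bias (not the paper's correction formulas): even with a correctly specified linear outcome model, an RD regression on the rounded running variable misses the true effect, because units just below the cutoff are rounded across it; the data-generating values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n, tau = 200_000, 2.0                         # sample size, true treatment effect

x = rng.uniform(-5, 5, n)                     # latent continuous running variable
treated = (x >= 0).astype(float)              # treatment assigned on the true value
y = 1.0 + 0.5 * x + tau * treated + rng.normal(0, 1, n)

def rd_estimate(running):
    d = (running >= 0).astype(float)
    X = np.column_stack([np.ones(n), running, d])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]                            # coefficient on the cutoff dummy

print("estimate, continuous running variable:", round(rd_estimate(x), 3))
print("estimate, rounded running variable   :", round(rd_estimate(np.round(x)), 3))
```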

15.
Resampling methods are widely studied and increasingly employed in applied research and practice. When dealing with complex sampling designs, common resampling techniques require adjusting noninteger sampling weights to construct the so-called “pseudopopulation” on which the actual resampling is performed. The practice of rounding, however, has been empirically shown to be harmful under general designs. In this paper, we present asymptotic results concerning, in particular, the practice of rounding resampling weights to the nearest integer, an approach that is commonly adopted by virtue of its reduced computational burden, as opposed to randomization-based alternatives. We prove that such an approach leads to inconsistent estimation of the distribution function of the survey variable; we provide empirical evidence of the practical consequences of this inconsistency when point estimation of the variance of complex estimators is of interest.
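A short numerical sketch of the issue, with invented weights and survey values (not the paper's asymptotic argument): nearest-integer rounding of noninteger weights distorts the implied total whenever the fractional parts are not centred on one half, whereas randomized rounding preserves it in expectation.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
y = rng.gamma(2.0, 10.0, n)                      # survey variable
w = rng.uniform(2.1, 2.4, n)                     # noninteger weights; fractional parts below one half

ht_total = np.sum(w * y)                         # design-weighted total

def nearest_total(y, w):
    return np.sum(np.round(w) * y)               # deterministic rounding of the weights

def randomised_total(y, w, rng):
    extra = rng.random(n) < (w - np.floor(w))    # keep the fractional part in expectation
    return np.sum((np.floor(w) + extra) * y)

print("weighted total            :", round(ht_total))
print("nearest-integer rounding  :", round(nearest_total(y, w)))
print("randomised rounding (mean):",
      round(np.mean([randomised_total(y, w, rng) for _ in range(200)])))
```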

16.
This paper provides a review of common statistical disclosure control (SDC) methods implemented at statistical agencies for standard tabular outputs containing whole population counts from a census (either enumerated or based on a register). These methods include record swapping on the microdata prior to its tabulation and rounding of entries in the tables after they are produced. The approach for assessing SDC methods is based on a disclosure risk-data utility framework and the need to balance managing disclosure risk against maximizing the amount of information that can be released to users and ensuring high-quality outputs. To carry out the analysis, quantitative measures of disclosure risk and data utility are defined and the methods compared. Conclusions from the analysis show that record swapping as a sole SDC method leaves high probabilities of disclosure risk. Targeted record swapping lowers the disclosure risk, but there is more distortion of distributions. Small cell adjustments (rounding) give protection to census tables by eliminating small cells, but only one set of variables and geographies can be disseminated in order to avoid disclosure by differencing nested tables. Full random rounding offers more protection against disclosure by differencing, but margins are typically rounded separately from the internal cells and tables are not additive. Rounding procedures protect against the perception of disclosure risk better than record swapping, since no small cells appear in the tables. Combining rounding with record swapping raises the level of protection but increases the loss of utility in census tabular outputs. For some statistical analyses, the combination of record swapping and rounding balances, to some degree, the opposing effects that the two methods have on the utility of the tables.
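For illustration, a sketch of unbiased random rounding to base 3, one common small-cell adjustment mechanism for census counts; the cell values and base are made up for the example.

```python
import numpy as np

def random_round(counts, base=3, rng=None):
    """Round each cell to an adjacent multiple of `base`, unbiased in expectation."""
    if rng is None:
        rng = np.random.default_rng()
    counts = np.asarray(counts)
    lower = (counts // base) * base
    residue = counts - lower
    go_up = rng.random(counts.shape) < residue / base     # P(round up) = residue / base
    return lower + go_up * base

rng = np.random.default_rng(9)
table = np.array([1, 2, 4, 7, 12, 30])                    # internal cells of a census table
print("original counts:", table)
print("rounded counts :", random_round(table, rng=rng))   # no cells of 1 or 2 remain
# As the abstract notes, margins rounded separately need not equal the sum of the
# rounded internal cells, so additivity of the table is lost.
```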

17.
This paper describes improvements on methods developed by Burgstahler and Dichev (1997, Earnings management to avoid earnings decreases and losses, Journal of Accounting and Economics, 24(1), pp. 99–126) and Bollen and Pool (2009, Do hedge fund managers misreport returns? Evidence from the pooled distribution, Journal of Finance, 64(5), pp. 2257–2288) to test for earnings management by identifying discontinuities in distributions of scaled earnings or earnings forecast errors. While existing methods use preselected bandwidths for kernel density estimation and histogram construction, the proposed test procedure addresses the key problem of bandwidth selection by using a bootstrap test to endogenise the selection step. The main advantage offered by the bootstrap procedure over prior methods is that it provides a reference distribution that cannot be globally distinguished from the empirical distribution, rather than assuming a correct reference distribution. This procedure limits the researcher's degrees of freedom and offers a simple way to find and test a local discontinuity. I apply the bootstrap density estimation to earnings, earnings changes, and earnings forecast errors in US firms over the period 1976–2010. Significance levels found in earlier studies are greatly reduced, often to insignificant values. Discontinuities cannot be detected in analysts' forecast errors, and such findings of discontinuities in earlier research can be explained by a simple rounding mechanism. Earnings data show a large drop in loss aversion after 2003 that cannot be detected in changes of earnings.
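A sketch of the histogram-based standardized-difference statistic in the spirit of Burgstahler and Dichev (1997), which the paper's bootstrap procedure refines; the simulated earnings distribution, bin width and variance approximation are assumptions made for the example, not the paper's data or test.

```python
import numpy as np

def standardised_difference(data, edges, bin_index):
    """Actual count in a bin minus the mean of its neighbours, standardised approximately."""
    counts, _ = np.histogram(data, bins=edges)
    n, p, i = data.size, None, bin_index
    p = counts / n
    expected = (counts[i - 1] + counts[i + 1]) / 2
    var = n * p[i] * (1 - p[i]) + 0.25 * n * (p[i - 1] + p[i + 1]) * (1 - p[i - 1] - p[i + 1])
    return (counts[i] - expected) / np.sqrt(var)

rng = np.random.default_rng(10)
earnings = rng.normal(0.02, 0.05, 20_000)
# Mimic loss avoidance: push about half of the small losses just above zero.
small_loss = (earnings > -0.005) & (earnings < 0)
shift = small_loss & (rng.random(earnings.size) < 0.5)
earnings[shift] = -earnings[shift]

edges = np.arange(-0.1, 0.1001, 0.005)                 # fixed bandwidth of 0.005
just_below = np.searchsorted(edges, -0.0025) - 1       # bin immediately left of zero
print("standardised difference just below zero:",
      round(standardised_difference(earnings, edges, just_below), 2))
```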

18.
Statistical offices are responsible for publishing accurate statistical information about many different aspects of society. This task is complicated considerably by the fact that data collected by statistical offices generally contain errors. These errors have to be corrected before reliable statistical information can be published. This correction process is referred to as statistical data editing. Traditionally, data editing was mainly an interactive activity with the aim to correct all data in every detail. For that reason the data editing process was both expensive and time-consuming. To improve the efficiency of the editing process it can be partly automated. One often divides the statistical data editing process into the error localisation step and the imputation step. In this article we restrict ourselves to discussing the former step, and provide an assessment, based on personal experience, of several selected algorithms for automatically solving the error localisation problem for numerical (continuous) data. Our article can be seen as an extension of the overview article by Liepins, Garfinkel & Kunnathur (1982). All algorithms we discuss are based on the (generalised) Fellegi–Holt paradigm that says that the data of a record should be made to satisfy all edits by changing the fewest possible (weighted) number of fields. The error localisation problem may have several optimal solutions for a record. In contrast to what is common in the literature, most of the algorithms we describe aim to find all optimal solutions rather than just one. As numerical data mostly occur in business surveys, the described algorithms are mainly suitable for business surveys and less so for social surveys. For four algorithms we compare the computing times on six realistic data sets as well as their complexity.
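A toy sketch of error localisation under the Fellegi-Holt paradigm, solved by brute force with an LP feasibility check over subsets of fields; the fields, edits and reliability weights are invented, and the algorithms assessed in the article are far more sophisticated than this enumeration.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

FIELDS = ["turnover", "costs", "profit"]
WEIGHTS = {"turnover": 2.0, "costs": 1.0, "profit": 1.0}   # reliability weights (assumed)

# Edits: turnover - costs - profit = 0 ;  costs >= 0 ;  turnover >= 0
A_EQ, B_EQ = np.array([[1.0, -1.0, -1.0]]), np.array([0.0])
A_UB, B_UB = np.array([[0.0, -1.0, 0.0], [-1.0, 0.0, 0.0]]), np.array([0.0, 0.0])

def feasible(record, free):
    """Can all edits be satisfied when only the fields in `free` may change?"""
    bounds = [(None, None) if f in free else (record[f], record[f]) for f in FIELDS]
    res = linprog(np.zeros(len(FIELDS)), A_ub=A_UB, b_ub=B_UB,
                  A_eq=A_EQ, b_eq=B_EQ, bounds=bounds, method="highs")
    return res.success

def localise(record):
    best_weight, best_sets = np.inf, []
    for r in range(len(FIELDS) + 1):
        for free in combinations(FIELDS, r):
            w = sum(WEIGHTS[f] for f in free)
            if w <= best_weight and feasible(record, free):
                if w < best_weight:
                    best_weight, best_sets = w, []
                best_sets.append(set(free))
    return best_weight, best_sets

record = {"turnover": 100.0, "costs": 60.0, "profit": 70.0}   # violates the balance edit
print(localise(record))   # minimal weight 1.0: change costs, or change profit
```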

19.
Probabilistic record linkage is the act of bringing together records that are believed to belong to the same unit (e.g., person or business) from two or more files. It is a common way to enhance dimensions such as time and breadth or depth of detail. Probabilistic record linkage is not an error-free process and may link records that do not belong to the same unit. Naively treating such a linked file as if it were linked without errors can lead to biased inferences. This paper develops a method of making inference with estimating equations when records are linked using algorithms that are widely used in practice. Previous methods for dealing with this problem cannot accommodate such linking algorithms. This paper develops a parametric bootstrap approach to inference in which each bootstrap replicate involves applying the said linking algorithm. This paper demonstrates the effectiveness of the method in simulations and in real applications.
