Similar Documents
20 similar documents found (search time: 437 ms)
1.
This note is devoted to an analysis of the so-called peeling algorithm in wavelet denoising. Assuming that the wavelet coefficients of the useful signal are modeled by generalized Gaussian random variables and its noisy part by independent Gaussian variables, we compute a critical thresholding constant for the algorithm, which depends on the shape parameter of the generalized Gaussian distribution. We also quantify the optimal number of steps which have to be performed, and analyze the convergence of the algorithm. Several implementations are tested against classical wavelet denoising procedures on benchmark and simulated biological signals.
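As a generic illustration of the wavelet denoising setting described above (a minimal sketch, not the peeling algorithm itself and not the authors' implementation), one level of a Haar decomposition with soft thresholding of the detail coefficients can be written as:

```python
import math

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by the threshold t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """Threshold only the detail coefficients, then reconstruct."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

With t = 0 the reconstruction is exact; the critical thresholding constant studied in the paper would replace the fixed t here, and the peeling algorithm would iterate this step rather than apply it once.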

2.
Semi-parametric estimation methods of the long-memory exponent of a time series have been studied in several papers, some applied, others theoretical, some using Fourier methods, others using a wavelet-based technique. In this paper, we compare the Fourier and wavelet approaches to the local regression method and to the local Whittle method. We provide an overview of these methods, describe what has been done and indicate the available results and the conditions under which they hold. We discuss their relative strengths and weaknesses both from a practical and a theoretical perspective. We also include a simulation-based comparison. The software written to support this work is available on demand and we illustrate its use at the end of the paper.

3.
Wavelet shrinkage and thresholding methods constitute a powerful way to carry out signal denoising, especially when the underlying signal has a sparse wavelet representation. They are computationally fast, and automatically adapt to the smoothness of the signal to be estimated. Nearly minimax properties for simple threshold estimators over a large class of function spaces and for a wide range of loss functions were established in a series of papers by Donoho and Johnstone. The notion behind these wavelet methods is that the unknown function is well approximated by a function with a relatively small proportion of nonzero wavelet coefficients. In this paper, we propose a framework in which this notion of sparseness can be naturally expressed by a Bayesian model for the wavelet coefficients of the underlying signal. Our Bayesian formulation is grounded in the empirical observation that the wavelet coefficients can be summarized adequately by exponential power prior distributions, and allows us to establish close connections between wavelet thresholding techniques and Maximum A Posteriori estimation for two classes of noise distributions, including heavy-tailed noises. We prove that a great variety of thresholding rules are derived from these MAP criteria. Simulation examples are presented to substantiate the proposed approach.

4.
Xi Guoxing, Value Engineering (价值工程), 2012, 31(19): 15-17
Taking the sequence stratigraphic division of the Gao-3 unit in the Xingshugang area of the Daqing Placanticline as an example, this paper analyses the application of the wavelet transform to high-resolution sequence division. A Morlet wavelet transform is applied to the natural gamma-ray curve, converting the well-log signal into a relationship between depth and scale and yielding wavelet-coefficient curves at different scales. The periodic oscillations of the coefficient curve at the optimal scale factor are then matched to the sequence boundaries of each order, and that curve is used to identify and divide the sequences. Validation against seismic data and reservoir inversion results shows that the wavelet-coefficient curve at the optimal scale factor accurately identifies sequence boundaries of all orders; wavelet-based sequence division therefore offers a new approach to high-resolution sequence stratigraphy.
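A minimal sketch of the core operation described above, a continuous wavelet transform of a sampled log curve with a real-valued Morlet wavelet, is given below. It is illustrative only (plain Python, direct O(N^2) correlation, simplified Morlet without the admissibility correction term), not the workflow used in the study:

```python
import math

def morlet(t, w0=5.0):
    """Real part of a (simplified) Morlet mother wavelet."""
    return math.exp(-t * t / 2.0) * math.cos(w0 * t)

def cwt_coeffs(signal, scale):
    """Wavelet coefficients of `signal` at one scale, by direct correlation.
    Larger scales respond to longer-wavelength (lower-order) cyclicity."""
    n = len(signal)
    out = []
    for b in range(n):                       # position along the depth axis
        acc = sum(signal[k] * morlet((k - b) / scale) for k in range(n))
        out.append(acc / math.sqrt(scale))   # 1/sqrt(scale) normalisation
    return out
```

Sweeping `scale` over a range and picking the scale whose coefficient curve oscillates in step with known stratigraphic boundaries corresponds to the "optimal scale factor" selection described in the abstract.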

5.
Optimal exact designs are notoriously hard to study and only a few of them are known for polynomial models. Using recently obtained optimal exact designs (Imhof, 1997), we show that the efficiency of the frequently used rounded optimal approximate designs can be sensitive if the sample size is small. For some criteria, the efficiency of the rounded optimal approximate design can vary by as much as 25% when the sample size is changed by one unit. The paper also discusses lower efficiency bounds and shows that they are sometimes the best possible bounds for the rounded optimal approximate designs.

6.
Given the non-stationary, non-linear character of financial time series, this paper combines wavelet analysis with artificial neural networks to analyse and forecast the closing prices of the Shanghai-Shenzhen A300 index. The results show that the wavelet neural network has strong predictive power and achieves the expected performance. To verify this, the series is further split into multiple segments for multi-step forecasting, and the method is compared with a wavelet-ARIMA model and a BP neural network, demonstrating the forecasting advantage of the wavelet neural network.

7.
Although widely used in many areas of applied sciences, wavelet analysis has not fully entered the economic discipline yet. In this article we apply wavelet analysis to one of the most investigated relationships in empirical macroeconomics: the relationship between wage inflation and unemployment. Using US postwar data we find a frequency‐dependent relationship of a sort that is consistent with Phillips' original insights. It also turns out that this relationship is remarkably stable over the 1948-93 period, but not in the aftermath, as a consequence of a process of adaptation of the wage formation process to a low inflation environment.

8.
Using methods based on wavelets and aggregate series, long memory in the absolute daily returns, squared daily returns, and log squared daily returns of the S&P 500 Index are investigated. First, we estimate the long memory parameter in each series using a method based on the discrete wavelet transform. For each series, the variance method and the absolute value method based on aggregate series are then employed to investigate long memory. Our findings suggest that these methods provide evidence of long memory in the volatility of the S&P 500 Index. Our esteemed colleague, Robert DiSario, passed away on December 31, 2005.
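One wavelet-based estimator of the kind alluded to above regresses the log of the detail-coefficient variance on the decomposition level. The sketch below (Haar filters, plain Python, illustrative assumptions only; the paper's own estimator may differ in filter choice and bias corrections) shows the idea for fractional Gaussian noise, where var(d_j) grows like 2^(j(2H-1)):

```python
import math
import random
import statistics

def haar_details(x):
    """Detail coefficients of a multi-level Haar transform, one list per level."""
    levels, approx = [], list(x)
    while len(approx) >= 4:
        levels.append([(approx[i] - approx[i + 1]) / math.sqrt(2)
                       for i in range(0, len(approx) - 1, 2)])
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2)
                  for i in range(0, len(approx) - 1, 2)]
    return levels

def hurst_wavelet(x, max_level=6):
    """Estimate the Hurst exponent H from the slope of
    log2(detail variance) against level j (var ~ 2^(j*(2H-1)))."""
    pts = [(j, math.log2(statistics.pvariance(d)))
           for j, d in enumerate(haar_details(x)[:max_level], start=1)]
    jbar = sum(j for j, _ in pts) / len(pts)
    vbar = sum(v for _, v in pts) / len(pts)
    slope = (sum((j - jbar) * (v - vbar) for j, v in pts)
             / sum((j - jbar) ** 2 for j, _ in pts))
    return (slope + 1.0) / 2.0
```

For white noise (no long memory) the detail variance is flat across levels, so the estimate comes out near H = 0.5; persistent volatility series push it above 0.5.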

9.
Yang Guoliang, Value Engineering (价值工程), 2012, 31(31): 215-216
The wavelet transform is widely used in image denoising, but it has shortcomings and hard-to-control factors that substantially limit its denoising power and range of application. The quadtree complex wavelet transform remedies this: it offers good directional selectivity and shift invariance, and perfect reconstruction is easy to achieve. This paper reviews common image-denoising methods and their principles, then explains the advantages and development trends of adaptive image denoising based on the quadtree complex wavelet, giving the reader a clear picture of modern complex-wavelet adaptive image denoising.

10.
Research on household-appliance manufacturers' decisions among recycling and treatment modes for end-of-life appliances (cited 2 times: 0 self-citations, 2 by others)
Building on an analysis of recycling and treatment modes for end-of-life household appliances, and considering cascade utilisation of the recovered appliances, cost-benefit models for the different recycling modes are built under the assumption of a non-linear demand function for appliance products; the manufacturer's optimal profit is computed and the recycling-mode decision is made accordingly.

11.
Qi Yongzhi, Value Engineering (价值工程), 2011, 30(23): 150
This paper studies detection and preprocessing of time-domain reflectometry (TDR) signals on in-service cable networks. Because detecting and preprocessing the singular points of cable-network TDR signals with a plain wavelet transform incurs large errors, a wavelet-based reflection-wave denoising technique is adopted, and an adaptive algorithm for determining the optimal number of wavelet decomposition levels is proposed.

12.
Complex systems that are required to perform very reliably are often designed to be "fault-tolerant," so that they can function even though some component parts have failed. Often fault-tolerance is achieved through redundancy, involving the use of extra components. One prevalent redundant component configuration is the m-out-of-n system, where at least m of n identical and independent components must function for the system to function adequately.

Often machines containing m-out-of-n systems are scheduled for periodic overhauls, during which all failed components are replaced, in order to renew the machine's reliability. Periodic overhauls are appropriate when repair of component failures as they occur is impossible or very costly. This will often be the case for machines which are sent on "missions" during which they are unavailable for repair. Examples of such machines include computerized control systems on space vehicles, military and commercial aircraft, and submarines.

An interesting inventory problem arises when periodic overhauls are scheduled. How many spare parts should be stocked at the maintenance center in order to meet demands? Complex electronic equipment is rarely scrapped when it fails. Instead, it is sent to a repair shop, from which it eventually returns to the maintenance center to be used as a spare. A Markov model of spares availability at such a maintenance center is developed in this article. Steady-state probabilities are used to determine the initial spares inventory that minimizes total shortage cost and inventory holding cost. The optimal initial spares inventory will depend upon many factors, including the values of m and n, component failure rate, repair rate, time between overhauls, and the shortage and holding costs.

In a recent paper, Lawrence and Schaefer [4] determined the optimal maintenance center inventories for fault-tolerant repairable systems. They found optimal maintenance center inventories for machines containing several sets of redundant systems under a budget constraint on total inventory investment. This article extends that work in several important ways. First, we relax the assumption that the parts have constant failure rates. In this model, component failure rates increase as the parts age. Second, we determine the optimal preventive maintenance policy, calculating the optimal age at which a part should be replaced even if it has not failed because the probability of subsequent failure has become unacceptably high. Third, we relax the earlier assumption that component repair times are independent, identically distributed random variables. In this article we allow congestion to develop at the repair shop, making repair times longer when there are many items requiring repair. Fourth, we introduce a more efficient solution method, marginal analysis, as an alternative to dynamic programming, which was used in the earlier paper. Fifth, we modify the model in order to deal with an alternative objective of maximizing the job-completion rate.

In this article, the notation and assumptions of the earlier model are reviewed. The requisite changes in the model development and solution in order to extend the model are described. Several illustrative examples are included.
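The redundancy structure at the heart of the model above is easy to state in code. The sketch below computes only the m-out-of-n reliability building block (independent, identical components), not the full Markov spares model or the budget-constrained optimisation:

```python
from math import comb

def m_out_of_n_reliability(m, n, p):
    """Probability that at least m of n independent components,
    each functioning with probability p, are working."""
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(m, n + 1))
```

For a 2-out-of-3 system with component reliability 0.9 this gives 0.972, illustrating why redundancy (n greater than m) raises system reliability above that of a single component.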

13.
This study forecasts intraday portfolio VaR and CVaR using high-frequency data on three pairs of stock price indices taken from three different markets. For each pair we specify both the marginal models for the individual return series and a joint model for the dependence between the paired series. We use a CGARCH-EVT-Copula model and compare its forecasting performance with three other competing models. Backtesting evidence shows that the CGARCH-EVT-Copula type model performs relatively better than the other models. Once the best-performing model is identified for each pair, we develop an optimal portfolio selection model separately for each market.

14.
Zhang Shengliang, Wang Feng, Value Engineering (价值工程), 2009, 28(12): 6-8
A customer's perception of a service organisation's quality, and the resulting satisfaction, is shaped not only by individual service encounter points but also by the layout of, and hand-offs between, encounter levels. To raise overall perceived service quality and customer satisfaction, a service organisation must therefore optimise its service encounter chain: adding telephone and internet contact channels; avoiding over-fragmented division of service labour and adding integrated service points; shrinking the spatial footprint of encounter points; streamlining service processes and routes; providing clear and complete service signage; proactively guiding and escorting customers; and balancing capacity and sharing information across encounter points.

15.
Zheng Junyan, Value Engineering (价值工程), 2012, 31(5): 140-141
This paper combines wavelet analysis with support vector regression (SVR) to forecast international crude oil prices. Wavelet multiscale analysis decomposes the oil-price series into a long-run trend and a stochastic disturbance, and SVR is then applied to forecast the decomposed long-run trend. The trend forecast is multi-factor, taking into account market supply-demand fundamentals, inventories, the economy, and speculation, in a multi-input single-output SVR model. The empirical study shows that the SVR model has high predictive performance: for the long-run trend of crude oil prices, it is more accurate than regression methods.

16.
Practical estimation of multivariate densities using wavelet methods (cited 2 times: 0 self-citations, 2 by others)
This paper describes a practical method for estimating multivariate densities using wavelets. As in kernel methods, wavelet methods depend on two types of parameters. On the one hand we have a functional parameter: the wavelet φ (comparable to the kernel K); on the other hand we have a smoothing parameter: the resolution index (comparable to the bandwidth h). Classically, we determine the resolution index with a cross-validation method. The advantage of wavelet methods compared to kernel methods is that we have a technique for choosing the wavelet φ among a fixed family. Moreover, the wavelet method simplifies significantly both the theoretical and the practical computations.
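At the coarsest level, a scaling-function (father wavelet) density estimate with the Haar basis reduces to a histogram with dyadic bins, which makes the role of the resolution index j concrete. The following is a hypothetical minimal sketch for univariate data on [0, 1), not the multivariate method of the paper:

```python
def haar_density(data, j):
    """Haar scaling-function density estimate at resolution j for data in [0, 1).
    Equivalent to a histogram with bin width 2**-j."""
    n = len(data)
    width = 2.0 ** (-j)
    counts = {}
    for x in data:
        k = int(x / width)               # index of the dyadic bin containing x
        counts[k] = counts.get(k, 0) + 1
    # density on bin k: (count / n) / width = count * 2**j / n
    return {k: c * 2.0 ** j / n for k, c in counts.items()}
```

Raising j (finer resolution, analogous to a smaller bandwidth h) trades bias for variance; cross-validating over j plays exactly the role the abstract describes for the resolution index.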

17.
The well-developed ETS (ExponenTial Smoothing, or Error, Trend, Seasonality) method incorporates a family of exponential smoothing models in state space representation and is widely used for automatic forecasting. The existing ETS method uses information criteria for model selection by choosing an optimal model with the smallest information criterion among all models fitted to a given time series. The ETS method under such a model selection scheme suffers from computational complexity when applied to large-scale time series data. To tackle this issue, we propose an efficient approach to ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series. We provide a simulation study to show the model selection ability of the proposed approach on simulated data. We evaluate our approach on the widely used M4 forecasting competition dataset in terms of both point forecasts and prediction intervals. To demonstrate the practical value of our method, we showcase the performance improvements from our approach on a monthly hospital dataset.

18.
This paper is a survey of recent empirical work on financial constraints faced by firms. It is organized as a series of stylized results which mirror what is generally understood about the severity of financial constraints and the effects that they have upon firms. The review of the literature shows that (a) financial constraints are a widespread key concern for firms, hindering their ability to carry out their optimal investment and growth trajectories and (b) the severity of such constraints depends on institutional and firm specific characteristics, as well as on the nature of investment projects.

19.
Journal of Econometrics, 2005, 128(2): 195-213
Tests of stationarity are routinely applied to highly autocorrelated time series. Following Kwiatkowski et al. (J. Econom. 54 (1992) 159), standard stationarity tests employ a rescaling by an estimator of the long-run variance of the (potentially) stationary series. This paper analytically investigates the size and power properties of such tests when the series are strongly autocorrelated in a local-to-unity asymptotic framework. It is shown that the behavior of the tests strongly depends on the long-run variance estimator employed, but is in general highly undesirable. Either the tests fail to control size even for strongly mean reverting series, or they are inconsistent against an integrated process and discriminate only poorly between stationary and integrated processes compared to optimal statistics.
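For concreteness, the KPSS-type statistic the paper analyses has a short closed form: squared partial sums of demeaned residuals, rescaled by a Bartlett-window long-run variance estimator. A plain-Python sketch (level-stationarity case only; the paper's local-to-unity analysis is not reproduced here):

```python
def kpss_stat(y, lags):
    """KPSS level-stationarity statistic (Kwiatkowski et al., 1992) with a
    Bartlett-kernel long-run variance estimator using `lags` autocovariances."""
    n = len(y)
    mean = sum(y) / n
    e = [v - mean for v in y]            # residuals from the sample mean
    s, partial_sq = 0.0, 0.0
    for v in e:                          # accumulate squared partial sums
        s += v
        partial_sq += s * s
    lrv = sum(v * v for v in e) / n      # lag-0 autocovariance
    for k in range(1, lags + 1):         # Bartlett-weighted autocovariances
        gamma = sum(e[i] * e[i - k] for i in range(k, n)) / n
        lrv += 2.0 * (1.0 - k / (lags + 1.0)) * gamma
    return partial_sq / (n ** 2 * lrv)
```

Small values are consistent with stationarity, a deterministic trend blows the statistic up, and the choice of `lags` is precisely the long-run variance estimator choice whose impact on size and power the paper studies.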

20.
We consider univariate low‐frequency filters applicable in real‐time as a macroeconomic forecasting method. This amounts to targeting only low frequency fluctuations of the time series of interest. We show through simulations that such approach is warranted and, using US data, we confirm empirically that consistent gains in forecast accuracy can be obtained in comparison with a variety of other methods. There is an inherent arbitrariness in the choice of the cut‐off defining low and high frequencies, which calls for a careful characterization of the implied optimal (for forecasting) degree of smoothing of the key macroeconomic indicators we analyse. We document interesting patterns that emerge: for most variables the optimal choice amounts to disregarding fluctuations well below the standard business cycle cut‐off of 32 quarters while generally increasing with the forecast horizon; for inflation and variables related to housing this cut‐off lies around 32 quarters for all horizons, which is below the optimal level for federal government spending.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号