Similar documents
 A total of 20 similar documents were retrieved (search time: 31 ms).
1.
We present some general results on the Fisher information (FI) contained in upper (or lower) record values and associated record times generated from a sequence of i.i.d. continuous variables. For the record data obtained from a random sample of fixed size, we establish an interesting relationship between its FI content and the FI in the data consisting of sequential maxima. We also consider the record data from an inverse sampling plan (Samaniego and Whitaker, 1986). We apply the general results to evaluate the FI in upper as well as lower record data from the exponential distribution for both sampling plans. Further, we discuss the implications of our results for statistical inference from these record data. Received: December 2001. Acknowledgements. This research was supported by Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT) grants 7990089 and 1010222 of Chile. We would like to thank the Department of Statistics at the University of Concepción for its hospitality during the stay of H. N. Nagaraja in Chile in March of 2000, when the initial work was done. We are grateful to the two referees for various comments that led to improvements in the paper.

2.
In this article we present results on the Shannon information (SI) contained in upper (lower) record values and associated record times in a sequence of i.i.d. continuous variables. We then establish an interesting relationship between the SI content of a random sample of fixed size and the SI in the data consisting of sequential maxima. We also consider the information contained in the record data from an inverse sampling plan (ISP).

3.
The Shannon entropy of a random variable has become a very useful tool in probability theory. In this paper we extend the concept of cumulative residual entropy introduced by Rao et al. (IEEE Trans Inf Theory 50:1220–1228, 2004). The new concept, called generalized cumulative residual entropy (GCRE), is related to the record values of a sequence of i.i.d. random variables and to the relevation transform. We also consider a dynamic GCRE obtained using the residual lifetime. For these concepts we obtain characterization results, stochastic ordering and aging class properties, and some relationships with other entropy concepts.
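For orientation, and in the notation of Rao et al., the cumulative residual entropy of a nonnegative variable X with survival function F̄ and its generalized version are commonly written as follows (this is the standard form found in the literature, not a quotation from the paper):

\[
\mathcal{E}(X) = -\int_0^{\infty} \bar F(x)\,\log \bar F(x)\,dx ,
\qquad
\mathcal{E}_n(X) = \frac{1}{n!}\int_0^{\infty} \bar F(x)\,\bigl[-\log \bar F(x)\bigr]^{n}\,dx ,
\]

with \(\mathcal{E}_1(X) = \mathcal{E}(X)\) recovering the original cumulative residual entropy.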

4.
Recently there has been a growing trend to offer guaranteed products where the investor is allowed to shift her account/investment value between multiple funds. The switching right is granted a finite number of times per year, i.e. it is American style with multiple exercise possibilities. In consequence, the pricing and the risk management are based on the switching strategy which maximizes the value of the guarantee put option. We analyze the optimal stopping problem in the case of one switching right within different model classes and compare the exact price with the lower price bound implied by the optimal deterministic switching time. We show that, within the class of log-price processes with independent increments, the stopping problem is solved by a deterministic stopping time if (and only if) the price process is in addition continuous. Thus, in a sense, the Black–Scholes model is the only (meaningful) pricing model where the lower price bound gives the exact price. It turns out that even moderate deviations from the Black–Scholes model assumptions give a lower price bound which lies noticeably below the exact price. This is illustrated by means of a stylized stochastic volatility model setup.

5.
Estimation in the interval censoring model is considered. A class of smooth functionals is introduced, of which the mean is an example. The asymptotic information lower bound for such functionals can be represented as an inner product of two functions. In case 1, i.e. one observation time per unobservable event time, both functions can be given explicitly. We mainly consider case 2, with two observation times for each unobservable event time, in the situation where the observation times cannot become arbitrarily close to each other. For case 2, one of the functions in the inner product can only be given implicitly, as the solution to a Fredholm integral equation. We study properties of this solution and, in a sequel to this paper, prove that the nonparametric maximum likelihood estimator of the functional asymptotically attains the information lower bound.

6.
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record from being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be uniquely distinguished from at least k−1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge-cutting algorithm is designed to divide the complete graph into multiple trees/components. The large components of size greater than 2k−1 are subsequently split to guarantee that each resulting component has a vertex count between k and 2k−1. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). The empirical experiments show that our approach yields substantial improvements over the baseline heuristic algorithms, as well as over the bottom-up approach with the same approximation bound O(k). Compared with the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy makes our approach achieve similar performance in terms of information loss while spending much less computing time. This suggests that our approach is a good choice for the k-anonymity problem when both data utility and runtime matter, especially when k is set to a value smaller than 50 and the record set is large enough for runtime to be a concern.
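To make the grouping-and-generalisation idea above concrete, here is a minimal, purely illustrative Python sketch. It substitutes a simple greedy nearest-neighbour grouping for the paper's graph edge-cutting step, and all function names and data are hypothetical rather than taken from the article:

# Hypothetical sketch of grouping records into equivalence classes of size
# between k and 2k-1 and then generalising their quasi-identifiers.
# The greedy grouping below stands in for the paper's edge-cutting algorithm.

def group_records(records, k):
    """Greedily group records (lists of numeric quasi-identifiers) into
    classes of size between k and 2k-1, pairing each seed with its nearest neighbours."""
    remaining = list(range(len(records)))
    groups = []
    while remaining:
        seed = remaining.pop(0)
        # order the remaining records by squared distance to the seed
        remaining.sort(key=lambda j: sum((a - b) ** 2 for a, b in zip(records[seed], records[j])))
        members = [seed] + remaining[:k - 1]
        remaining = remaining[k - 1:]
        if len(remaining) < k:          # avoid leaving an undersized leftover group
            members += remaining
            remaining = []
        groups.append(members)
    return groups

def generalise(records, group):
    """Replace each quasi-identifier by the min-max range observed in the group,
    so every record in the group becomes indistinguishable (k-anonymous)."""
    cols = list(zip(*[records[i] for i in group]))
    return [(min(c), max(c)) for c in cols]

if __name__ == "__main__":
    data = [[25, 47000], [27, 51000], [52, 83000], [49, 79000], [31, 56000], [55, 90000]]
    k = 2
    for g in group_records(data, k):
        print(sorted(g), "->", generalise(data, g))

Each printed group is an equivalence class of at least k records whose quasi-identifiers have been coarsened to identical min–max ranges, which is the property the k-anonymity model requires.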

7.
Coordination of capital taxation among asymmetric countries
This paper studies international fiscal coordination in a world of integrated markets and sovereign national governments. Mobile capital and immobile labor are taxed in order to finance a fixed budget, which generates productive inefficiency. Two fiscal reforms are considered: a minimum capital tax level, and a tax range, i.e., a minimum plus a maximum capital tax level. It is shown that the introduction of a lower bound on the capital tax level is never unanimously preferred to fiscal competition, while there always exists a combination of a lower and an upper bound (i.e., a tax range) which is unanimously accepted.

8.
George P. Yanev, Metrika (2012) 75(6):743–760
We characterize the exponential distribution as the only one which satisfies a regression condition. The condition involves the regression function of a fixed record value given two other record values, one preceding and one following the fixed record value, neither of them adjacent to it. In particular, it turns out that the underlying distribution is exponential if and only if, given the first and last record values, the expected value of the median in a sample of record values equals the sample midrange.

9.
Precedence-type tests based on order statistics are simple and efficient nonparametric tests that are very useful in the context of life-testing, and they have been studied quite extensively in the literature; see Balakrishnan and Ng (Precedence-Type Tests and Applications. Wiley, Hoboken, 2006). In this paper, we consider precedence-type tests based on record values and develop, specifically, the record precedence test, the record maximal precedence test and the record-rank-sum test. We derive their exact null distributions and tabulate some critical values. Then, under the general Lehmann alternative, we derive the exact power functions of these tests and discuss their power under the location-shift alternative. We also establish that the record precedence test is the uniformly most powerful test against the one-parameter family of Lehmann alternatives. Finally, we discuss the situation when we have an insufficient number of records to apply the record precedence test, and then make some concluding remarks.

10.
The Stringer bound is a widely used nonparametric 100(1−α)% upper confidence bound for the fraction of errors in an accounting population. This bound has been found in practice to be rather conservative, but no rigorous mathematical proof of the correctness of the Stringer bound as an upper confidence bound is known, and no counterexamples are available either. In a pioneering paper, Bickel (1992) gave some fixed-sample support for the bound's conservatism, together with an asymptotic expansion in probability of the Stringer bound, which led to his claim of the asymptotic conservatism of the Stringer bound. In the present paper we obtain expansions of arbitrary order of the coefficients in the Stringer bound. As a consequence we obtain Bickel's asymptotic expansion with probability 1, and we show that the asymptotic conservatism holds for confidence levels 1−α with α ∈ (0, 1/2]. This means that, in general, the Stringer bound need not have exactly the nominal confidence level even in a finite-sample situation. Based on our expansions we propose a modified Stringer bound which asymptotically has precisely the right nominal confidence level. Finally, we discuss other consequences of the expansions of the Stringer bound, such as a central limit theorem, a law of the iterated logarithm and their functional versions.
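As background, the Stringer bound in its usual textbook form combines exact binomial (Clopper–Pearson) upper confidence limits with the observed taintings sorted in decreasing order. The Python sketch below is illustrative only, assumes that standard form, and does not reproduce the authors' code; the function names and example numbers are hypothetical:

# Illustrative sketch of the classical Stringer bound: p_U(k; n) is the exact
# binomial (Clopper-Pearson) 1-alpha upper confidence limit for k errors in a
# sample of n items, and taintings are processed in decreasing order.
from scipy.stats import beta

def binomial_upper_limit(k, n, alpha):
    """1 - alpha upper confidence limit for a binomial proportion with k errors in n items."""
    if k >= n:
        return 1.0
    return beta.ppf(1.0 - alpha, k + 1, n - k)

def stringer_bound(taintings, n, alpha=0.05):
    """Stringer upper bound for the population error fraction.

    taintings : per-error fractions in [0, 1] observed in a sample of n items.
    """
    t = sorted(taintings, reverse=True)
    bound = binomial_upper_limit(0, n, alpha)            # allowance for zero errors
    for j, tj in enumerate(t, start=1):                   # add marginal allowances
        bound += (binomial_upper_limit(j, n, alpha) - binomial_upper_limit(j - 1, n, alpha)) * tj
    return bound

if __name__ == "__main__":
    print(round(stringer_bound([0.40, 0.15, 0.05], n=100, alpha=0.05), 4))

The modification proposed in the paper adjusts this construction so that the asymptotic level is exactly the nominal 1−α; that adjustment is not reproduced here.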

11.
E. Jolivet, Metrika (1984) 31(1):349–360
Summary: The speed of convergence of moment density estimators for stationary point processes is studied. Under relevant assumptions, the order of magnitude of its upper bound is the same as in the i.i.d. case when the process is Brillinger-mixing. The case of covariance density estimators is also considered.

12.
Is there a long run demand for currency in China?
The record of the Chinese monetary authorities at targeting M0 in the late eighties and early nineties is rather poor. This paper thus first aims at determining whether the instability of currency demand is responsible for this. In doing so we show, using adequate econometric techniques and quarterly data, that a long-run demand for currency did exist over the 1988–1993 period. Most previous studies concluded that the income elasticity of currency demand in China is very high. The second objective of the paper is to test the robustness of this result. We show that this income elasticity is unity when proper account is taken of institutional variables representative of the transition process. Abbreviations: ADF, Augmented Dickey–Fuller; ARCH, autoregressive conditional heteroskedasticity; IMF, International Monetary Fund; LDCs, less developed countries; M0, currency; M1, narrow money; M2, broad money; PBC, People's Bank of China; VAR, vector autoregressive model. Comments on an earlier version by my colleagues Christian Bordes and Dominique Lacoue-Labarthe and by an anonymous referee were very useful in improving the present paper. It also benefited from comments by participants at the annual conference of the (UK) Chinese Economic Society in December 1995. However, I remain solely responsible for all remaining errors.
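Purely as an illustration of the kind of unit-root testing abbreviated as ADF above, and not of the paper's data or results, a minimal sketch using the statsmodels adfuller routine on a simulated random-walk series might look like:

# Illustrative only: an Augmented Dickey-Fuller (ADF) unit-root test on a
# synthetic random-walk series; nothing here reproduces the paper's estimates.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))      # random walk: should fail to reject a unit root

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series, regression="c", autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")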

13.
In this paper, we consider a family of bivariate distributions which is a generalization of the Morgenstern family of bivariate distributions. We derive some properties of concomitants of record values which characterize this generalized class of distributions. The role of concomitants of record values in the unique determination of the parent bivariate distribution is established. We also derive properties of concomitants of record values which characterize each of the following families: the Morgenstern family, the bivariate Pareto family and a generalized Gumbel family of bivariate distributions. Some applications of the characterization results are discussed and important conclusions based on the characterization results are drawn.
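For reference, the Morgenstern (Farlie–Gumbel–Morgenstern) family that this class generalizes is usually written, for marginals F_X and F_Y, as

\[
F_{X,Y}(x,y) = F_X(x)\,F_Y(y)\bigl[1 + \alpha\{1 - F_X(x)\}\{1 - F_Y(y)\}\bigr], \qquad -1 \le \alpha \le 1 ;
\]

the generalized class studied in the paper extends this form, but its exact parametrization is not reproduced here.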

14.
15.
In recent speeches Treasury Ministers have coined a new slogan. They argue that inflation is not an alternative to high unemployment but a fundamental cause of it. They use this slogan to attack those who suggest that the fight against inflation should be slackened, at least briefly, in order to reduce unemployment. In this Economic Viewpoint we examine the arguments about the relation between inflation and unemployment. We suggest that although inflation may be a cause of unemployment in the long term, there is an inescapable short-term choice to be made between reducing unemployment and reducing inflation. We explain why this choice arises and also discuss the longer-term effects of counter-inflationary policies. Finally we examine the record of this Government's policies so far.

16.
Our objective is to find a simple, robust, reasonably powerful test for a shift in one or more of the slopes in a linear time series model at some unknown point of time. Two such tests are ‘Chow's test’ (1960) for a shift at the midpoint of the record and the ‘Farley-Hinich test’ (1970b); both can be performed easily with standard regression programs. In section 2, we compare the asymptotic properties of these tests when the disturbance variance is known. As expected, Chow's test is superior when the true shift is near the middle of the record; with a single, uniformly distributed explanatory variable, the Farley-Hinich test dominates over the remaining eighty-four percent of the record. In section 3, we describe the results of some Monte Carlo experiments with a finite sample, which can be summarized as follows. (i) The asymptotic results of section 2 were appropriate for finite-sample power comparisons. (ii) The relative performance of the two tests does not depend appreciably on whether the variance is known. (iii) The likelihood ratio test, which is far more costly to perform than the other two tests, does not dominate either Chow's test or the Farley-Hinich test; it has moderately more power at the ends of the record, moderately less in the middle. The conclusion is clear: at low cost (in terms of computer cost and lost power), one can reduce the probability of overlooking a structural shift by routinely performing Chow's test or the Farley-Hinich test.
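A minimal illustration of the midpoint version of Chow's test mentioned above, written in Python on simulated data; this is a generic textbook implementation, not the authors' code, and all names and values are hypothetical:

# Midpoint Chow test for a shift in the regression coefficients of a linear
# time-series model, assuming a known break at the middle of the record.
import numpy as np
from scipy.stats import f as f_dist

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

def chow_test(X, y):
    """F statistic and p-value of the Chow test at the sample midpoint."""
    n, p = X.shape
    m = n // 2
    rss_pooled = rss(X, y)
    rss_split = rss(X[:m], y[:m]) + rss(X[m:], y[m:])
    stat = ((rss_pooled - rss_split) / p) / (rss_split / (n - 2 * p))
    pvalue = f_dist.sf(stat, p, n - 2 * p)
    return stat, pvalue

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(100.0)
    X = np.column_stack([np.ones_like(t), t])
    slope = np.where(t < 50, 0.5, 1.0)              # slope shifts at the midpoint
    y = 2.0 + slope * t + rng.normal(scale=2.0, size=t.size)
    print(chow_test(X, y))

The F statistic compares the pooled residual sum of squares with the sum from the two half-sample fits; a small p-value signals a shift in the regression coefficients at the midpoint.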

17.
In this paper, we derive exact explicit expressions for the single, double, triple and quadruple moments of the upper record values from a generalized Pareto distribution. We then use these expressions to compute the mean, variance, and the coefficients of skewness and kurtosis of certain linear functions of record values. Finally, we develop approximate confidence intervals for the location and scale parameters of the generalized Pareto distribution using the Edgeworth approximation and compare them with the intervals constructed through Monte Carlo simulations. Received: June 1999
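Such moment expressions can be sanity-checked by simulation. The sketch below is illustrative only (parameter values are arbitrary and nothing here reproduces the paper's closed forms); it uses the standard representation of the n-th upper record value as F^{-1}(1 - exp(-S_n)), where S_n is a sum of n i.i.d. unit exponentials, applied to a generalized Pareto quantile function:

# Rough Monte Carlo check of moments of upper record values from a generalized
# Pareto distribution with location 0, scale sigma and shape xi (assumed values).
import numpy as np

def gpd_quantile(u, sigma=1.0, xi=0.2):
    """Quantile function of the GPD with location 0."""
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

def upper_records(n_records, n_sims, sigma=1.0, xi=0.2, seed=0):
    """Simulate the first n_records upper record values, n_sims times."""
    rng = np.random.default_rng(seed)
    s = np.cumsum(rng.exponential(size=(n_sims, n_records)), axis=1)   # Exp(1) partial sums
    return gpd_quantile(1.0 - np.exp(-s), sigma, xi)

if __name__ == "__main__":
    r = upper_records(n_records=4, n_sims=200_000)
    print("simulated means of R_1..R_4:", r.mean(axis=0).round(3))
    print("simulated variances       :", r.var(axis=0).round(3))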

18.
Nonparametric methods for measuring productivity indexes based on bounds for the underlying production technology are presented. Following Banker and Maindiratta, the lower bound is obtained from a primal approach while the upper bound corresponds to a dual approach to nonparametric production analysis. These nonparametric bounds are then used to estimate input-based and output-based distance functions. These radial measures provide the basis for measuring productivity indexes. Application to time series data on U.S. agriculture indicates a large gap between the primal lower bound and the dual upper bound, which generates striking differences between the primal and dual nonparametric productivity indexes. Respectively, professor and associate professor of Agricultural Economics, University of Wisconsin-Madison. Seniority of authorship is equally shared. We would like to thank Rolf Färe and an anonymous reviewer for useful comments on an earlier draft of the paper. This research was supported in part by a Hatch grant from the College of Agriculture and Life Sciences, University of Wisconsin-Madison.
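For reference, the input-based and output-based distance functions referred to above are, in their standard Shephard form (assuming L(y) denotes the input requirement set and P(x) the producible output set),

\[
D_I(y, x) = \sup\{\lambda > 0 : x/\lambda \in L(y)\}, \qquad
D_O(x, y) = \inf\{\theta > 0 : y/\theta \in P(x)\} ;
\]

evaluating these radial measures on the primal and dual technology bounds is what produces the lower- and upper-bound productivity indexes compared in the paper.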

19.
Bairamov et al. (Aust N Z J Stat 47:543–547, 2005) characterize the exponential distribution in terms of the regression of a function of a record value with its adjacent record values as covariates. We extend these results to the case of non-adjacent covariates. We also consider a more general setting involving monotone transformations. As special cases, we present characterizations involving weighted arithmetic, geometric, and harmonic means.

20.
In the two-sample prediction problem, record values from the present sample may be used as predictors of order statistics from a future sample. In this paper, we investigate the nearness of record statistics (upper and lower) to order statistics from a location-scale family of distributions in the sense of Pitman closeness and discuss the corresponding monotonicity properties. We then determine the closest record value to a specific order statistic from a future sample. Even though in general it depends on the parent distribution, exact and explicit expressions are derived for the required probabilities in the case of exponential and uniform distributions, and some computational results are presented as well. Finally, we consider the mean squared error criterion and examine the corresponding results in the exponential case.
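As a reminder of the criterion used above, stated here in its usual general form rather than as a quotation from the paper: a predictor T_1 of a target Y is Pitman-closer than a competing predictor T_2 if

\[
P\bigl(\,|T_1 - Y| < |T_2 - Y|\,\bigr) \ge \tfrac{1}{2} ,
\]

and in this setting Y is an order statistic from the future sample while T_1 and T_2 are record values from the present sample.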
