Similar Articles
20 similar articles found.
1.
We discuss a practical method to price and hedge European contingent claims on assets whose price processes follow a jump-diffusion. The method consists of a sequence of trinomial models for the asset and option price processes, which are shown to converge weakly to the corresponding continuous-time jump-diffusion processes. The main difference from many existing methods is that our approach ensures that the intermediate discrete-time approximations are themselves complete models, just as in the Black-Scholes binomial approximations. This is only possible by dropping the assumption that the approximations of the increments of the Wiener and Poisson processes on our trinomial tree are independent, but we show that the dependence between these processes disappears in the weak limit. The approximations thus define an easy and flexible method for pricing and hedging in jump-diffusion models using explicit trees. Mathematics Subject Classification (2000): 60B10, 60H35. Journal of Economic Literature Classification: G13.
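As a point of reference for the tree construction described above, here is a minimal sketch of a Kamrad-Ritchken-style trinomial tree for a European call under a pure diffusion (no jump branch); the paper's construction augments such a tree with a jump component whose increments are dependent on the diffusion increments. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def trinomial_european_call(S0, K, r, sigma, T, steps, lam=np.sqrt(3.0)):
    """Trinomial tree for a European call under pure diffusion."""
    dt = T / steps
    dx = lam * sigma * np.sqrt(dt)           # log-price step
    nu = r - 0.5 * sigma**2                  # drift of the log-price
    pu = 0.5 / lam**2 + 0.5 * nu * np.sqrt(dt) / (lam * sigma)
    pd = 0.5 / lam**2 - 0.5 * nu * np.sqrt(dt) / (lam * sigma)
    pm = 1.0 - pu - pd
    disc = np.exp(-r * dt)
    j = np.arange(-steps, steps + 1)         # terminal nodes, low to high
    V = np.maximum(S0 * np.exp(j * dx) - K, 0.0)
    for _ in range(steps):                   # backward induction
        V = disc * (pu * V[2:] + pm * V[1:-1] + pd * V[:-2])
    return V[0]

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(trinomial_european_call(100, 100, 0.05, 0.2, 1.0, 500))  # ~10.45
print(black_scholes_call(100, 100, 0.05, 0.2, 1.0))
```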

2.
We consider kernel-smoothed Grenander-type estimators for a monotone hazard rate and a monotone density in the presence of randomly right-censored data. We show that they converge at rate n^{2/5} and that the limit distribution at a fixed point is Gaussian with explicitly given mean and variance. It is well known that standard kernel smoothing leads to inconsistency problems at the boundary points. It turns out that even with a boundary correction we can only establish uniform consistency on intervals that stay away from the end point of the support (although we can get arbitrarily close to the right boundary).
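The following is a minimal sketch of the Grenander estimator for a nonincreasing density on [0, ∞) and its kernel-smoothed version, in the uncensored case; the paper's setting (random right censoring, monotone hazard, boundary correction) is more general. The Gaussian kernel, the bandwidth n^{-1/5}, and the exponential test sample are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def grenander_lcm(x):
    """Least concave majorant of the ECDF of x; returns the hull knots
    and the segment slopes (the Grenander density estimator)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(n + 1) / n])
    hull = []
    for p in pts:
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # pop while the last knot lies on or below the chord o -> p
            if (a[0]-o[0])*(p[1]-o[1]) - (a[1]-o[1])*(p[0]-o[0]) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hull = np.array(hull)
    slopes = np.diff(hull[:, 1]) / np.diff(hull[:, 0])
    return hull, slopes

def smoothed_grenander(t, hull, slopes, b):
    """Gaussian-kernel smoothing, computed segment-wise in closed form:
    f~(t) = sum_i slope_i * (Phi((t-a_i)/b) - Phi((t-c_i)/b))."""
    a, c = hull[:-1, 0], hull[1:, 0]
    t = np.atleast_1d(t)[:, None]
    return (slopes * (norm.cdf((t - a) / b) - norm.cdf((t - c) / b))).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.exponential(size=500)              # true density e^{-u}, nonincreasing
hull, slopes = grenander_lcm(x)
grid = np.linspace(0.2, 2.0, 5)
print(smoothed_grenander(grid, hull, slopes, b=500**(-1/5)))  # vs np.exp(-grid)
```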

3.
Theoretical models of multi-unit, uniform-price auctions assume that the price is given by the highest losing bid. In practice, however, the price is usually given by the lowest winning bid. We derive the equilibrium bidding function of the lowest-winning-bid auction when there are k objects for sale and n bidders with unit demand, and prove that it converges to the bidding function of the highest-losing-bid auction if and only if the number of losers n − k gets large. When the number of losers grows large, the bidding functions converge at a linear rate, and the prices in the two auctions converge in probability to the expected value of an object to the marginal winner.
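The lowest-winning-bid equilibrium is derived in the paper itself; the sketch below only simulates the benchmark highest-losing-bid auction, where bidding one's value is a dominant strategy, to illustrate the claim that the price concentrates around the value of the marginal winner as the number of losers grows. Uniform valuations and the k = n/2 schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def hlb_price(n, k, reps=20000):
    """Highest-losing-bid uniform-price auction with unit demand and
    truthful bidding: the price is the (k+1)-st highest of n values."""
    v = rng.random((reps, n))                    # i.i.d. U(0,1) valuations
    part = np.partition(v, n - k - 1, axis=1)
    return part[:, n - k - 1]                    # (k+1)-st highest value

# Keep k/n fixed at 1/2 and let the number of losers n - k grow:
for n in (10, 100, 1000):
    k = n // 2
    p = hlb_price(n, k)
    print(n, p.mean().round(4), p.std().round(4))  # mean -> 0.5, spread -> 0
```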

4.
Yun Li, Quanxi Shao, Metrika (2007) 66(1):89-104
A near-maximum is an observation that falls within a distance a of the maximum observation in an independent and identically distributed sample of size n. Subject to some conditions on the tail thickness of the population distribution, the number K_n(a) of near-maxima is known to converge in probability to one or to infinity, or in distribution to a shifted geometric law. In this paper we show that for all Burr XII distributions K_n(a) converges almost surely to unity, but this convergence may not become apparent in certain cases even for very large n. We explore the reason for this slow convergence by studying a distributional continuity between the Burr XII and Weibull distributions. We also give a theoretical explanation of the slow convergence of K_n(a) for the Burr XII distributions by showing that the rate at which P{K_n(a) > 1} tends to zero changes very little with the sample size n. Illustrations of the limiting behaviour of K_n(a) for the Burr XII and Weibull distributions are given by simulations and real data. The study also raises an important practical issue: although the Burr XII provides an overall better fit to a given data set than the Weibull distribution, caution should be exercised when extrapolating the upper tail behaviour in the case of slow convergence.
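A minimal simulation in the spirit of the illustrations mentioned above: estimate P{K_n(a) > 1} for Burr XII and Weibull samples across sample sizes. The parameter choices (c, k, the Weibull shape, and the window a) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def burr_xii(n, c=2.0, k=1.0):
    """Burr XII sample via inversion: F(x) = 1 - (1 + x^c)^(-k)."""
    u = rng.random(n)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def prob_more_than_one_near_max(sampler, n, a=0.5, reps=2000):
    """Monte Carlo estimate of P{K_n(a) > 1}: the probability that some
    observation other than the maximum lies within a of the maximum."""
    hits = 0
    for _ in range(reps):
        x = sampler(n)
        m = x.max()
        if np.sum(x > m - a) > 1:
            hits += 1
    return hits / reps

for n in (100, 1000, 10000):
    p_burr = prob_more_than_one_near_max(burr_xii, n)
    p_weib = prob_more_than_one_near_max(lambda m: rng.weibull(2.0, m), n)
    print(n, p_burr, p_weib)   # Burr XII decays slowly, per the abstract
```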

5.
We develop a consumer search model in which consumers may remain uncertain about product quality even after inspecting the product. We first consider post-search uncertainty about vertical quality and characterize the separating equilibrium in which firms with different quality levels charge different prices. If quality information is not sufficiently transparent after the search, then the prices of the low- and high-quality products can either diverge or converge as the search cost decreases, depending on the degrees of horizontal and vertical product differentiation. We further extend the model to include post-search uncertainty about the horizontal match value and to endogenize the firm's quality choice.

6.
Let X denote a set of n elements or stimuli which have an underlying linear ranking L based on some attribute of comparison. Incomplete ordinal information about L is available in the form of a partial order P on X. This study considers methods that attempt to induce, or reconstruct, L based only on the ordinal information in P. Two basic methods for approaching this problem are the cardinal and sequel construction methods. Exact values are computed for the expected error of weak-order approximations of L obtained from the cardinal and sequel construction methods. Results involving interval orders and semiorders for P are also considered. Previous simulation comparisons of cardinal and sequel construction methods on interval orders were found to depend on the specific model used to generate random interval orders, and do not hold for interval orders in general. Finally, we consider the likelihood that any particular linear extension of P is the underlying L.
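As a small illustration of the final point, the brute-force sketch below enumerates all linear extensions of a partial order; under a uniform prior, each extension is the underlying L with probability 1/(number of extensions). The example poset is hypothetical, and the permutation scan is exponential in n, so this is only viable for small sets.

```python
from itertools import permutations

def linear_extensions(elements, less_than):
    """All linear extensions of the partial order given by the strict
    relations `less_than` (pairs (a, b) meaning a < b)."""
    exts = []
    for perm in permutations(elements):
        pos = {e: i for i, e in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in less_than):
            exts.append(perm)
    return exts

# Hypothetical P on {1,2,3,4} with 1<2, 1<3, 2<4 known:
exts = linear_extensions([1, 2, 3, 4], [(1, 2), (1, 3), (2, 4)])
print(len(exts), exts)   # 3 extensions, each the true L with prob. 1/3
```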

7.
In reliability studies, k-out-of-n systems play an important role. In this paper, we consider sharp bounds for the mean residual life function of a k-out-of-n system consisting of n identical components with independent lifetimes having a common distribution function F, measured in location and scale units of the residual life random variable X_t = (X − t | X > t). We characterize the probability distributions for which the bounds are attained. We also evaluate the resulting bounds numerically for various choices of k and n.
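A minimal Monte Carlo sketch of the quantity being bounded: the mean residual life E(T − t | T > t) of a k-out-of-n system, whose lifetime T is the (n − k + 1)-th smallest component lifetime. Standard-exponential component lifetimes and the choices of k, n, t are illustrative assumptions; the paper derives sharp distribution-free bounds rather than simulating a particular F.

```python
import numpy as np

rng = np.random.default_rng(3)

def korn_mrl(n, k, t, reps=200000):
    """Mean residual life of a k-out-of-n system with i.i.d. standard
    exponential components: T is the (n-k+1)-th order statistic."""
    x = rng.exponential(size=(reps, n))
    T = np.sort(x, axis=1)[:, n - k]         # (n-k+1)-th smallest lifetime
    alive = T > t
    return (T[alive] - t).mean()

print(korn_mrl(n=5, k=3, t=0.5))             # MRL of a 3-out-of-5 system
```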

8.
E. T. Salehi, M. Asadi, Metrika (2012) 75(4):439-454
We consider an (n − k + 1)-out-of-n system with independent and nonidentical components. Given that the system has failed by time t, we study the past lifetime of the components of the system. The mean past lifetime of the components is defined and some of its properties are investigated. Stochastic comparisons are also made between the past lifetimes of different systems.

9.
We consider a principal who is keen to induce his agents to work at their maximal effort levels. To this end, he samples n days at random out of the T days on which they work, and awards a prize of B dollars to the most productive agent. The principal's policy (B, n) induces a strategic game Γ(B, n) between the agents. We show that to implement maximal effort levels weakly (or, strongly) as a strategic equilibrium (or, as dominant strategies) in Γ(B, n), at the least cost B to himself, the principal must choose a small sample size n. Thus less scrutiny by the principal induces more effort from the agents. The need for reduced scrutiny becomes more pronounced when agents have information about the history of past plays in the game. There is an inverse relation between information and optimal sample size: as agents acquire more information (about each other), the principal, so to speak, must "undo" this by reducing his information (about them) and choosing the sample size n even smaller.

10.
We consider mixed systems composed of a fixed number of components whose lifetimes are i.i.d. with a known distribution having a positive and finite variance. We show that a certain k-out-of-n system has the minimal lifetime variance, and that the maximal one is attained by a mixture of the series and parallel systems. The index k of this system and the probability weights of the mixture depend on the first two moments of the order statistics of the parent distribution of the component lifetimes. We also show methods of calculating extreme system lifetime variances under various restrictions on the system lifetime expectations, and vice versa.
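A small simulation sketch of the objects being compared: the lifetime variance of each k-out-of-n system and of a series/parallel mixture, for i.i.d. exponential component lifetimes. The distribution and the mixing weight w are illustrative assumptions; the paper determines the optimal k and the mixture weights from the first two moments of the order statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

n, reps = 5, 200000
x = np.sort(rng.exponential(size=(reps, n)), axis=1)  # order statistics

# Variance of each k-out-of-n lifetime, the (n-k+1)-th order statistic:
for k in range(1, n + 1):
    print(k, x[:, n - k].var().round(4))

# Mixture of the series (n-out-of-n) and parallel (1-out-of-n) systems:
# with probability w take the series lifetime, else the parallel one.
w = 0.5                                   # illustrative weight
pick = rng.random(reps) < w
mix = np.where(pick, x[:, 0], x[:, n - 1])
print("series/parallel mixture:", mix.var().round(4))
```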

11.
We consider the uniformly most powerful unbiased (UMPU) one-sided test for the comparison of two proportions based on sample sizes m and n, i.e., the randomized version of Fisher's exact one-sided test. We show that the power function of the one-sided UMPU test based on sample sizes m and n can coincide with the power function of the UMPU test based on sample sizes m + 1 and n for certain levels on the entire parameter space. A characterization of all such cases with identical power functions is derived. This characterization is closely related to number-theoretic problems concerning Fermat-like binomial equations. Some consequences for Fisher's original exact test are also discussed.

12.
We consider games with n players and r alternatives. In these games each player must choose exactly one alternative, which yields an ordered partition of the set of players. An extension of the Shapley value to this framework is studied. Received: 1 November 1997 / Accepted: 24 January 1999.
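For reference, the sketch below computes the classical Shapley value of a transferable-utility game by averaging marginal contributions over all player orderings; this is the object the paper extends to games with r alternatives. The three-player game is a hypothetical example.

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Classical Shapley value: average each player's marginal
    contribution v(S + {p}) - v(S) over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: val / n_fact for p, val in phi.items()}

# Hypothetical 3-player game: a coalition is worth 1 if it contains
# player 1 and at least one other player.
v = lambda S: 1.0 if 1 in S and len(S) >= 2 else 0.0
print(shapley_value([1, 2, 3], v))   # {1: 2/3, 2: 1/6, 3: 1/6}
```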

13.
We consider the normalized least squares estimator of the parameter in a nearly integrated first-order autoregressive model with dependent errors. In a first step we derive its asymptotic distribution as well as an asymptotic expansion up to order O_p(T^{-1}). We obtain a limiting moment generating function which enables us to calculate various distributional quantities by numerical integration. A simulation study is performed to assess the adequacy of the asymptotic distribution when the errors are correlated. We focus our attention on two leading cases: MA(1) errors and AR(1) errors. The asymptotic approximations are shown to be inadequate as the MA root gets close to −1 and as the AR root approaches either −1 or 1. Our theoretical analysis helps to explain the simulation results of Schwert (1989) and DeJong, Nankervis, Savin, and Whiteman (1992) concerning the size and power of Phillips and Perron's (1988) unit root test. A companion paper, Nabeya and Perron (1994), presents alternative asymptotic frameworks for the cases where the usual asymptotic distribution fails to provide an adequate approximation to the finite-sample distribution.
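A simulation sketch of the setting: the normalized least squares statistic T(ρ̂ − ρ) in a nearly integrated AR(1) with MA(1) errors, showing how the distribution shifts as the MA root approaches −1. The sample size, the local-to-unity parameter c, and the replication counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def ls_ar1_stat(T, c=0.0, theta=0.0, reps=5000):
    """Simulate T*(rho_hat - rho) for a nearly integrated AR(1),
    rho = 1 + c/T, with MA(1) errors u_t = e_t + theta * e_{t-1}."""
    rho = 1.0 + c / T
    out = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(T + 1)
        u = e[1:] + theta * e[:-1]
        y = np.zeros(T + 1)
        for t in range(1, T + 1):
            y[t] = rho * y[t - 1] + u[t - 1]
        ylag = y[:-1]
        rho_hat = (ylag * y[1:]).sum() / (ylag ** 2).sum()
        out[r] = T * (rho_hat - rho)
    return out

for theta in (0.0, -0.5, -0.9):          # asymptotics degrade as theta -> -1
    s = ls_ar1_stat(T=200, theta=theta)
    print(theta, np.percentile(s, [5, 50, 95]).round(2))
```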

14.
In the present paper, we consider an (n − k + 1)-out-of-n system with identical components, where the lifetimes of the components are assumed to be independent with a common distribution function F. We assume that the system has failed at or before time t, t > 0. Under these conditions, we are interested in the mean time elapsed since the failure of the components, which we call the mean past lifetime (MPL) of the components at the system level. Several properties of the MPL are studied. It is proved that the relation between the proposed MPL and the underlying distribution is one-to-one. We show that when the components of the system have a decreasing reversed hazard rate, the MPL of the system is increasing with respect to time. Some examples are also provided.

15.
We consider the problem of testing the null hypothesis of no change against the alternative of multiple change points in a series of independent observations when the changes are in the same direction. We extend the tests of Terpstra (1952), Jonckheere (1954) and Puri (1965) to the change-point setup. We obtain the asymptotic null distribution of the proposed tests. We also give approximations for their limiting critical values and tables of their finite-sample Monte Carlo critical values. The results of Monte Carlo power studies conducted to compare the proposed tests with some competitors are reported. This research was supported by research grant SS045 of Kuwait University. Acknowledgments: We wish to thank the two referees for their comments and suggestions, which have significantly improved the presentation. We are particularly thankful to one of the referees for suggesting the test statistics T*_{n1}(k) and T*_{n2}(k).

16.
Subsampling and the m-out-of-n bootstrap have been suggested in the literature as methods for carrying out inference based on post-model-selection estimators and shrinkage estimators. In this paper we consider a subsampling confidence interval (CI) based on an estimator that can be viewed either as a post-model-selection estimator employing a consistent model selection procedure or as a super-efficient estimator. We show that the subsampling CI (of nominal level 1 − α for any α ∈ (0,1)) has asymptotic confidence size (defined to be the limit of the finite-sample size) equal to zero in a very simple regular model. The same result holds for the m-out-of-n bootstrap provided m²/n → 0 and the observations are i.i.d. Similar zero-asymptotic-confidence-size results hold in more complicated models that are covered by the general results given in the paper, and for super-efficient and shrinkage estimators that are not post-model-selection estimators. Based on these results, subsampling and the m-out-of-n bootstrap are not recommended for obtaining inference based on post-consistent-model-selection or shrinkage estimators.
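A finite-sample caricature of the zero-confidence-size phenomenon, assuming a normal location model with a Hodges-type super-efficient estimator (the sample mean shrunk to 0 below an n^{-1/4} threshold) and a symmetric subsampling CI. The sample sizes, the subsample size b, and the local parameter θ_n = 0.5/√n are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def hodges(x):
    """Super-efficient / post-model-selection style estimator: shrink
    the sample mean to 0 when it is below the n^(-1/4) threshold."""
    m = x.mean()
    return m if abs(m) > len(x) ** (-0.25) else 0.0

def subsampling_ci(x, b, alpha=0.05, nsub=500):
    """Symmetric subsampling CI of nominal level 1 - alpha based on the
    Hodges estimator, using subsample size b."""
    n, est = len(x), hodges(x)
    stats = np.empty(nsub)
    for i in range(nsub):
        xs = rng.choice(x, size=b, replace=False)
        stats[i] = np.sqrt(b) * (hodges(xs) - est)
    q = np.quantile(np.abs(stats), 1 - alpha)
    return est - q / np.sqrt(n), est + q / np.sqrt(n)

# Coverage under the local parameter where the confidence size collapses:
n, b, reps, hits = 2000, 100, 300, 0
theta = 0.5 / np.sqrt(n)
for _ in range(reps):
    x = theta + rng.standard_normal(n)
    lo, hi = subsampling_ci(x, b)
    hits += lo <= theta <= hi
print("empirical coverage:", hits / reps)   # far below the nominal 95%
```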

17.
Let (W_n, n ≥ 0) denote the sequence of weak records from a distribution with support S = {α_0, α_1, ..., α_N}. In this paper, we consider regression functions of the form ψ_n(x) = E(h(W_n) | W_{n+1} = x), where h(·) is some strictly increasing function. We show that a single function ψ_n(·) determines F uniquely up to F(α_0). Then we derive an inversion formula which enables us to obtain F from knowledge of ψ_n(·), ψ_{n-1}(·), h(·) and F(α_0).

18.
Let x_1, ..., x_n be a sample from a distribution with infinite expectation; then, as n → ∞, the sample average x̄_n tends to +∞ with probability 1 (see [4]). The sequence of sample averages sometimes exhibits large jumps caused by single large observations. In this paper we consider samples from the "absolute Cauchy" distribution. In practice, one may consider the logarithms of the observations as a sample from a normal distribution; this is what we found in our simulation. After rejecting the log-normality assumption, one will be tempted to regard the extreme observations as outliers. It is shown that discarding the outlying observations leads to underestimation of the expectation, variance and 99th percentile of the actual distribution.
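A quick simulation sketch of both effects for the absolute Cauchy distribution: the running sample average drifts to +∞ with visible jumps, and discarding the top 1% of observations as "outliers" understates the upper tail. The trimming fraction is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

x = np.abs(rng.standard_cauchy(100000))      # "absolute Cauchy" sample
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)
print(running_mean[[99, 999, 9999, 99999]])  # drifts upward, with jumps

# Discarding the largest 1% as "outliers" underestimates the tail:
trimmed = np.sort(x)[:int(0.99 * len(x))]
print("99th percentile, full vs trimmed:",
      np.percentile(x, 99), np.percentile(trimmed, 99))
```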

19.
The quasi-Monte Carlo method (QMC) is an efficient technique for numerical integration. QMC achieves a faster convergence rate, O((ln n)^d / n), than standard Monte Carlo (MC), O(n^{-1/2}), where n is the number of simulations and d the nominal problem dimension. Because of this dependence on d, however, some studies in the literature have claimed that QMC outperforms MC only for d below roughly 20-30. Caflisch et al. (J Comput Finance 1(1):27-46, 1997) proposed extending the superiority of QMC by ANOVA considerations. To this aim, we consider the Asian basket option pricing problem, where d is much higher than 30, by QMC simulation. We investigate the applicability of several path-generation constructions that have been proposed to overcome the dimensionality drawback. We employ principal component analysis, the linear transformation, and the Kronecker product approximation, and test their performance both in terms of computational cost and accuracy. Finally, we compare the results with those obtained by standard MC.
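A minimal sketch of the PCA path construction with scrambled Sobol points for a single-asset arithmetic Asian call (the basket case adds the asset dimension, but the construction is the same); d = 64 monitoring dates already puts the nominal dimension well above 30. All market parameters are illustrative, and this uses scipy.stats.qmc rather than the implementations compared in the paper.

```python
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T, d = 100.0, 100.0, 0.05, 0.2, 1.0, 64
dt = T / d
t = dt * np.arange(1, d + 1)

# PCA construction: W = A z with A A' = Cov(W), Cov_ij = min(t_i, t_j)
C = np.minimum.outer(t, t)
lam, V = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]                # leading components first
A = V[:, order] * np.sqrt(lam[order])

def asian_call(z):
    """Discounted arithmetic Asian call payoff from standard normal
    inputs z (one row per path), using the PCA construction."""
    W = z @ A.T
    S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W)
    return np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)

n = 2 ** 14
# Scrambled Sobol (QMC) versus plain Monte Carlo with the same n:
z_qmc = norm.ppf(qmc.Sobol(d=d, scramble=True, seed=0).random(n))
z_mc = np.random.default_rng(0).standard_normal((n, d))
print("QMC:", asian_call(z_qmc).mean())
print(" MC:", asian_call(z_mc).mean())
```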
