Similar Literature
Found 20 similar documents (search time: 281 ms)
1.
We present a particular class of measure spaces, hyperfinite Loeb spaces, as a model of situations where individual players are strategically negligible, as in large non-anonymous games, or where information is diffused, as in games with imperfect information. We present results on the existence of Nash equilibria in both kinds of games. Our results cover the case when the action sets are taken to be the unit interval, results now known to be false when they are based on more familiar measure spaces such as the Lebesgue unit interval. We also emphasize three criteria for the modelling of such game-theoretic situations (asymptotic implementability, homogeneity and measurability) and argue for games on hyperfinite Loeb spaces on the basis of these criteria. In particular, we show through explicit examples that a sequence of finite games with an increasing number of players or sample points cannot always be represented by a limit game on a Lebesgue space, and even when it can be so represented, the limit of an existing approximate equilibrium may disappear in the limit game. Thus, games on hyperfinite Loeb spaces constitute the 'right' model even if one is primarily interested in capturing the asymptotic nature of large but finite game-theoretic phenomena.

2.
A trend in actuarial finance is to combine technical risk with interest risk. If Yt, t = 1, 2, …, denotes the time value of money (the discount factor at time t) and Xt the stochastic payment to be made at time t, the random variable of interest is often the scalar product of these two random vectors, V = ∑t Xt Yt. The vectors X and Y are supposed to be independent, although in general each has dependent components. Current insurance practice, based on the law of large numbers, disregards the stochastic financial aspects of insurance. On the other hand, introducing the variables Y1, Y2, … to describe the financial aspects necessitates estimation or knowledge of their distribution function.
We investigate some statistical models for problems of insurance and finance, including Risk Based Capital/Value at Risk, Asset Liability Management, the distribution of annuities, cash flow evaluations (in the framework of pension funds, embedded value of a portfolio, Asian options) and provisions for claims incurred but not reported (IBNR).
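The scalar product V is easy to explore by Monte Carlo simulation. The following is a minimal sketch; the model choices (normally distributed short rates driving the discount factors, normal payments, the horizon and all parameter values) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim, T = 100_000, 10

# Hypothetical model: i.i.d. normal short rates drive the discount factors;
# payments are i.i.d. normal.  X and Y are independent of each other, but Y
# has dependent components through the cumulative sum of rates.
rates = rng.normal(0.04, 0.01, size=(n_sim, T))
Y = np.exp(-np.cumsum(rates, axis=1))         # discount factors Y_1, ..., Y_T
X = rng.normal(100.0, 15.0, size=(n_sim, T))  # stochastic payments X_1, ..., X_T

V = (X * Y).sum(axis=1)                       # V = sum_t X_t * Y_t
print(V.mean(), V.std())
```

The simulation yields the full empirical distribution of V, which is what applications such as Value at Risk require.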

3.
A class of stochastic unit-root bilinear processes, allowing for GARCH-type effects with asymmetries, is studied. Necessary and sufficient conditions for the strict and second-order stationarity of the error process are given. The strictly stationary solution is shown to be strongly mixing under mild additional assumptions. It follows that, in this model, the standard (non-stochastic) unit-root tests of Phillips–Perron and Dickey–Fuller are asymptotically valid for detecting the presence of a (stochastic) unit root. The finite-sample properties of these tests are studied via Monte Carlo experiments.
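As a point of reference, the standard Dickey–Fuller statistic mentioned above can be sketched in a few lines. The regression form without constant or trend is assumed here for simplicity; the sample size and seed are arbitrary:

```python
import numpy as np

def df_tstat(y):
    """t-statistic for rho = 0 in the Dickey-Fuller regression
    diff(y_t) = rho * y_{t-1} + e_t  (no constant, no trend)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=500))  # pure random walk: a unit root holds
print(df_tstat(y))
```

Under the null of a unit root the statistic follows the (non-standard) Dickey–Fuller distribution, which is what the paper shows remains asymptotically valid in the stochastic unit-root model.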

4.
Some of the recently developed models for economic problems involving uncertainty are based on simplifying assumptions about the stochastic law of the environment influencing economic decisions. Relying on the theory of martingales, we derive some general results on the asymptotic behavior of two dynamic processes that are of interest in the theory of intertemporal resource allocation. The first example is related to the 'turnpike' theory of optimal allocation. The second addresses the question of allocating a scarce resource by prices when its supply is random.

5.
The layout of production facilities is an important determinant of the productivity potential of a manufacturing enterprise. It is particularly important in the design of assembly lines, where the objective is to assign tasks to work stations in such a way as to minimize total variable production costs.

Early approaches to the line balancing problem assumed known, constant task times and sought a line layout that would produce the desired output with the fewest work stations, which is equivalent to minimizing idle time. Studies have shown that task times are random variables; therefore the cost of task incompletion must be considered part of total production cost. Incompletion cost is the cost of repairing or completing tasks that cannot be finished within the cycle time by the time the item reaches the end of the assembly line.

This paper describes a methodology for designing approximately minimum-cost paced assembly lines under conditions of random task times and off-line repair of uncompleted tasks. Task times are assumed to be normally distributed random variables with known means and variances. The methodology consists of heuristically identifying a large number of feasible balances, for each of which total costs are computed. The line design with the lowest total is retained as the "best."

To evaluate candidate line layouts, a total cost model is developed. Total cost is the sum of normal operating cost, which is simply a function of the number of work stations, and the cost of repairing products containing uncompleted tasks. Because this latter cost is a random variable for a given balance, its expected value is used to evaluate a candidate layout. The cost associated with one or more workers exceeding the cycle time is the product of the probability of this happening and the expected cost of off-line repair.

The heuristic method for generating feasible balances builds work stations from continually updated lists of precedence-satisfying tasks. Qualifying tasks are added to the station as long as the probability of the station exceeding the cycle time remains below a pre-specified threshold. The methodology requires systematically varying this threshold to permit a lowest-total-cost solution to emerge. Generating a large number of balances for a particular threshold is efficient; evaluating the total costs of the resulting balances takes the majority of the computational time.

An experiment was conducted to compare this cost-effective methodology with a purely deterministic approach and a commonly used industrial approximation method for dealing with task time variability. The experiment applied the three methods to four problems from the literature under a variety of repair cost and time variance conditions. In 21 of the 24 cases studied, the stochastic method produced a lower-cost balance than the two alternatives; in the remaining 3 cases, the deterministic method also found the lowest-cost balance. The stochastic method saved an average of 22.5 percent in total operating cost over the deterministic method and 8.4 percent over the industrial method.

The experiment clearly showed the need to consider task time variability explicitly in arriving at a line balance. The stochastic approach of this paper offers large potential savings with no risk of obtaining a less desirable balance, and so should be considered for implementation whenever task times vary. Even for large-scale problems, the computational cost is infinitesimal in the context of assembly line balancing, where very small improvements in productivity can mean substantial increments to profitability.
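The station-building heuristic described above can be sketched as follows. The normal approximation for station completion time follows the abstract's assumptions; the task data, threshold value, and tie-breaking by task id are illustrative choices of this sketch:

```python
import math

def p_exceed(mu, var, cycle):
    """P(station time > cycle) under a normal approximation N(mu, var)."""
    if var == 0.0:
        return 0.0 if mu <= cycle else 1.0
    z = (cycle - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def build_stations(tasks, preds, cycle, p_max):
    """Greedy balance: `tasks` maps id -> (mean, variance); `preds` maps
    id -> set of immediate predecessors.  A precedence-satisfying task is
    added to the current station while the probability of exceeding the
    cycle time stays below p_max."""
    assigned, stations = set(), []
    while len(assigned) < len(tasks):
        station, mu, var = [], 0.0, 0.0
        added = True
        while added:
            added = False
            for t in sorted(tasks):
                if t in assigned or not preds[t] <= assigned:
                    continue
                m, v = tasks[t]
                if p_exceed(mu + m, var + v, cycle) < p_max:
                    station.append(t)
                    assigned.add(t)
                    mu, var = mu + m, var + v
                    added = True
        if not station:  # even an empty station cannot accept any task
            raise ValueError("cycle time too short for some task")
        stations.append(station)
    return stations

tasks = {1: (3.0, 0.25), 2: (4.0, 0.5), 3: (2.0, 0.25), 4: (5.0, 1.0)}
preds = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}
print(build_stations(tasks, preds, cycle=8.0, p_max=0.05))
```

Re-running with a grid of p_max values and costing each resulting balance would reproduce the outer loop of the methodology.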

6.
Aiting Shen & Andrei Volodin, Metrika (2017) 80(6–8), 605–625
In this paper, the Marcinkiewicz–Zygmund type moment inequality for extended negatively dependent (END, for short) random variables is established. Under suitable conditions of uniform integrability, the \(L_r\) convergence, weak law of large numbers and strong law of large numbers for usual normed sums and weighted sums of arrays of rowwise END random variables are investigated by using this moment inequality. In addition, some applications of the \(L_r\) convergence and the weak and strong laws of large numbers to nonparametric regression models based on END errors are provided. The results obtained generalize or improve corresponding results for negatively associated random variables and negatively orthant dependent random variables.

7.
Ornstein–Uhlenbeck models are continuous-time processes with broad applications in finance, e.g., as volatility processes in stochastic volatility models or as spread models in spread options and pairs trading. The paper presents a least squares estimator for the model parameter in a multivariate Ornstein–Uhlenbeck model driven by a multivariate regularly varying Lévy process with infinite variance. We show that the estimator is consistent. Moreover, we derive its asymptotic behavior and test statistics. The results are compared to the finite variance case. For the proof we require some new results on multivariate regular variation of products of random vectors and central limit theorems. Furthermore, we embed this model in the setup of a co-integrated model in continuous time.
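A univariate, finite-variance version of such a least squares estimator is easy to sketch. The paper's setting is multivariate with an infinite-variance regularly varying Lévy driver; the Gaussian driver, exact AR(1) discretization, and parameter values below are simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, dt, n = 1.5, 0.01, 200_000   # mean-reversion rate, step, sample size

# Exact discretization of dX = -lam * X dt + dW at step dt:
# X_{t+dt} = a * X_t + eta,  a = exp(-lam*dt),  Var(eta) = (1 - a^2)/(2*lam).
a = np.exp(-lam * dt)
noise_sd = np.sqrt((1.0 - a**2) / (2.0 * lam))
x = np.empty(n)
x[0] = 0.0
eps = rng.normal(size=n - 1)
for i in range(n - 1):
    x[i + 1] = a * x[i] + noise_sd * eps[i]

a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])  # least squares AR(1) slope
lam_hat = -np.log(a_hat) / dt                 # implied mean-reversion rate
print(lam_hat)
```

In the heavy-tailed case treated in the paper, the same least squares recipe applies but the limit theory (rates and limit distributions) changes, which is the substance of the result.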

8.
The aim of this paper is to present several stochastic analogs of classical formulas for the gamma function. The results obtained provide representations of some random variables as finite or infinite products of independent random variables. Examples include generalized gamma, normal, beta and other distributions.

9.
We provide analytical formulae for the asymptotic bias (ABIAS) and mean-squared error (AMSE) of the IV estimator, and obtain approximations thereof based on an asymptotic scheme which essentially requires the expectation of the first-stage F-statistic to converge to a finite (possibly small) positive limit as the number of instruments approaches infinity. Our analytical formulae can be viewed as generalizing the bias and MSE results of Richardson and Wu (1971, A note on the comparison of ordinary and two-stage least squares estimators, Econometrica 39, 973–982) to the case with nonnormal errors and stochastic instruments. Our approximations are shown to compare favorably with those of Morimune (1983, Approximate distributions of k-class estimators when the degree of overidentifiability is large compared with the sample size, Econometrica 51, 821–841) and Donald and Newey (2001, Choosing the number of instruments, Econometrica 69, 1161–1191), particularly when the instruments are weak. We also construct consistent estimators for the ABIAS and AMSE, and use these to construct a number of bias-corrected OLS and IV estimators, whose properties are examined both analytically and via a series of Monte Carlo experiments.
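The weak-instrument bias that these approximations target shows up readily in simulation. Below is a sketch with a single instrument; the design and all parameter values are invented for illustration, and medians rather than means are reported because the just-identified IV estimator has no finite moments:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_rep, beta = 200, 2000, 1.0
ols, iv = [], []
for _ in range(n_rep):
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = 0.8 * u + 0.6 * rng.normal(size=n)  # corr(u, v) = 0.8: endogeneity
    x = 0.3 * z + v                          # fairly weak first stage
    y = beta * x + u
    ols.append((x @ y) / (x @ x))            # OLS slope (no intercept)
    iv.append((z @ y) / (z @ x))             # just-identified IV estimator
print(np.median(ols) - beta, np.median(iv) - beta)
```

OLS is badly biased by the endogeneity, while IV is much less biased but far more variable; the ABIAS/AMSE formulae in the paper quantify exactly this trade-off as instrument strength varies.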

10.
We compute the expected product of two correlated Brownian area integrals, a problem that arises in the analysis of a popular sorting algorithm. Along the way we find three different formulas for the expectation of the product of the absolute values of two standard normal random variables with correlation θ. These formulas are obtained: (a) via conditioning and the non-central chi-square distribution; (b) via Mehler's formula; (c) by representing the correlated normal random variables in terms of independent normals and integrating in polar coordinates.
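For reference, the classical closed form for this expectation, with θ the correlation of the two standard normals, is E|XY| = (2/π)(√(1 − θ²) + θ·arcsin θ), which a quick simulation check confirms (the sample size, seed, and θ = 0.6 below are arbitrary):

```python
import numpy as np

def abs_product_mean(theta):
    """Closed form for E|XY| when (X, Y) is standard bivariate normal
    with correlation theta."""
    return (2.0 / np.pi) * (np.sqrt(1.0 - theta**2) + theta * np.arcsin(theta))

rng = np.random.default_rng(4)
theta = 0.6
x = rng.normal(size=500_000)
y = theta * x + np.sqrt(1.0 - theta**2) * rng.normal(size=500_000)
print(np.abs(x * y).mean(), abs_product_mean(theta))
```

Sanity checks on the formula itself: at θ = 0 it gives 2/π = E|X|·E|Y|, and at θ = 1 it gives 1 = E[X²].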

11.
This paper studies stochastic stability methods applied to processes on general state spaces. This includes settings in which agents repeatedly interact and choose from an uncountable set of strategies. Dynamics exist for which the stochastically stable states differ from those of any reasonable finite discretization. When there are a finite number of rest points of the unperturbed dynamic, sufficient conditions for analogues of results from the finite state space literature are derived and studied. Illustrative examples are given.

12.
When the coefficient of variation, namely the ratio of the standard deviation to the mean, approaches zero as the number of economic agents becomes large, a system is called self-averaging; otherwise, it is non-self-averaging. Most economic models take it for granted that the economic system is self-averaging. However, they are based on the extremely unrealistic assumptions that all economic agents face the same probability distribution and that micro shocks are independent. Once these unrealistic assumptions are dropped, non-self-averaging behavior naturally emerges. Using a simple stochastic growth model, this paper demonstrates that the coefficient of variation of aggregate output (GDP) does not go to zero even as the number of sectors or economic agents goes to infinity. Non-self-averaging phenomena imply that even when the number of economic agents is large, dispersion can remain significant, so we cannot legitimately focus solely on the means of aggregate variables. This, in turn, means that standard microeconomic foundations based on representative agents have little value, for they are meant to provide accurate dynamics of the means of aggregate variables. Contrary to the mainstream view, micro-founded macroeconomics such as a dynamic general equilibrium model does not provide solid micro foundations.
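The self-averaging baseline is easy to illustrate: with i.i.d. agents, the coefficient of variation of the aggregate shrinks like 1/√n. The exponential sectoral outputs below are an arbitrary choice for this sketch; the paper's point is precisely that this decay fails once the i.i.d. assumption is dropped:

```python
import numpy as np

rng = np.random.default_rng(5)

def coeff_var(n_agents, n_rep=5000):
    """Coefficient of variation of aggregate output for i.i.d. agents."""
    agg = rng.exponential(1.0, size=(n_rep, n_agents)).sum(axis=1)
    return agg.std() / agg.mean()

for n in (10, 100, 1000):
    print(n, coeff_var(n))  # shrinks roughly like 1/sqrt(n): self-averaging
```

In the non-self-averaging models of the paper, the analogous plot would flatten out instead of decaying to zero.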

13.
We introduce closed-form transition density expansions for multivariate affine jump-diffusion processes. The expansions rely on a general approximation theory which we develop in weighted Hilbert spaces for random variables that possess all polynomial moments. We establish parametric conditions which guarantee existence and differentiability of transition densities of affine models and show how they naturally fit into the approximation framework. Empirical applications in option pricing, credit risk, and likelihood inference highlight the usefulness of our expansions. The approximations are extremely fast to evaluate, highly accurate, and numerically stable.

14.
An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer-valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work.

15.
The problem of numerically pricing credit default index swaptions on a large number of names is considered. We place ourselves in a stochastic intensity framework, where Ornstein–Uhlenbeck-type correlated processes are used to model both the firms' distance to default and a macroeconomic state variable. The default of the firms follows the reduced-form approach, and the (random) default intensity depends on the behavior of the diffusion processes. We propose a numerical method that combines a Monte Carlo approach with a deterministic finite-difference PDE solver. Numerical tests demonstrate the efficiency and robustness of the proposed procedure.

16.
A strong law of large numbers for a triangular array of strictly stationary associated random variables is proved. It is used to derive the pointwise strong consistency of a kernel-type density estimator of the one-dimensional marginal density function of a strictly stationary sequence of associated random variables, and to obtain an improved version of a result by Van Ryzin (1969) on the strong consistency of density estimators for a sequence of independent and identically distributed random variables.
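A minimal sketch of the kernel density estimator whose consistency is at issue is given below. A Gaussian kernel is used, and the bandwidth and the i.i.d. standard normal sample are illustrative choices; the paper's setting is a stationary associated sequence:

```python
import numpy as np

def kde(x_grid, sample, h):
    """Kernel density estimate with Gaussian kernel and bandwidth h:
    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i) / h)."""
    u = (x_grid[:, None] - sample[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / h

rng = np.random.default_rng(6)
sample = rng.normal(size=2000)
grid = np.linspace(-3.0, 3.0, 61)
f_hat = kde(grid, sample, h=0.3)
f_true = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
print(np.max(np.abs(f_hat - f_true)))  # small: pointwise consistency at work
```

The strong-consistency results of the paper say that, under association and suitable bandwidth conditions, this pointwise error vanishes almost surely as the sample size grows.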

17.
In his seminal paper, Harter (1951) derived the exact distribution of Wald's classification statistic. In this note, we consider the more general problem of deriving the exact distribution of the product XY when X and Y are independent Student's t random variables with any degrees of freedom. Our results are simpler and more general than those of Harter (1951).

18.
This article focuses on a recent concept of covariation for processes taking values in a separable Banach space $B$ and a corresponding quadratic variation. The latter is more general than the classical one of Métivier and Pellaumail. Those notions are associated with some subspace $\chi $ of the dual of the projective tensor product of $B$ with itself. We also introduce the notion of a convolution type process, which is a natural generalization of the Itô process and the concept of $\bar{\nu }_0$ -semimartingale, which is a natural extension of the classical notion of semimartingale. The framework is the stochastic calculus via regularization in Banach spaces. Two main applications are mentioned: one related to Clark–Ocone formula for finite quadratic variation processes; the second one concerns the probabilistic representation of a Hilbert valued partial differential equation of Kolmogorov type.

19.
Different aggregate preference orders based on rankings and top choices have been defined in the literature to describe preferences among items in a fixed set of alternatives. A useful tool in this framework is provided by random utility models, where the utility of each alternative, or object, is represented by a random variable indexed by the object, which can, for example, capture the variability of preferences over a population. Applications arise in diverse research fields, including computer science, management science and reliability. Recently, some stochastic ordering conditions for comparing alternatives by means of aggregate preference orders were provided for independent random utility variables by Joe (Math Soc Sci 43:391–404, 2002). In this paper we provide new conditions, based on joint stochastic orderings, for aggregate preference orders among the alternatives in the case of dependent random utilities. We also provide examples of application in different research fields.
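A toy random utility model makes the top-choice aggregate order concrete. All names and parameter values below are invented for illustration; dependence between the utilities is induced by a shock common to all objects, so each individual's ranking is still driven by the idiosyncratic parts:

```python
import numpy as np

rng = np.random.default_rng(7)
n_pop, n_items = 100_000, 3

# Hypothetical utilities: U_j = mu_j + common shock + idiosyncratic noise.
# The common shock makes the utilities positively dependent across objects.
mu = np.array([0.0, 0.2, 0.5])
common = rng.normal(size=(n_pop, 1))
U = mu + common + rng.normal(size=(n_pop, n_items))

top = U.argmax(axis=1)                              # each individual's top choice
p = np.bincount(top, minlength=n_items) / n_pop     # top-choice probabilities
print(p)
```

Comparing such probability vectors across alternatives is exactly the kind of aggregate preference order for which the paper supplies stochastic-ordering conditions under dependence.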

20.
Although attention has been given to obtaining reliable standard errors for the plug-in estimator of the Gini index, all standard errors suggested until now are either complicated or quite unreliable. An approximation is derived for the estimator by which it is expressed as a sum of IID random variables. This approximation allows us to develop a reliable standard error that is simple to compute. A simple but effective bias correction is also derived. The quality of inference based on the approximation is checked in a number of simulation experiments, and is found to be very good unless the tail of the underlying distribution is heavy. Bootstrap methods are presented which alleviate this problem except in cases in which the variance is very large or fails to exist. Similar methods can be used to find reliable standard errors of other indices which are not simply linear functionals of the distribution function, such as Sen’s poverty index and its modification known as the Sen–Shorrocks–Thon index.
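The plug-in estimator in question can be sketched directly from the order statistics; a standard form is G = 2·Σᵢ i·x₍ᵢ₎ / (n·Σ x) − (n+1)/n. The exponential example below is illustrative (its population Gini index is 1/2):

```python
import numpy as np

def gini_plugin(x):
    """Plug-in (empirical) Gini index from the order statistics:
    G = 2 * sum_i i * x_(i) / (n * sum(x)) - (n + 1) / n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * (i @ x) / (n * x.sum()) - (n + 1.0) / n

print(gini_plugin([1, 1, 1, 1]))                    # perfect equality: 0
rng = np.random.default_rng(8)
print(gini_plugin(rng.exponential(size=100_000)))   # close to 1/2
```

It is the sampling variability of exactly this statistic, a nonlinear functional of the empirical distribution, that makes reliable standard errors non-trivial.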


Copyright©北京勤云科技发展有限公司  京ICP备09084417号