Similar Documents
1.
This study applies real options and extends the model of Lin and Huang [Lin, T.T., Huang, Y.T.: J. Technol. Manage. 8(3), 59–78 (2003)], which helps venture capital (VC) companies optimize project exit decisions. The expected discounted factor and a jump-diffusion process are combined to assess the value of a start-up company and to determine the exit-timing thresholds for liquidation and convertibility, establishing an optimal disinvestment evaluation model for VC companies. When the project value falls below V_L*, the VC company carries out liquidation; when the project value exceeds V_C*, the VC company exercises convertibility. When the project value lies between (V_L*, V_C*), the best choice is to hold the decision and wait to exercise the right of liquidation or convertibility later. In addition, this work identifies the expected discounted time in terms of the investment time for VC companies.
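The exit rule sketched in the abstract reduces to comparing the current project value against the two thresholds. Below is a minimal Python illustration; the function name and all numeric values are hypothetical, and the thresholds themselves would have to come from the paper's jump-diffusion / expected-discounted-factor model.

```python
def exit_decision(project_value: float, v_liq: float, v_conv: float) -> str:
    """Threshold rule for the VC exit decision described above.

    v_liq  -- liquidation threshold V_L* (assumed already computed)
    v_conv -- convertibility threshold V_C*, with v_liq < v_conv
    """
    if project_value <= v_liq:
        return "liquidate"
    if project_value >= v_conv:
        return "convert"
    return "hold and wait"

# Illustrative numbers only -- the actual thresholds come from the model.
print(exit_decision(3.2, v_liq=2.5, v_conv=6.0))   # hold and wait
```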

2.
Miss Sharda 《Metrika》1973,20(1):93-100
This paper studies the transient behaviour of a queueing problem in which two types of units form queues Q1 and Q2 before a single server. The units of Q1 are considered first for service. As soon as there are no units of Q1 in the system, a batch of m units from Q2, or the whole queue, shifts to Q1 and is served. Probability generating functions for the queue lengths in the two cases are obtained.

3.
Let ≺ be an interval order on a topological space (X, τ), and let x ∼* y if and only if [y ≺ z implies x ≺ z], and x ∼** y if and only if [z ≺ x implies z ≺ y]. Then ∼* and ∼** are complete preorders. In the particular case when ≺ is a semiorder, let x ∼0 y if and only if x ∼* y and x ∼** y. Then ∼0 is a complete preorder, too. We present sufficient conditions for the existence of continuous utility functions representing ∼*, ∼** and ∼0, by using the notion of strong separability of a preference relation, which was introduced by Chateauneuf (Journal of Mathematical Economics, 1987, 16, 139–146). Finally, we discuss the existence of a pair of continuous functions u, v representing a strongly separable interval order ≺ on a measurable topological space (X, τ, μ).

4.
Structural instability of the core
Let σ be a q-rule, where any coalition of size q, from the society of size n, is decisive. Let w(n,q) = 2q − n + 1 and let W be a smooth 'policy space' of dimension w. Let U(W)^N be the space of all smooth profiles on W, endowed with the Whitney topology. It is shown that there exists an 'instability dimension' w*(σ) with 2 ≤ w*(σ) ≤ w(n,q) such that:
(i) if w ≥ w*(σ), and W has no boundary, then the core of σ is empty for a dense set of profiles in U(W)^N (i.e., almost always);
(ii) if w ≥ w*(σ)+1, and W has a boundary, then the core of σ is empty, almost always;
(iii) if w ≥ w*(σ)+1, then the cycle set is dense in W, almost always;
(iv) if w ≥ w*(σ)+2, then the cycle set is also path connected, almost always.
The method of proof is first of all to show that if a point belongs to the core, then certain generalized symmetry conditions in terms of 'pivotal' coalitions of size 2q − n must be satisfied. Secondly, it is shown that these symmetry conditions can almost never be satisfied when either W has empty boundary and is of dimension w(n,q), or W has non-empty boundary and is of dimension w(n,q)+1.

5.
We consider the problem of testing the null hypothesis of no change against the alternative of multiple change points in a series of independent observations when the changes are in the same direction. We extend the tests of Terpstra (1952), Jonckheere (1954) and Puri (1965) to the change-point setup. We obtain the asymptotic null distribution of the proposed tests. We also give approximations for their limiting critical values and tables of their finite-sample Monte Carlo critical values. The results of Monte Carlo power studies conducted to compare the proposed tests with some competitors are reported. This research was supported by research grant SS045 of Kuwait University. Acknowledgments. We wish to thank the two referees for their comments and suggestions, which have significantly improved the presentation. We are particularly thankful to one of the referees for suggesting the test statistics T*_{n1}(k) and T*_{n2}(k).
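The Terpstra–Jonckheere idea tests for a monotone ordering across groups; in the change-point setting one scans candidate split points. The Python sketch below is not the paper's T*_{n1}(k) or T*_{n2}(k): it is a simplified single-change-point scan of a standardised two-sample Mann–Whitney count on assumed normal data, included only to convey the structure of such a test.

```python
import numpy as np

def mann_whitney_count(left: np.ndarray, right: np.ndarray) -> float:
    """Number of pairs (i, j) with left[i] < right[j]; ties count 1/2."""
    diff = right[None, :] - left[:, None]
    return np.sum(diff > 0) + 0.5 * np.sum(diff == 0)

def max_changepoint_statistic(x: np.ndarray) -> tuple[float, int]:
    """Scan all candidate change points k, standardise the two-sample
    Mann-Whitney count at each split, and return the maximum."""
    n = len(x)
    best, best_k = -np.inf, -1
    for k in range(1, n):
        n_left, n_right = k, n - k
        u = mann_whitney_count(x[:k], x[k:])
        mean = n_left * n_right / 2.0
        var = n_left * n_right * (n + 1) / 12.0   # no-ties variance formula
        z = (u - mean) / np.sqrt(var)
        if z > best:
            best, best_k = z, k
    return best, best_k

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 40), rng.normal(1, 1, 40)])
stat, khat = max_changepoint_statistic(x)
print(f"max standardised statistic {stat:.2f} at k = {khat}")
```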

6.
This paper constructs a duopoly model considering corporate social responsibility (CSR) and the market's sensitivity to CSR (e), and analyzes the equilibrium results, the condition for implementing CSR, and the optimal CSR level (β*) under Model CC (both enterprises implement CSR) and Model CN (only one enterprise implements CSR). The results show that β* is affected by the competitor and by e. The sensitivity e, the marginal cost (c) and the cost difference affect the equilibrium results and the comparative results. Reducing c and improving e can promote social welfare. Consumer surplus is highest under Model CC. CSR has a negative effect on social welfare under certain conditions.

7.
Rainer Göb 《Metrika》1997,45(1):131-169
Consider lots of discrete items 1, 2, …, N with quality characteristics x_1, x_2, …, x_N. Let a be a target value for item quality. Lot quality is identified with the average square deviation from target per item in the lot (lot average square deviation from target). Under economic considerations this is an appropriate lot quality indicator if the loss (respectively, the profit) incurred from an item is a quadratic function of x_i − a. The present paper investigates tests of significance on the lot average square deviation z under the following assumptions: the lot is a subsequence of a process of production, storage and transport; the random quality characteristics of items resulting from this process are i.i.d. with normal distribution N(μ, σ²); the target value a coincides with the process mean μ.

8.
K. Murari 《Metrika》1972,18(1):110-119
Summary This paper studies the steady-state behaviour of a discrete-time, single-channel, first-come-first-served queueing problem wherein (i) the arrivals at two consecutive time-marks are correlated, (ii) the service is accomplished in phases, and (iii) the completions of phases at two consecutive time-marks are correlated. The probability generating function (p.g.f.) of the number of phases waiting and in service is obtained. Further, the p.g.f. of the queue length is obtained for the case when each unit demands only one phase of service, and the mean queue length is derived therefrom. Finally, the p.g.f. and the mean queue length are discussed for the special cases (i) r=0, R≠0, (ii) r≠0, R=0, (iii) r=0, R=0, (iv) r≠0, R=−1, (v) r=0, R=−1, (vi) r=−1, R≠0, (vii) r=−1, R=0, (viii) r=1, R≠0, (ix) r=1, R=0, (x) r≠0, R=1, (xi) r=0, R=1, where r and R are the respective coefficients of correlation between arrivals and between completions of phases at two consecutive time-marks.

9.
A periodic replacement model for a multi-unit system subject to failure rate interaction between units is presented. In an N-unit system, one unit is dominant and the others are secondary. When a secondary unit fails, it increases the failure rate of the dominant unit by some amount, while failure of the dominant unit puts the system into total failure. Even without failure rate interaction, the failure rates of all units increase with age. All single failures of secondary units are assumed to be corrected by minimal repairs. The system is replaced at age T, or on total failure, whichever occurs first. The aim of this paper is to derive the expected cost rate per unit time as a criterion of optimality and to seek the optimal period T* that minimizes that cost rate.
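The structure of the optimization (balancing planned-replacement cost against failure cost over a renewal cycle) can be illustrated with the classical age-replacement cost-rate formula. The Python sketch below is not the paper's interaction model: it assumes a single Weibull-distributed total-failure time and illustrative costs, and finds T* numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Illustrative stand-in: total system failure time ~ Weibull(shape, scale).
shape, scale = 2.0, 10.0
c_preventive, c_failure = 1.0, 5.0     # assumed replacement costs

def F(t):  # CDF of the total-failure time
    return 1.0 - np.exp(-(t / scale) ** shape)

def cost_rate(T):
    """Classical age-replacement cost rate:
    (c_f * F(T) + c_p * (1 - F(T))) / E[min(lifetime, T)]."""
    expected_cycle_length, _ = quad(lambda t: 1.0 - F(t), 0.0, T)
    return (c_failure * F(T) + c_preventive * (1.0 - F(T))) / expected_cycle_length

res = minimize_scalar(cost_rate, bounds=(0.1, 50.0), method="bounded")
print(f"approximate T* = {res.x:.2f}, cost rate = {res.fun:.4f}")
```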

10.
Dr. H. Vogt 《Metrika》1973,20(1):114-121
Summary We compare the OC-curves L_{n,c}(p) (1) and L*_{n,c}(p) (2). The first is based on the binomial distribution, while the latter relates to the Poisson distribution and is often used as an approximation. These OC-curves occur in statistical quality control as probabilities for the acceptance of a lot, or as approximations for such probabilities; they are regarded as functions of the fraction defective p. It is shown that the two OC-curves have exactly one intersection point between 0 and 1 if the acceptance number c is ≥ 1 and the sample size n is > c+1. For p between 0 and the intersection point p_s we then have L_{n,c}(p) > L*_{n,c}(p); for p_s < p ≤ 1 it follows that L_{n,c}(p) < L*_{n,c}(p). An interval is given which covers p_s, and an example shows how the results of this paper might be used for the construction of sampling plans.
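As a numerical companion to the abstract, the following Python sketch evaluates both OC-curves with scipy and locates the intersection point p_s; the values n = 50 and c = 2 are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import brentq

n, c = 50, 2   # illustrative sample size and acceptance number

def L_binomial(p):   # exact OC-curve: P(accept lot) = P(X <= c), X ~ Bin(n, p)
    return binom.cdf(c, n, p)

def L_poisson(p):    # Poisson approximation with mean n*p
    return poisson.cdf(c, n * p)

# Locate the sign change of the difference on a grid, then refine with brentq.
grid = np.linspace(1e-3, 0.999, 2000)
diff = L_binomial(grid) - L_poisson(grid)
i = np.flatnonzero(np.diff(np.sign(diff)))[0]
p_s = brentq(lambda p: L_binomial(p) - L_poisson(p), grid[i], grid[i + 1])
print(f"intersection point p_s ~ {p_s:.4f}")

# Ordering stated in the abstract: L_binomial > L_poisson below p_s,
# and L_binomial < L_poisson above p_s.
for p in (0.5 * p_s, min(0.999, 1.5 * p_s)):
    print(f"p = {p:.4f}: L_binomial = {L_binomial(p):.6f}, L_poisson = {L_poisson(p):.6f}")
```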

11.
If f is the objective function of the problem and {x_k} and {f(x_k)} are the sequences of approximations, respectively, of an optimal solution x* and of the optimum f(x*), generated by a known feasible-directions algorithm with anti-zigzagging parameters, we show that in order to obtain (a) lim_k f(x_k) = f(x*) it suffices to assume that the anti-zigzagging parameters tend to zero. Moreover, if strict convexity of f is additionally assumed, one also has (b) lim_k x_k = x*. From this last condition we finally derive specific hypotheses, with regard to (b), for the particular case of the stochastic transportation problem.
Summary The aim of the present paper is to analyze, without differentiability of the objective function f, the convergence of a known «feasible directions» algorithm for constrained optimization problems with linear constraints [8], 6.5.2. In these circumstances (i.e. if f is not differentiable) one must, in general, verify some preliminary conditions to obtain convergence [4]. Nevertheless, this is not always easy to accomplish, particularly in the absence of differentiability. Here we establish that, under the convexity assumption for f, the single condition that the anti-zigzagging parameters tend to zero suffices to obtain the convergence of the algorithm, i.e. lim_k f(x_k) = opt., the x_k being the approximate solutions of the problem. The proof is obtained by application of Theorem 24.5 of [6]. Subsequently, we consider whether {x_k} also converges to an optimal solution. Using Corollary 27.2.2 of [6], we establish that under an additional qualification on f, namely strict convexity, the convergence of {x_k} also holds. Finally, we examine the above property for the stochastic transportation problem [1], for which we indicate special conditions under which the latter convergence property holds.


Received 28 April 1982

12.
Abstract In the financial literature, the problem of maximizing the expected utility of terminal wealth has been investigated extensively (for a survey see, e.g., Karatzas and Shreve (1998), p. 153, and references therein) using different approaches. In this paper, we extend the existing literature in two directions. First, we let the utility function U(·) of the financial agent (who is a price taker) be implicitly defined through I(·) = (U′(·))^{-1}, which is assumed to be additively separable, i.e., I(·) = Σ_{k=1}^N I_k(·). Second, we solve the investment problem in the general affine term structure model proposed by Duffie and Kan (1996), in which the functions I_k(·), k = 1, …, N, are associated with HARA utility functions (with possibly different risk aversion parameters), and we show that the utility maximization problem leads to a Riccati ODE. Moreover, we extend to the multi-factor framework the stability result proved in Grasselli (2003), namely the almost-sure convergence of the solution with respect to the parameters of the utility function. Mathematics Subject Classification (2000): 91B28. Journal of Economic Literature Classification: G11.

13.
Michael Kohler 《Metrika》1998,47(1):147-163
Let (X, Y) be a pair of random variables with supp(X) ⊆ [0,1]^l and E Y² < ∞. Let m* be the best approximation of the regression function of (X, Y) by sums of functions of at most d variables (1 ≤ d ≤ l). Estimation of m* from i.i.d. data is considered. For the estimation, interaction least squares splines, which are defined as sums of polynomial tensor product splines of at most d variables, are used. The knot sequences of the tensor product splines are chosen equidistant. Complexity regularization is used to choose the number of knots and the degree of the splines automatically, using only the given data. Without any additional condition on the distribution of (X, Y), the weak and strong L2-consistency of the estimate is shown. Furthermore, for every p ≥ 1 and every distribution of (X, Y) with supp(X) ⊆ [0,1]^l, Y bounded and m* p-smooth, the integrated squared error of the estimate achieves the (optimal) rate up to a logarithmic factor.

14.
Bayesian Hypothesis Testing: a Reference Approach
For any probability model M = {p(x|θ, ω), θ∈Θ, ω∈Ω} assumed to describe the probabilistic behaviour of data x∈X, it is argued that testing whether or not the available data are compatible with the hypothesis H_0 = {θ = θ_0} is best considered as a formal decision problem on whether to use (a_0), or not to use (a_1), the simpler probability model (or null model) M_0 = {p(x|θ_0, ω), ω∈Ω}, where the loss difference L(a_0, θ, ω) − L(a_1, θ, ω) is proportional to the amount of information δ(θ_0, θ, ω) which would be lost if the simplified model M_0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M_0 whenever the corresponding posterior expectation ∫∫ δ(θ_0, θ, ω) π(θ, ω|x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ_0, x) = ∫ δ π(δ|x) dδ, of the amount of information δ(θ_0, θ, ω) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ_0, x) is measured in units of information, and it is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M_0 should only be rejected if the posterior expected loss of information from using the simplified model M_0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M_0 and, hence, it is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing.

15.
S. Dahel  N. Giri  Y. Lepage 《Metrika》1994,41(1):363-374
Let X be a p-normal random vector with unknown mean and unknown covariance matrix, and let X be partitioned as X = (X^(1), X^(2), …, X^(r)), where X^(j) is a subvector of dimension p_j such that Σ_{j=1}^r p_j = p. We show that the tests obtained by Dahel (1988) are locally minimax. These tests were derived to confront H_0: μ = 0 versus H_1: μ ≠ 0 on the basis of a sample of size N, X_1, …, X_N, drawn from X, and r additional samples of size N_j, U_i^(j), i = 1, …, N_j, drawn from X^(1), …, X^(r) respectively. We assume that the (r+1) samples are independent and that N_j > p_j for j = 0, 1, …, r (N_0 ≡ N and p_0 ≡ p). When r = 2 and p = 2, a Monte Carlo study is performed to compare these tests with the likelihood ratio test (LRT) given by Srivastava (1985). We also show that no locally most powerful invariant test exists for this problem.

16.
This article reviews the content-corrected method for tolerance limits proposed by Fernholz and Gillespie (2001) and addresses some robustness issues that affect the length of the corresponding interval as well as the corrected content value. The content-corrected method for k-factor tolerance limits consists of obtaining a bootstrap-corrected value p* that is robust in the sense of preserving the confidence coefficient for a variety of distributions. We propose several location/scale robust alternatives to obtain robust content-corrected k-factor tolerance limits that produce shorter intervals when outlying observations are present. We analyze Hadamard differentiability to ensure bootstrap consistency for large samples. We define the breakdown point for the particular statistic to be bootstrapped, and we obtain the influence function and the value of the breakdown point for the traditional and the robust statistics. Two examples showing the advantage of using these robust alternatives are also included.

17.
A locally Lipschitz cooperative generalized game is described by its coalition worth function v defined on the set [0,1]^n of generalized (or fuzzy) coalitions of n players. We assume that v is positively homogeneous and locally Lipschitz. We propose Clarke's generalized gradient ∂v(c_N) of v at the coalition c_N = (1,…,1) of all players as a set of solutions, and we study its properties. We point out that it coincides with the core when v is super-additive and with the Shapley value when v is smooth.

18.
Lai  Min-Tsai 《Quality and Quantity》2009,43(3):471-479
In this paper, a repairable two-unit parallel system with failure rate interaction between units is studied. The failure rate interaction is described as follows: whenever unit 1 fails, the failure rate of unit 2 increases by some amount, while a failure of unit 2 induces unit 1 into simultaneous failure. We consider a discrete replacement policy N based on the number of unit-1 failures. The system is replaced at the instant of the N-th failure of unit 1 or on simultaneous failure of the system. Our problem is to determine an optimal replacement policy N* such that the expected cost rate per unit time is minimized. The explicit expression of the expected cost rate per unit time is derived by introducing relative costs, and the corresponding optimal number N* is shown to be finite and unique under some specific conditions.
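The paper derives the expected cost rate analytically; purely as an illustration of how the policy behaves, the Python sketch below simulates renewal cycles under assumed exponential failure times (with the unit-2 rate jumping at each unit-1 failure), estimates the renewal-reward cost rate for each N, and picks the best. All rates and costs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters -- the paper's model is more general than this.
lam1, lam2, delta = 0.5, 0.1, 0.15         # unit-1 rate, baseline unit-2 rate, jump per unit-1 failure
c_repair, c_planned, c_total = 1.0, 5.0, 20.0

def simulate_cycle(N):
    """One renewal cycle under policy N, assuming exponential failure times
    and a unit-2 rate that jumps by `delta` after every unit-1 failure."""
    t, failures, cost = 0.0, 0, 0.0
    while True:
        rate2 = lam2 + failures * delta
        t += rng.exponential(1.0 / (lam1 + rate2))
        if rng.random() < lam1 / (lam1 + rate2):   # next event is a unit-1 failure
            failures += 1
            cost += c_repair
            if failures == N:                      # planned replacement at the N-th failure
                return cost + c_planned, t
        else:                                      # unit-2 failure -> simultaneous (total) failure
            return cost + c_total, t

def cost_rate(N, n_sim=20000):
    costs, lengths = zip(*(simulate_cycle(N) for _ in range(n_sim)))
    return np.mean(costs) / np.mean(lengths)       # renewal-reward estimate of the cost rate

rates = {N: cost_rate(N) for N in range(1, 11)}
N_star = min(rates, key=rates.get)
print(f"estimated optimal N* = {N_star} (cost rate ~ {rates[N_star]:.3f})")
```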

19.
Summary We consider in this paper the transient behaviour of a queueing system in which (i) the input, following a Poisson distribution, arrives in batches of variable size; (ii) the queue discipline is 'first come, first served', it being assumed that the batches are pre-ordered for service purposes; and (iii) the service time distribution is hyper-exponential with n branches. The Laplace transform of the system size distribution is determined by applying the method of generating functions, introduced into queueing theory by Bailey [1]. Assuming steady-state conditions, however, the problem is completely solved, and it is shown that, by suitably defining the traffic intensity factor ρ, the value p_0 of the probability of no delay remains the same in this case of batch arrivals as in the case of single arrivals. The Laplace transform of the waiting time distribution is also calculated in the steady-state case, from which the mean waiting time may be obtained. Some of the known results are derived as particular cases.

20.
Let X_1, X_2, …, X_n be independent identically distributed random vectors in IR^d, d ≥ 1, with sample mean X̄_n and sample covariance matrix S_n. We present a practicable and consistent test for the composite hypothesis H_d: the law of X_1 is a non-degenerate normal distribution, based on a weighted integral of the squared modulus of the difference between the empirical characteristic function of the residuals S_n^{−1/2}(X_j − X̄_n) and its pointwise limit exp(−|t|²/2) under H_d. The limiting null distribution of the test statistic is obtained, and a table with critical values for various choices of n and d based on extensive simulations is supplied.
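When the weight function is taken to be a Gaussian density with smoothing parameter β (as in the Henze–Zirkler variant of this BHEP-type test), the weighted integral has a well-known closed form. The Python sketch below implements that closed form on standardised residuals; the choice of weight and of β = 1 is an assumption for illustration, not necessarily the paper's exact setting.

```python
import numpy as np

def bhep_statistic(X: np.ndarray, beta: float = 1.0) -> float:
    """Weighted L2 distance between the empirical characteristic function of the
    standardised residuals and exp(-|t|^2/2), for a Gaussian weight (closed form)."""
    n, d = X.shape
    S = np.cov(X, rowvar=False, bias=True)            # ML covariance estimate
    evals, evecs = np.linalg.eigh(S)
    S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = (X - X.mean(axis=0)) @ S_inv_sqrt             # residuals S_n^{-1/2}(X_j - mean)
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    sq_norms = np.sum(Y ** 2, axis=1)
    b2 = beta ** 2
    term1 = np.exp(-b2 * sq_dists / 2.0).sum() / n
    term2 = 2.0 * (1.0 + b2) ** (-d / 2.0) * np.exp(-b2 * sq_norms / (2.0 * (1.0 + b2))).sum()
    term3 = n * (1.0 + 2.0 * b2) ** (-d / 2.0)
    return term1 - term2 + term3

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
print(f"BHEP-type statistic (beta = 1): {bhep_statistic(X):.4f}")
```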
