Similar Literature (20 records found)
1.
A frequently occurring problem is to find the maximum likelihood estimate (MLE) of p subject to p ∈ C (where C ⊆ P, the set of probability vectors in R^k). The problem has been discussed by many authors, who mainly focused on the case where p is restricted by linear constraints or log-linear constraints. In this paper, we establish the relationship between the maximum likelihood estimation of p restricted by p ∈ C and the EM algorithm, and demonstrate that the maximum likelihood estimator can be computed through the EM algorithm (Dempster et al. in J R Stat Soc Ser B 39:1–38, 1977). Several examples are analyzed by the proposed method.
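The EM connection can be illustrated with the classic genetic-linkage example from Dempster et al. (1977), where a multinomial MLE is computed by completing the data (a standard textbook sketch, not the constrained-MLE construction of the paper above):

```python
import numpy as np

# Classic Dempster-Laird-Rubin (1977) worked example: multinomial counts y
# with cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4); the MLE of t
# is computed by EM after splitting the first cell into latent counts.
y = np.array([125.0, 18.0, 20.0, 34.0])

def em_linkage(y, t=0.5, iters=100):
    for _ in range(iters):
        # E-step: expected latent count in the "t/4" part of cell 1
        x2 = y[0] * (t / 4) / (0.5 + t / 4)
        # M-step: binomial-type MLE given the completed data
        t = (x2 + y[3]) / (x2 + y[1] + y[2] + y[3])
    return t

theta_hat = em_linkage(y)
print(round(theta_hat, 4))  # converges to about 0.6268
```

Each EM iteration increases the observed-data likelihood, and the fixed point is the MLE of t.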

2.
Let (P_S) be a family of choice probabilities that are caused by compatible latent factors. Then there should exist an underlying common probability measure P that in some degree induces each choice probability P_S. The problem of the existence of P will be considered with respect to a general concept of induced probability. In addition, the relevance of this concept for mathematical economics and the social sciences will be discussed.

3.
4.
When two surveys carried out separately in the same population have common variables, it might be desirable to adjust each survey's weights so that they give equal estimates for the common variables. This problem has been studied extensively and has often been referred to as alignment or numerical consistency. We develop a design-based empirical likelihood approach for alignment and estimation of complex parameters defined by estimating equations. We focus on a general case when a single set of adjusted weights, which can be applied to both common and non-common variables, is produced for each survey. The main contribution of the paper is to show that the empirical log-likelihood ratio statistic is pivotal in the presence of alignment constraints. This pivotal statistic can be used to test hypotheses and derive confidence regions. Hence, the empirical likelihood approach proposed for alignment possesses the self-normalisation property, under a design-based approach. The proposed approach accommodates large sampling fractions, stratification and population level auxiliary information. It is particularly well suited for inference about small domains, when data are skewed. It includes implicit adjustments when the samples considerably differ in size. The confidence regions are constructed without the need for variance estimates, joint-inclusion probabilities, linearisation and re-sampling.
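For intuition, the self-normalising character of an empirical log-likelihood ratio can be seen in the simplest i.i.d. setting, a scalar mean (a sketch only; the paper's design-based, alignment-constrained version is substantially more involved):

```python
import numpy as np

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu,
    solving the Lagrange multiplier equation by bisection."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf  # mu outside the convex hull of the data
    # lambda must keep all EL weights positive: 1 + lam*d_i > 0
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    g = lambda lam: np.sum(d / (1 + lam * d))  # decreasing in lam
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return 2 * np.sum(np.log1p(lam * d))
```

Under H0 the statistic is asymptotically chi-squared with 1 degree of freedom, with no variance estimate needed, which is the self-normalisation property in miniature.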

5.
ϕ-divergence statistics quantify the divergence between a joint probability measure and the product of its marginal probabilities on the basis of contingency tables. Asymptotic properties of these statistics are investigated considering either random sampling or stratified random sampling with proportional allocation and independence among strata. Finally, some tests of the hypothesis of independence are presented. The research in this paper was supported in part by DGICYT Grants No. PS91-0387 and No. PB91-0155. Their financial support is gratefully acknowledged.
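Many common ϕ-divergence statistics for independence belong to the Cressie-Read power-divergence family, which SciPy implements; a toy 2×2 table illustrates (the table values are made up):

```python
import numpy as np
from scipy.stats import power_divergence, chi2

# Phi-divergence statistics for independence in a contingency table.
# The Cressie-Read power-divergence family covers Pearson's X^2
# (lambda_=1) and the likelihood-ratio G^2 (lambda_=0) as special cases.
table = np.array([[30.0, 10.0], [20.0, 40.0]])
n = table.sum()
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n

for lam, name in [(1.0, "Pearson X^2"), (0.0, "G^2")]:
    stat, _ = power_divergence(table.ravel(), expected.ravel(), lambda_=lam)
    # df = (r-1)(c-1) = 1 for a 2x2 table, so compute the p-value directly
    p = chi2.sf(stat, df=1)
    print(f"{name}: stat={stat:.3f}, p={p:.4f}")
```

The p-value returned by `power_divergence` itself assumes a goodness-of-fit degrees-of-freedom count, so for an independence test it is recomputed here against the chi-squared distribution with (r−1)(c−1) degrees of freedom.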

6.
Pearn et al. (1999) considered a capability index C″pmk, a new generalization of Cpmk for processes with asymmetric tolerances. In this paper, we compare C″pmk with other existing generalizations of Cpmk on the accuracy of measuring process performance for processes with asymmetric tolerances, and show that the new generalization C″pmk is superior. Under the assumption of normality, we derive explicit forms of the cumulative distribution function and the probability density function of the estimated index Ĉ″pmk, and show that both can be expressed in terms of a mixture of the chi-square distribution and the normal distribution. These explicit forms considerably simplify the analysis of the statistical properties of the estimated index. Received April 2000
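For orientation, a plug-in estimate of the ordinary (symmetric-tolerance) Cpmk can be sketched as follows; Pearn et al.'s C″pmk replaces the numerator and denominator with asymmetric-tolerance analogues, which are not reproduced here:

```python
import numpy as np

def cpmk_hat(x, lsl, usl, target):
    """Plug-in estimator of the (symmetric-tolerance) index Cpmk.
    Pearn et al.'s C''_pmk modifies this for asymmetric tolerances;
    the plain Cpmk below is shown only as the familiar baseline."""
    mu, s2 = x.mean(), x.var()  # ML variance estimate
    denom = 3 * np.sqrt(s2 + (mu - target) ** 2)
    return min(usl - mu, mu - lsl) / denom

rng = np.random.default_rng(0)
x = rng.normal(10.0, 0.5, size=200)  # simulated in-control process
print(cpmk_hat(x, lsl=8.0, usl=12.0, target=10.0))
```

The (μ − target)² term in the denominator penalises departure from the target, which is what distinguishes Cpmk from Cpk.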

7.
This paper provides closed-form likelihood approximations for multivariate jump-diffusion processes widely used in finance. For a fixed order of approximation, the maximum-likelihood estimator (MLE) computed from this approximate likelihood achieves the asymptotic efficiency of the true yet uncomputable MLE as the sampling interval shrinks. This method is used to uncover the realignment probability of the Chinese Yuan. Since February 2002, the market-implied realignment intensity has increased fivefold. The term structure of the forward realignment rate, which completely characterizes future realignment probabilities, is hump-shaped and peaks at mid-2004. The realignment probability responds quickly to economic news releases and government interventions.

8.
Xiuli Wang, Gaorong Li & Lu Lin, Metrika (2011) 73(2):171–185
In this paper, we apply the empirical likelihood method to study semi-parametric varying-coefficient partially linear errors-in-variables models. An empirical log-likelihood ratio statistic for the unknown parameter β, which is of primary interest, is suggested. We show that the proposed statistic asymptotically follows a standard chi-square distribution under some suitable conditions, and hence it can be used to construct a confidence region for the parameter β. Some simulations indicate that, in terms of coverage probabilities and average lengths of the confidence intervals, the proposed method performs better than the least-squares method. We also give the maximum empirical likelihood estimator (MELE) for the unknown parameter β, and prove that the MELE is asymptotically normal under some suitable conditions.

9.
Anirban DasGupta, Metrika (2000) 51(3):185–200
In this article we describe some ways to significantly improve the Markov-Gauss-Camp-Meidell inequalities and provide specific applications. We also describe how the improved bounds extend to the multivariate case. Applications include explicit finite-sample construction of confidence intervals for a population mean, upper bounds on a tail probability P(X > k) using the density at k, approximation of P-values, simple bounds on the Riemann zeta function and related series, improvement of Minkowski moment inequalities, and construction of simple bounds on the tail probabilities of asymptotically Poisson random variables. We also describe how a game-theoretic argument shows that our improved bounds always approximate tail probabilities to any specified degree of accuracy. Received: April 1999
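For orientation, the classical (unimproved) Markov and Chebyshev tail bounds can be compared with an exact tail in a few lines (the Exponential(1) choice is arbitrary):

```python
import numpy as np

# Classical tail bounds of the Markov-Chebyshev type, compared with the
# exact tail of an Exponential(1) random variable (mean 1, variance 1).
k = 4.0
markov = 1.0 / k                  # P(X > k) <= E[X] / k
chebyshev = 1.0 / (k - 1.0) ** 2  # P(X > k) <= P(|X-1| > k-1) <= Var/(k-1)^2
exact = np.exp(-k)                # exact tail: P(X > k) = e^{-k}

print(markov, chebyshev, exact)
assert exact <= chebyshev <= markov  # the bounds hold but are loose
```

The looseness of these bounds (0.25 and about 0.11 against an exact tail of about 0.018) is precisely the gap that improved Markov-Gauss-Camp-Meidell-type inequalities aim to close.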

10.
We calculate, by simulations, numerical asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real‐valued parameter b which must be estimated, simple tabulation is not feasible. Partly owing to the presence of this parameter, the choice of model specification for the response surface regressions used to obtain the numerical distribution functions is more involved than is usually the case. We deal with model uncertainty by model averaging rather than by model selection. We make available a computer program which, given the dimension of the problem, q, and a value of b, provides either a set of critical values or the asymptotic P‐value for any value of the likelihood ratio statistic. Copyright © 2012 John Wiley & Sons, Ltd.

11.
We develop a consumer search model in which consumers may remain uncertain about product quality even after inspecting the product. We first consider the postsearch uncertainty regarding vertical quality, and characterize the separating equilibrium in which firms with different quality levels charge different prices. If quality information is not sufficiently transparent after the search, then prices between the low- and the high-quality products can either diverge or converge as the search cost decreases, depending on the degrees of horizontal and vertical product differentiation. We further extend the model to include the postsearch uncertainty about the horizontal match value and to endogenize the firm's quality choice.

12.
The chi-squared goodness-of-fit test (also known as the Pearson X² test) is often used to test whether data are consistent with a specified continuous distribution. The value of X² (and hence its associated probability level) can be altered by the choice of (i) the number of classes and (ii) the class probabilities. The effect on the power of the X² test of varying the number of classes, when class probabilities are chosen to be equal, is investigated.
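The effect of the number of equiprobable classes can be seen directly in a small experiment (a sketch using SciPy; the class counts 4, 8 and 16 are arbitrary choices):

```python
import numpy as np
from scipy.stats import chisquare

# Pearson X^2 goodness-of-fit for a continuous null (Uniform(0,1)) using
# equiprobable classes; the statistic and p-value change with the number
# of classes chosen.
rng = np.random.default_rng(1)
x = rng.beta(2.0, 2.0, size=500)  # true distribution is NOT uniform

for m in (4, 8, 16):
    edges = np.linspace(0.0, 1.0, m + 1)  # equal-probability cells under H0
    observed, _ = np.histogram(x, bins=edges)
    stat, p = chisquare(observed)         # default f_exp: equal counts n/m
    print(m, round(stat, 2), round(p, 4))
```

With equiprobable classes, `chisquare`'s default expected frequencies (the mean count per cell) are exactly the null expectations, so no `f_exp` argument is needed; the power of the test then depends only on the choice of m.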

13.
This paper examines the role of the no-arbitrage condition in financial markets with heterogeneous expectations. We consider a single-period, state-contingent claims model, with M risky securities and S states. There exist two types of heterogeneously informed investors, where the information heterogeneity is defined with respect to either the security payoff matrix, the state probability vector, or state partitions. When the information heterogeneity is defined with respect to either the security payoff matrix or state partitions, the no-arbitrage condition imposes a constraint on the dispersion of information between informed and uninformed investors. Further, the no-arbitrage condition is useful in ascertaining the patterns of heterogeneity among investors that are consistent with equilibrium. However, when the information heterogeneity is defined with respect to state probabilities, the role of the no-arbitrage condition is severely restricted. Finally, the no-arbitrage condition may have important implications for the (necessary and sufficient) conditions for the existence of an equilibrium price vector in financial markets with heterogeneous expectations.

14.
A test statistic is developed for making inference about a block‐diagonal structure of the covariance matrix when the dimensionality p exceeds n, where n = N − 1 and N denotes the sample size. The suggested procedure extends the complete independence results. Because the classical hypothesis testing methods based on the likelihood ratio degenerate when p > n, the main idea is to turn instead to a distance function between the null and alternative hypotheses. The test statistic is then constructed using a consistent estimator of this function, where consistency is considered in an asymptotic framework that allows p to grow together with n. The suggested statistic is also shown to have an asymptotic normality under the null hypothesis. Some auxiliary results on the moments of products of multivariate normal random vectors and higher‐order moments of the Wishart matrices, which are important for our evaluation of the test statistic, are derived. We perform empirical power analysis for a number of alternative covariance structures.

15.
Bernhard F. Arnold, Metrika (1996) 44(1):119–126
In this paper an approach is presented for testing fuzzily formulated hypotheses with crisp data. The quantities α and β, the probabilities of the errors of type I and of type II, are suitably generalized and the concept of a best test is introduced. Within the framework of a one-parameter exponential distribution family the search for a best test is considerably reduced. Furthermore, it is shown under very weak conditions that α and β can simultaneously be diminished by increasing the sample size, even in the case of testing H0 against the omnibus alternative H1: not H0, a result completely different from the case of crisp sets H0 and H1: not H0.

16.
Results on probability integrals of multivariate t distributions are reviewed. The results discussed include: Dunnett and Sobel's probability integrals, Gupta and Sobel's probability integrals, John's probability integrals, Amos and Bulgren's probability integrals, Steffens' non‐central probabilities, Dutt's probability integrals, Amos' probability integral, Fujikoshi's probability integrals, probabilities of cones, probabilities of convex polyhedra, probabilities of linear inequalities, maximum probability content, and Monte Carlo evaluation.
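The Monte Carlo evaluation mentioned above can be sketched via the standard normal/chi-square mixture representation of the multivariate t (a generic sketch, not any particular method from the review):

```python
import numpy as np

def mvt_rect_prob(corr, df, lower, upper, n=200_000, seed=0):
    """Monte Carlo estimate of P(lower < T < upper) for a central
    multivariate t with correlation matrix `corr` and `df` degrees of
    freedom, using the normal/chi-square mixture representation
    T = Z / sqrt(W/df) with Z ~ N(0, corr) and W ~ chi^2_df."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(corr))) @ L.T
    w = rng.chisquare(df, size=n) / df
    t = z / np.sqrt(w)[:, None]
    inside = np.all((t > lower) & (t < upper), axis=1)
    return inside.mean()

corr = np.array([[1.0, 0.5], [0.5, 1.0]])
print(mvt_rect_prob(corr, df=5, lower=-np.inf, upper=1.0))
```

The estimate's standard error shrinks as n^{-1/2}; for sharper accuracy, the quadrature-based integrals surveyed in the review are preferable.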

17.
The credit risk problem is one of the most important issues of modern financial mathematics. Fundamentally it consists in computing the probability that a company defaults on its debt. The problem can be studied by means of Markov transition models. The generalization of these transition models by means of homogeneous semi-Markov models is presented in this paper. The idea is to consider the credit risk problem as a reliability problem. In a semi-Markov environment it is possible to consider transition probabilities that change as a function of the waiting time inside a state. The paper also shows how to apply semi-Markov reliability models in a credit risk environment. In the last section an example of the model is provided. Mathematics Subject Classification (2000): 60K15, 60K20, 90B25, 91B28 Journal of Economic Literature Classification: G21, G33
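The Markov transition view can be sketched with a toy rating chain in which default is absorbing (the paper's semi-Markov model additionally lets transition probabilities depend on waiting time, which this sketch omits; the transition numbers are illustrative only):

```python
import numpy as np

# Rating-transition (Markov) model of credit risk: "D" is an absorbing
# default state, and the t-year default probability of an A-rated firm
# is read off the t-step transition matrix.
states = ["A", "B", "D"]
P = np.array([
    [0.90, 0.08, 0.02],   # from A
    [0.10, 0.80, 0.10],   # from B
    [0.00, 0.00, 1.00],   # default is absorbing
])

Pt = np.linalg.matrix_power(P, 5)
print("5-year default probability from A:", round(Pt[0, 2], 4))
```

Because default is absorbing, the cumulative default probability P^t[A, D] is non-decreasing in the horizon t, which is the reliability-theoretic reading of the problem.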

18.
W. L. Pearn & Chien-Wei Wu, Metrika (2005) 61(2):221–234
Process capability indices have been proposed in the manufacturing industry to provide numerical measures of process reproduction capability; they are effective tools for quality assurance and guidance for process improvement. In process capability analysis, the usual practice for testing capability indices from sample data is based on the traditional frequentist approach. Bayesian statistical techniques are an alternative. Shiau, Chiang and Hung (1999) applied the Bayesian method to the index Cpm and to the index Cpk, but under the restriction that the process mean μ equals the midpoint of the two specification limits, m. We note that this restriction is a rather impractical assumption for most factory applications, since in this case Cpk reduces to Cp. In this paper, we consider testing the most popular capability index, Cpk, in the general situation with no restriction on the process mean, based on the Bayesian approach. The results obtained are more general and practical for real applications. We derive the posterior probability, p, for which the process under investigation is capable, and propose accordingly a Bayesian procedure for capability testing. To make this Bayesian procedure practical for in-plant applications, we tabulate the minimum values of Ĉpk for which the posterior probability p reaches desirable confidence levels with various pre-specified capability levels.
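A generic Monte Carlo version of such a capability-testing calculation might look as follows (under an assumed noninformative prior; the paper derives its posterior probability analytically, and its prior may differ):

```python
import numpy as np

def posterior_prob_capable(x, lsl, usl, c=1.33, draws=100_000, seed=0):
    """Monte Carlo posterior probability that Cpk exceeds c under the
    standard noninformative prior p(mu, sigma^2) proportional to
    1/sigma^2 (a common default; the paper's own prior may differ)."""
    rng = np.random.default_rng(seed)
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
    # posterior draws: sigma^2 ~ (n-1)s^2 / chi^2_{n-1}, mu | sigma^2 ~ N(xbar, sigma^2/n)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=draws)
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    cpk = np.minimum(usl - mu, mu - lsl) / (3 * np.sqrt(sigma2))
    return (cpk > c).mean()

rng = np.random.default_rng(2)
x = rng.normal(10.0, 0.3, size=100)   # a well-centered, tight process
print(posterior_prob_capable(x, lsl=8.0, usl=12.0))
```

The process is declared capable when this posterior probability exceeds a desired confidence level, mirroring the decision rule the abstract describes.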

19.
Tournament outcome uncertainty depends on the design of the tournament and on the relative strengths of the competitors, the competitive balance. A tournament design comprises the arrangement of the individual matches, which we call the tournament structure, the seeding policy and the progression rules. In this paper, we investigate the effect of seeding policy for various tournament structures, while taking account of competitive balance. Our methodology uses tournament outcome uncertainty to consider the effect of seeding policy and other design changes. The tournament outcome uncertainty is measured using the tournament outcome characteristic, the probability Pq,R that a team in the top 100q pre‐tournament rank percentile progresses forward from round R, for all q and R. We use Monte Carlo simulation to calculate the values of this metric. We find that, in general, seeding favours stronger competitors, but that the degree of favouritism varies with the type of seeding. Reseeding after each round favours the strong to the greatest extent. The ideas in the paper are illustrated using the soccer World Cup Finals tournament.
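A minimal Monte Carlo comparison of a seeded versus a random 8-team knockout draw might look like this (the Bradley-Terry win model and the strength values are illustrative assumptions, not the paper's):

```python
import numpy as np

def sim_knockout(order, strength, rng):
    """Play one 8-team single-elimination bracket in the given order;
    match win probabilities follow a Bradley-Terry model."""
    teams = list(order)
    while len(teams) > 1:
        nxt = []
        for a, b in zip(teams[::2], teams[1::2]):
            p = strength[a] / (strength[a] + strength[b])
            nxt.append(a if rng.random() < p else b)
        teams = nxt
    return teams[0]

strength = np.array([8.0, 7, 6, 5, 4, 3, 2, 1])  # team 0 is strongest
seeded = [0, 7, 3, 4, 1, 6, 2, 5]                # standard seeded bracket
rng = np.random.default_rng(3)
n = 40_000
win_seeded = np.mean([sim_knockout(seeded, strength, rng) == 0
                      for _ in range(n)])
win_random = np.mean([sim_knockout(rng.permutation(8), strength, rng) == 0
                      for _ in range(n)])
print(win_seeded, win_random)
```

The seeded bracket keeps the two strongest teams apart until the final, so the top team's tournament win probability is higher than under a uniformly random draw, in line with the finding that seeding favours stronger competitors.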

20.
Parametric mixture models are commonly used in applied work, especially empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the question of inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines the problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
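The parametric bootstrap for a mixing probability can be sketched in a deliberately simplified mixture in which both component densities are fully known (an assumption made here for brevity; the paper's setting includes nuisance parameters and the attendant identification issues):

```python
import numpy as np

def fit_pi(x, iters=200):
    """EM for the mixing probability pi in pi*N(0,1) + (1-pi)*N(3,1),
    with both component densities known (normalising constants cancel)."""
    f0 = np.exp(-0.5 * x ** 2)
    f1 = np.exp(-0.5 * (x - 3.0) ** 2)
    pi = 0.5
    for _ in range(iters):
        r = pi * f0 / (pi * f0 + (1 - pi) * f1)  # E-step responsibilities
        pi = r.mean()                            # M-step
    return pi

def bootstrap_ci(pi_hat, n, B=500, seed=0, level=0.95):
    """Parametric bootstrap: resample from the fitted mixture, refit pi,
    and take percentile quantiles of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(B):
        comp = rng.random(n) < pi_hat
        xb = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))
        reps.append(fit_pi(xb))
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

rng = np.random.default_rng(4)
n, true_pi = 400, 0.7
comp = rng.random(n) < true_pi
x = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))
pi_hat = fit_pi(x)
print(pi_hat, bootstrap_ci(pi_hat, n))
```

Resampling from the fitted parametric model, rather than relying on a pointwise asymptotic approximation, is the basic mechanism behind the fixes the paper proposes; the paper's version must additionally contend with boundary and identification failures absent from this toy case.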


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号