Similar Literature
20 similar documents found (search time: 31 ms)
1.
P. A. Lee, S. H. Ong. Metrika, 1986, 33(1): 1–28
Summary Four bivariate generalisations (Type I–IV) of the non-central negative binomial distribution (Ong/Lee) are considered. The Type I generalisation is constructed using the latent structure model scheme (Goodman), while the Type II generalisation arises from a variation of this scheme. The Type III generalisation is formed by using the method of random elements in common (Mardia). The Type IV is an extension of the Type I generalisation. Properties of these bivariate distributions, including joint central and factorial moments, are discussed, and several recurrence formulae for the probabilities are given. An application to the childhood accident data of Mellinger et al. is considered, and the precision of the Type I maximum likelihood estimates is computed.

2.
S. H. Ong. Metrika, 1987, 34(1): 225–236
Summary In this paper we shall record some facts and further examine certain properties of the non-central negative binomial (NNB) distribution (the Laguerre series distribution of Gurland, Chen and Hernandez 1983). We consider, among others, a stochastic formulation (birth-and-death process), the series expansion of the probability distribution and the corresponding series expansion of a generalized exponential distribution (Ong/Lee 1986), and the connection of the NNB distribution with the non-central beta, gamma and non-central gamma distributions. A four-parameter version of the NNB distribution is also presented.

3.
Decisions of households regarding housing consumption should be viewed as decisions within an overall housing career. A model of household residential mobility is derived which may be empirically investigated through a generalisation of competing risk analysis. It is explained how the effects of omitted variables may be dealt with by means of a non-parametric characterisation of their multivariate distribution within a marginal likelihood framework. The problem of initial conditions is discussed in relation to the design of the analysis. The model is applied to the residence histories of a sample of households from the Michigan Panel Study of Income Dynamics.

4.
It has remained an open question as to whether the results of Milgrom–Weber [Milgrom, P.R., Weber, R.J., 1985. Distributional strategies for games with incomplete information. Mathematics of Operations Research 10, 619–632] are valid for action sets with a countably infinite number of elements without additional assumptions on the abstract measure space of information. In this paper, we give an affirmative answer to this question as a consequence of an extension of a theorem of Dvoretzky, Wald and Wolfowitz (henceforth DWW) due to Edwards [Edwards, D.A., 1987. On a theorem of Dvoretsky, Wald and Wolfowitz concerning Liapunov measures. Glasgow Mathematical Journal 29, 205–220]. We also present a direct elementary proof of the DWW theorem and its extension, one that may be of independent interest.

5.
Summary We derive the detailed correlation structure for the simple "staircase model": a process where white noise is superimposed on a deterministic step function that has equal rises and equal treads. It turns out that this structure is an immediate generalisation of that for a linear trend (which, for discrete data, can alternatively be considered as a step function with equal rises and unit treads). We compare the structure obtained with that for a random walk, with those for a subset of other ARIMA(p, 1, q) models, and with those of general ARIMA(p, d, q) processes with d > 1.
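As an illustrative sketch (not the paper's derivation), the staircase model is easy to simulate and its slowly decaying sample autocorrelation can be inspected directly; the rise, tread, and noise scale below are arbitrary choices:

```python
import numpy as np

def staircase(n_steps, tread, rise, sigma, rng):
    """White noise superimposed on a step function with equal rises/treads."""
    t = np.arange(n_steps * tread)
    deterministic = rise * (t // tread)      # staircase: equal rises, equal treads
    return deterministic + sigma * rng.standard_normal(t.size)

def sample_acf(x, max_lag):
    """Sample autocorrelation of x up to max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k or None], x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
y = staircase(n_steps=50, tread=10, rise=1.0, sigma=0.5, rng=rng)
print(np.round(sample_acf(y, 5), 3))  # high, slowly decaying ACF, as for a linear trend
```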

6.
The Dirichlet-multinomial process can be seen as the generalisation of the binomial model with beta prior distribution when the number of categories is larger than two. In such a scenario, setting informative prior distributions becomes difficult when the number of categories is large, so the need for an objective approach arises. However, what does objective mean in the Dirichlet-multinomial process? To deal with this question, we study the sensitivity of the posterior distribution to the choice of an objective Dirichlet prior from those presented in the available literature. We illustrate the impact of the selection of the prior distribution in several scenarios and discuss the most sensible ones.
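A minimal sketch of the sensitivity question, assuming three commonly proposed objective priors (uniform Dir(1), Jeffreys Dir(1/2), and Dir(1/K)); the toy counts are invented, and the conjugate update Dirichlet(α) → Dirichlet(α + counts) is all that is needed:

```python
import numpy as np

counts = np.array([7, 2, 1, 0, 0])   # observed category counts (toy data)
K = counts.size

# Common "objective" Dirichlet priors discussed in the literature:
priors = {"uniform Dir(1)": 1.0, "Jeffreys Dir(1/2)": 0.5, "Dir(1/K)": 1.0 / K}

for name, a in priors.items():
    alpha_post = a + counts                   # conjugate Dirichlet posterior
    post_mean = alpha_post / alpha_post.sum()
    print(f"{name:18s} posterior mean: {np.round(post_mean, 3)}")
```

With sparse counts like these, the three posterior means differ noticeably on the zero-count categories, which is exactly the sensitivity the abstract studies.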

7.
Non-negative matrix factorisation (NMF) is an increasingly popular unsupervised learning method. However, parameter estimation in the NMF model is a difficult high-dimensional optimisation problem. We consider algorithms of the alternating least squares type. Solutions to the least squares problem fall in two categories. The first category is iterative algorithms, which include algorithms such as the majorise–minimise (MM) algorithm, coordinate descent, gradient descent and the Févotte-Cemgil expectation–maximisation (FC-EM) algorithm. We introduce a new family of iterative updates based on a generalisation of the FC-EM algorithm. The coordinate descent, gradient descent and FC-EM algorithms are special cases of this new EM family of iterative procedures. Curiously, we show that the MM algorithm is never a member of our general EM algorithm. The second category is based on cone projection. We describe and prove a cone projection algorithm tailored to the non-negative least squares problem. We compare the algorithms on a test case and on the problem of identifying mutational signatures in human cancer. We generally find that cone projection is an attractive choice. Furthermore, in the cancer application, we find that a mix-and-match strategy performs better than running each algorithm in isolation.
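As a concrete reference point, here is a minimal sketch of the classic multiplicative (MM) updates for Frobenius-norm NMF; this is the standard Lee-Seung scheme, not the FC-EM family introduced in the paper, and the dimensions and iteration count are arbitrary:

```python
import numpy as np

def nmf_mm(V, r, n_iter=500, eps=1e-12, seed=0):
    """Frobenius-norm NMF via the classic multiplicative (MM) updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_mm(V, r=3)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```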

8.
In this article, we provide a link between the Shapley value in cooperative game theory and the capital asset pricing model (CAPM) in finance. In particular, the Shapley value of a suitably defined cooperative game is closely related to the beta factor in the CAPM. The beta factor for any given security may be interpreted as the asset's fairly allocated share of the market risk or as the asset's average marginal contribution to the market risk, respectively. Other fairness properties and axioms of the Shapley value may be reinterpreted in this context to attain a deeper understanding of the beta factor and the connotation of systematic risk. Our game theoretic approach further allows for a generalisation of the CAPM with respect to arbitrary risk measures other than variance. Last but not least, we discuss the volatility of an asset's theoretical fair assessment of risk and of its systematic risk, respectively. This result lends itself to empirical investigation on real stock markets.
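The link is easy to see for the variance risk measure: in the game v(S) = Var(Σ_{j∈S} X_j), asset i's average marginal contribution works out to Cov(X_i, X_market), so dividing by the market variance recovers the familiar beta. The sketch below checks this numerically on simulated returns (the covariance matrix and equal weighting are arbitrary illustrative choices, not the paper's setup):

```python
import math
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
# Simulated return histories for 4 assets (stand-in for market data).
R = rng.multivariate_normal(np.zeros(4),
                            [[1.0, 0.3, 0.2, 0.1],
                             [0.3, 1.0, 0.4, 0.2],
                             [0.2, 0.4, 1.0, 0.3],
                             [0.1, 0.2, 0.3, 1.0]], size=5000)
n = R.shape[1]
cov = np.cov(R.T)

def v(S):
    """Market-risk game: variance of the summed returns of coalition S."""
    return cov[np.ix_(S, S)].sum() if S else 0.0

# Shapley value = average marginal contribution over all player orderings.
shapley = np.zeros(n)
for order in permutations(range(n)):
    S = []
    for i in order:
        shapley[i] += v(S + [i]) - v(S)
        S.append(i)
shapley /= math.factorial(n)

market = R.sum(axis=1)
beta_shapley = shapley / market.var(ddof=1)
beta_classic = np.array([np.cov(R[:, i], market)[0, 1]
                         for i in range(n)]) / market.var(ddof=1)
print(np.round(beta_shapley, 4))
print(np.round(beta_classic, 4))  # matches: Shapley share of variance = Cov(X_i, market)
```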

9.
The innovations representation for a local linear trend can adapt to long run secular and short term transitory effects in the data. This is illustrated by the theoretical power spectrum for the model, which may possess considerable power at frequencies that might be associated with cycles of several years' duration. Whilst advantageous for short term forecasting, the model may be of less use when interest is in the underlying long run trend in the data. In this paper we propose a generalisation of the innovations representation for a local linear trend that is appropriate for representing short, medium and long run trends in the data.
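For reference, a minimal simulation of the standard local linear trend in state-space form, showing the concentration of spectral power at low frequencies; the variance parameters are arbitrary, and this is the baseline model, not the generalisation the paper proposes:

```python
import numpy as np

def simulate_llt(n, q_level=0.1, q_slope=0.01, r_obs=1.0, seed=0):
    """Local linear trend: level_t = level_{t-1} + slope_{t-1} + xi_t,
    slope_t = slope_{t-1} + zeta_t, observed y_t = level_t + eps_t."""
    rng = np.random.default_rng(seed)
    level, slope = 0.0, 0.0
    y = np.empty(n)
    for t in range(n):
        level += slope + rng.normal(0, np.sqrt(q_level))
        slope += rng.normal(0, np.sqrt(q_slope))
        y[t] = level + rng.normal(0, np.sqrt(r_obs))
    return y

y = simulate_llt(200)
freqs = np.fft.rfftfreq(y.size)
power = np.abs(np.fft.rfft(y - y.mean())) ** 2
# Low-frequency power dominates, consistent with long-run trend behaviour.
print("share of power below freq 0.05:", power[freqs < 0.05].sum() / power.sum())
```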

10.
Quality management practitioners in the USA, such as W. Edwards Deming, strongly advocate the Japanese model of supplier relationships, recommending substantial specific investment in a single supplier for improved co-ordination and higher quality. But the strategy literature and conventional wisdom favor multiple sourcing, suggesting that a high level of specific investment in a sole source will lead to problems with supplier performance. Using agency theory, we construct a model to evaluate the trade-off between the costs of setting up and coordinating with suppliers and the incentive for performance provided by competition. We find that the validity of Deming's Point Four, that sole sourcing is more profitable than competitive sourcing, depends on parameters such as profit sensitivity to supplier performance.

11.
Book Reviews
Books reviewed in this article: The Control of Industrial Relations in Large Companies: an Initial Analysis of the Second Company Level Industrial Relations Survey (Warwick Papers in Industrial Relations No. 45), by Paul Marginson, Peter Armstrong, Paul Edwards and John Purcell with Nancy Hubbard.

12.
Abstract. M. Reynolds and M. Edwards, commenting on R. J. Cebula's study of geographic differences in living costs in states with Right-to-Work Laws, seek to extend his results and explore the relevance of alternative variables. Cebula, in reply, addresses the comment and re-estimates the living cost impact of such laws, taking into account additional factors. Even after allowing for additional South/non-South differences, Cebula reports, the original basic model is resilient.

13.
A method, which we believe is simpler and more transparent than the one due to McCullagh (1984), is described for obtaining the cumulants of a scalar multivariate stochastic Taylor expansion. Its generalisation is also suggested. An important feature, previously not reported, is that the expansion of every cumulant of order ≥ 2 is made up of separate subseries. In order to handle certain frequently occurring sums over permutations of members of compound index sets, we introduce a new notation [m]*, where m is a positive integer.

14.
Quality & Quantity - Lindner's (Psychologische Beiträge 26:393–415, 1984) test is a generalisation of Fisher's exact test for 2 × 2 contingency tables to 2 × k...
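As an illustration of what such a generalisation involves, the sketch below implements an exact conditional test for a 2 × k table by enumerating all tables with the observed margins (a Freeman-Halton-style construction; whether it coincides with Lindner's statistic in every detail is not something the truncated snippet above confirms):

```python
from math import comb
from itertools import product

def exact_2xk_test(row1, row2):
    """Exact conditional test for a 2 x k contingency table: enumerate every
    table with the observed margins and sum the (multivariate hypergeometric)
    probabilities that do not exceed that of the observed table."""
    cols = [a + b for a, b in zip(row1, row2)]
    n1 = sum(row1)
    denom = comb(sum(cols), n1)

    def prob(a):                       # P(first row = a | fixed margins)
        p = 1
        for c, x in zip(cols, a):
            p *= comb(c, x)
        return p / denom

    p_obs = prob(row1)
    return sum(prob(a)
               for a in product(*(range(c + 1) for c in cols))
               if sum(a) == n1 and prob(a) <= p_obs + 1e-12)

print(exact_2xk_test([8, 3, 1], [2, 6, 7]))   # toy 2 x 3 table
```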

15.
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record from being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no record can be uniquely distinguished from at least k−1 other records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge-cutting algorithm is designed to divide the complete graph into multiple trees/components. The large components with size bigger than 2k−1 are subsequently split to guarantee that each resulting component has a vertex number between k and 2k−1. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and a theoretical performance guarantee of O(k). The empirical experiments show that our approach results in substantial improvements over the baseline heuristic algorithms, as well as the bottom-up approach with the same approximation bound O(k). Compared to the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy makes our approach achieve similar performance in terms of information loss while spending much less computing time. This demonstrates that our approach is a strong choice for the k-anonymity problem when both data utility and runtime need to be considered, especially when k is smaller than 50 and the record set is big enough for runtime to matter.
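The final generalisation step on equivalence classes can be illustrated with a deliberately simplified sketch: records are grouped into classes of size between k and 2k−1, and the numeric quasi-identifier is replaced by its within-group range. The sort-based grouping below is a naive stand-in for the article's graph partitioning, and the toy data are invented:

```python
def generalise_groups(records, k):
    """Toy k-anonymiser.  Each record is (quasi_identifier, sensitive_value).
    Sort by quasi-identifier, cut into groups of size k (the last group
    absorbs the remainder, so every group has size in [k, 2k-1]), then
    generalise the quasi-identifier to its min-max range within the group."""
    assert len(records) >= k
    recs = sorted(records)                    # naive stand-in for graph partitioning
    n_groups = len(recs) // k
    groups = [recs[i * k:(i + 1) * k] for i in range(n_groups - 1)]
    groups.append(recs[(n_groups - 1) * k:])  # size k..2k-1
    out = []
    for g in groups:
        lo, hi = min(r[0] for r in g), max(r[0] for r in g)
        out += [(f"[{lo}-{hi}]",) + r[1:] for r in g]
    return out

data = [(34, "flu"), (29, "cold"), (41, "flu"), (36, "asthma"),
        (30, "flu"), (45, "cold"), (38, "flu")]
for row in generalise_groups(data, k=3):
    print(row)   # every quasi-identifier value now shared by >= 3 records
```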

16.
17.
In this paper, I show a generalisation of the negative relation of traditional accruals and percent accruals with future returns in 11 of 16 European countries. Positive abnormal returns from hedge portfolios on both accrual measures summarise the economic significance of this generalisation. The magnitude of returns obtained from traditional accruals is higher than that obtained from percent accruals, contrary to existing evidence from the U.S. capital market. The magnitude of the accrual effect on stock returns based on both accrual measures is stronger in countries with higher individualism, lower uncertainty avoidance, higher equity-market development, higher equity-market liquidity, lower transaction costs, higher analyst coverage, lower analyst optimism, and lower ownership concentration. In markets where minorities have legal protection against expropriation by corporate insiders and where accrual accounting is permitted, the accrual effect based only on percent accruals is positive. Earnings opacity does not appear to exhibit a significant influence. Overall, the evidence suggests that cross-country differences in culture, equity-market setting, analysts' research output, investor protection, and ownership structure play an important role in explaining variation in the magnitude of the accrual anomaly in Europe.

18.
Summary The well-known probability distribution of first-arrival times of a particle undergoing random walk or Brownian movement in one dimension is extended to allow for steps in series, each in a different medium. Previously this led to considering a certain distribution defined by its cumulants, which form a simple series generalising that for the known distribution. This is illustrated by the particular case of two first passages in series. Approximations to the probability (density) curves are found, each of which consists of a sharp peak followed by a long tail where the ordinates are very nearly proportional to t^(-W), W ≥ 3/2. A generalisation can yield smaller W, down to ca. 0.1. It is concluded that this explains why negative powers of time are found in so many physiological clearance curves of all kinds. Numerical tables are based on distributions with very small step times, and give ones built up from the sum of a varying number of steps. The parameters of the triangle formed by the inflection tangents are given in order to describe the peaks.
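The t^(-W) tail with W = 3/2 for a single first passage is easy to check numerically: for standard Brownian motion, the first hitting time of level a > 0 has the same law as (a/Z)² with Z standard normal, which gives exact samples:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0
T = (a / np.abs(rng.standard_normal(200_000))) ** 2   # exact first-passage samples

# Tail of the survival function: P(T > t) ~ c * t^(-1/2),
# so the density behaves like t^(-3/2), i.e. W = 3/2.
t_grid = np.logspace(1, 3, 10)
surv = np.array([(T > t).mean() for t in t_grid])
slope = np.polyfit(np.log(t_grid), np.log(surv), 1)[0]
print("tail exponent of P(T > t):", round(slope, 2))   # close to -0.5
```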

19.
The Borda rule, originally defined on profiles of individual preferences modelled as linear orders over the set of alternatives, is one of the most important voting rules. But voting rules often need to be used on preferences of a different format as well, such as top-truncated orders, where agents rank just their most preferred alternatives. What is the right generalisation of the Borda rule to such richer models of preference? Several suggestions have been made in the literature, typically considering specific contexts where the rule is to be applied. In this work, taking an axiomatic perspective, we conduct a principled analysis of the different options for defining the Borda rule on top-truncated preferences.
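A sketch of two generalisations commonly discussed for top-truncated ballots: give unranked alternatives zero points, or split the leftover points equally among them. Which definition satisfies which axioms is the subject of the article, not of this snippet:

```python
def borda_truncated(ballots, alternatives, unranked="zero"):
    """Borda scores on top-truncated ballots.  A full ranking of m
    alternatives gives m-1, m-2, ..., 0 points; two common treatments
    of unranked alternatives:
      - "zero":    unranked alternatives get 0 points;
      - "average": the points not handed out are split equally."""
    m = len(alternatives)
    scores = {alt: 0.0 for alt in alternatives}
    for ballot in ballots:
        for pos, alt in enumerate(ballot):
            scores[alt] += m - 1 - pos
        rest = [alt for alt in alternatives if alt not in ballot]
        if unranked == "average" and rest:
            leftover = sum(range(m - len(ballot)))   # points left undistributed
            for alt in rest:
                scores[alt] += leftover / len(rest)
    return scores

ballots = [["a", "b"], ["c"], ["b", "a", "c"]]
print(borda_truncated(ballots, ["a", "b", "c"], "zero"))
print(borda_truncated(ballots, ["a", "b", "c"], "average"))
```

Note how the two rules already disagree on this tiny profile, which is why the choice of generalisation matters.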

20.
Summary A decision process is considered which consists of two steps: first, a null hypothesis H0 is to be tested. If H0 is rejected, a decision is to be made as to which of the alternative hypotheses H1, H2, ..., Hk is valid. This second step is called "classification". It is assumed that, in case H0 is not valid, each of the alternative hypotheses H1, H2, ..., Hk has the same probability. Starting with this assumption, an optimal decision process is developed which has a specified level of significance α (i.e. by which the null hypothesis H0 is rejected with probability α, if it is valid), and for which the probability of a correct classification is a maximum in the case where the null hypothesis is not valid. This decision process rests on a generalisation of the fundamental lemma of Neyman and Pearson, similar to that used in discriminant analysis.
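A toy Gaussian version of the scheme, assuming a z-test of H0: μ = 0 at level α followed by maximum-likelihood classification among k candidate means with equal weights; this illustrates the two-step structure, not the paper's optimal rule:

```python
import numpy as np
from scipy import stats

def test_then_classify(x, alt_means, alpha=0.05, sigma=1.0):
    """Step 1: z-test of H0: mu = 0 at level alpha (known sigma).
    Step 2: if H0 is rejected, pick the alternative mean with the
    highest likelihood (equal weight on H1..Hk, as in the abstract)."""
    n = len(x)
    z = np.sqrt(n) * np.mean(x) / sigma
    if abs(z) <= stats.norm.ppf(1 - alpha / 2):
        return "H0"
    ll = [stats.norm.logpdf(x, mu, sigma).sum() for mu in alt_means]
    return f"H{1 + int(np.argmax(ll))}"

rng = np.random.default_rng(0)
print(test_then_classify(rng.normal(1.0, 1.0, size=30),
                         alt_means=[-1.0, 0.5, 1.5]))
```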
