Similar Articles (20 results)
1.
This paper is concerned with the search for locally optimal designs when the observations of the response variable arise from a weighted distribution in the exponential family. Locally optimal designs are derived for regression models in which the response follows a weighted version of the Normal, Gamma, Inverse Gaussian, Poisson or Binomial distribution. Some conditions are given under which the optimal designs for the weighted and original (non-weighted) distributions coincide. An efficiency study is performed to assess the behavior of the D-optimal designs for the original distribution when they are used to estimate models with weighted distributions.

2.
T. P. Hutchinson, Metrika (1981) 28(1): 263-271
Summary  Bivariate distributions, which may be of special relevance to the lifetimes of two components of a system, are derived using the following approach. As the two components are part of one system and therefore exposed to similar conditions of service, there will be a similarity between their lifetimes that is not shared by components belonging to different systems. The lifetime distribution for a given system is assumed to be Gamma in form (this includes the exponential as a special case; extension to the Stacy distribution, which includes the Weibull distribution, is straightforward). The scale parameter of this distribution is itself a random variable, with a Gamma distribution. We thus obtain what might be termed a compound Gamma-Gamma bivariate distribution, whose cumulative distribution function may be expressed in terms of one of the double hypergeometric functions of Appell. Generalised hypergeometric functions play an important part in this paper, and one of Saran's triple hypergeometric functions is obtained when the above model is generalised to permit the scale parameters of the distributions for the two components to be correlated, rather than identical. Work started while the author was with the Transport Studies Group, University College London.
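A minimal simulation sketch of this mixing construction: a shared Gamma rate is drawn per system, and the two component lifetimes are conditionally independent Gammas given that rate. All parameter values and function names here are illustrative, not taken from the paper.

```python
import random

def simulate_system(rng, shape_life=2.0, shape_env=3.0, rate_env=3.0):
    """One system: draw a shared Gamma rate, then two conditionally
    independent Gamma lifetimes with that common scale (the compound
    Gamma-Gamma construction). Parameter names are illustrative."""
    lam = rng.gammavariate(shape_env, 1.0 / rate_env)  # shared environment rate
    return (rng.gammavariate(shape_life, 1.0 / lam),
            rng.gammavariate(shape_life, 1.0 / lam))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

rng = random.Random(42)
t1, t2 = zip(*(simulate_system(rng) for _ in range(20000)))
rho = pearson(t1, t2)  # positive: the shared scale induces dependence
```

The positive correlation arises purely from the shared scale parameter, which is the paper's point about components of one system being exposed to the same conditions of service.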

3.
Maximum likelihood estimates are obtained for long data sets of bivariate financial returns using mixing representation of the bivariate (skew) Variance Gamma (VG) and two (skew) t distributions. By analysing simulated and real data, issues such as asymptotic lower tail dependence and competitiveness of the three models are illustrated. A brief review of the properties of the models is included. The present paper is a companion to papers in this journal by Demarta & McNeil and Finlay & Seneta.
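The mixing representation referred to writes a VG variate as a normal mean-variance mixture with a Gamma mixing variable. A sketch of that representation (parameter values are my own choices for illustration, not the paper's):

```python
import random

def variance_gamma(n, mu=0.0, theta=0.5, sigma=1.0, nu=2.0, seed=0):
    """Normal mean-variance mixture representation of the (skew) VG law:
    X = mu + theta*G + sigma*sqrt(G)*Z, with Gamma mixing variable G of
    shape 1/nu and scale nu, so that E[G] = 1."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        g = rng.gammavariate(1.0 / nu, nu)
        z = rng.gauss(0.0, 1.0)
        draws.append(mu + theta * g + sigma * (g ** 0.5) * z)
    return draws

x = variance_gamma(50000)
m = sum(x) / len(x)  # close to mu + theta = 0.5, since E[G] = 1
var = sum((xi - m) ** 2 for xi in x) / len(x)
kurt_excess = sum((xi - m) ** 4 for xi in x) / (len(x) * var ** 2) - 3.0
```

The positive excess kurtosis illustrates why such mixtures are used for financial returns: the Gamma mixing thickens the tails relative to the normal.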

4.
According to the usual law of small numbers a multivariate Poisson distribution is derived by defining an appropriate model for multivariate Binomial distributions and examining their behaviour for large numbers of trials and small probabilities of marginal and simultaneous successes. The weak limit law is a generalization of Poisson's distribution to larger finite dimensions with arbitrary dependence structure. Compounding this multivariate Poisson distribution by a Gamma distribution results in a multivariate Pascal distribution which is again asymptotically multivariate Poisson. These Pascal distributions contain a class of multivariate geometric distributions. Finally the bivariate Binomial distribution is shown to be the limit law of appropriate bivariate hypergeometric distributions. Proving the limit theorems mentioned here as well as understanding the corresponding limit distributions becomes feasible by using probability generating functions.
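One concrete instance of a bivariate Poisson law with dependence is the common-shock construction X = U + W, Y = V + W with independent Poisson components, which gives Cov(X, Y) = E[W]. This is a standard construction used here only to illustrate the kind of dependent Poisson limit the abstract describes; it is not claimed to be the paper's exact model.

```python
import math
import random

rng = random.Random(7)

def rpois(lam):
    """Knuth's Poisson sampler (adequate for moderate means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

# Common shock: X = U + W, Y = V + W  =>  Cov(X, Y) = c
a, b, c, n = 1.0, 2.0, 1.5, 20000
pairs = [(u + w, v + w)
         for u, v, w in ((rpois(a), rpois(b), rpois(c)) for _ in range(n))]
mx = sum(x for x, _ in pairs) / n   # approx a + c = 2.5
my = sum(y for _, y in pairs) / n   # approx b + c = 3.5
cov = sum((x - mx) * (y - my) for x, y in pairs) / n  # approx c = 1.5
```

The marginals remain Poisson while the covariance equals the mean of the shared component, a simple example of the "arbitrary dependence structure" the limit law allows.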

5.
Journal of Econometrics (1986) 33(3): 341-365
This paper explores the specification and testing of some modified count data models. These alternatives permit more flexible specification of the data-generating process (dgp) than do familiar count data models (e.g., the Poisson), and provide a natural means for modeling data that are over- or underdispersed by the standards of the basic models. In the cases considered, the familiar forms of the distributions result as parameter-restricted versions of the proposed modified distributions. Accordingly, score tests of the restrictions that use only the easily-computed ML estimates of the standard models are proposed. The tests proposed by Hausman (1978) and White (1982) are also considered. The tests are then applied to count data models estimated using survey microdata on beverage consumption.
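A sketch of the kind of score test meant here, in its simplest intercept-only form: the statistic below tests the Poisson restriction against an overdispersed negative binomial-type alternative and uses only the ML estimate of the mean under the null. This standard form (cf. Dean-Lawless / Cameron-Trivedi) is an illustration, not the paper's exact statistic.

```python
import math
import random

def overdispersion_score(y):
    """Score statistic for H0: Poisson against an overdispersed negative
    binomial-type alternative, intercept-only case; asymptotically N(0,1)
    under H0."""
    mu = sum(y) / len(y)
    num = sum((yi - mu) ** 2 - yi for yi in y)
    return num / math.sqrt(2 * len(y) * mu ** 2)

rng = random.Random(1)

def rpois(lam):
    # Knuth's method, adequate for moderate means
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

y_pois = [rpois(5.0) for _ in range(2000)]                       # equidispersed
y_nb = [rpois(rng.gammavariate(2.0, 2.5)) for _ in range(2000)]  # Gamma-Poisson mixture: overdispersed
t_pois = overdispersion_score(y_pois)
t_nb = overdispersion_score(y_nb)
```

Under Poisson data the statistic behaves like a standard normal; under the Gamma-Poisson (negative binomial) alternative it is driven far into the right tail.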

6.
This paper introduces tests for residual serial correlation in cointegrating regressions. The tests are devised in the frequency domain by using spectral measure estimates. The asymptotic distributions of the tests are derived and test consistency is established. The asymptotic distributions are obtained using assumptions and methods that differ from those used in Grenander and Rosenblatt (1957) and Durlauf (1991). Small-scale simulation results are reported to illustrate the finite sample performance of the tests under various distributional assumptions on the data generating process; the distributions considered are the normal and t-distributions. The tests are shown to have stable size at sample sizes as small as 50 or 100, and to be reasonably powerful against ARMA residuals. An empirical application of the tests to 'weak-form' efficiency in the foreign exchange market is also reported.

7.
Hinkley (1977) derived two tests for testing the mean of a normal distribution with known coefficient of variation (c.v.) for right alternatives: the locally most powerful (LMP) test and the conditional test based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one- and two-sided alternatives, as well as the two-sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests; the latter three tests do not use the information on the c.v. A normal approximation is used for the null distribution of the test statistics, except for the t test. Simulation results indicate that all the tests maintain the type I error rates, that is, the attained level is close to the nominal level of significance. The power functions of the tests are estimated through simulation. The power comparison indicates that for one-sided alternatives the LMP test is the best, whereas for two-sided alternatives the LR or Wald test is the best. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ; the power difference is quite large in several simulation configurations. Further, the t, sign and Wilcoxon signed rank tests have considerably lower power even for alternatives far from the null hypothesis when the c.v. is large. To study the sensitivity of the tests to violation of the normality assumption, the type I error rates are estimated on observations from lognormal, gamma and uniform distributions. The newly derived tests maintain the type I error rates for moderate values of the c.v.
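A sketch of how knowledge of the c.v. can be exploited: under H0 the standard deviation is pinned down as σ = c.v. × μ0, so a simple z-type statistic needs no variance estimate. This is a simplified stand-in for the paper's Wald/LMP statistics, and all numerical settings below are my own.

```python
import math
import random
import statistics

def z_known_cv(x, mu0, cv):
    """z-type statistic for H0: mu = mu0 when the coefficient of
    variation is known, so sigma = cv * mu0 under the null (a simplified
    stand-in for the paper's Wald/LMP tests)."""
    return math.sqrt(len(x)) * (statistics.fmean(x) - mu0) / (cv * mu0)

rng = random.Random(7)
cv, mu0, mu1, n = 0.3, 10.0, 11.0, 400
x0 = [rng.gauss(mu0, cv * mu0) for _ in range(n)]  # H0 true
x1 = [rng.gauss(mu1, cv * mu1) for _ in range(n)]  # H1 true
z0 = z_known_cv(x0, mu0, cv)   # approximately N(0, 1)
z1 = z_known_cv(x1, mu0, cv)   # centred near sqrt(n)*(mu1 - mu0)/(cv*mu0)
```

Because σ is known under the null, no degrees of freedom are lost to variance estimation, which is the source of the power advantage over the t test reported in the abstract.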

8.
This paper studies subsampling VAR tests of linear constraints as a way of finding approximations of their finite sample distributions that are valid regardless of the stochastic nature of the data generating processes for the tests. In computing the VAR tests with subsamples (i.e., blocks of consecutive time series), both the tests of the original form and the tests with the subsample OLS coefficient estimates centered at the full-sample estimates are used. Subsampling using the latter is called centered subsampling in this paper. It is shown that the subsamplings provide asymptotic distributions that are equivalent to the asymptotic distributions of the VAR tests. In addition, the tests using critical values from the subsamplings are shown to be consistent. The subsampling methods are applied to testing for causality. To choose the block sizes for subsample causality tests, the minimum volatility method, a new simulation-based calibration rule and a bootstrap-based calibration rule are used. Simulation results in this paper indicate that the centered subsampling using the simulation-based calibration rule for the block size is quite promising. It delivers stable empirical size and reasonably high-powered causality tests. Moreover, when the causality test has a chi-square distribution in the limit, the test using critical values from the centered subsampling has better size properties than the one using chi-square critical values. The centered subsampling using the bootstrap-based calibration rule for the block size also works well, but it is slightly inferior to that using the simulation-based calibration rule.

9.
Inference in Cointegrating Models: UK M1 Revisited
The paper addresses the practical determination of cointegration rank. This is difficult for many reasons: deterministic terms play a crucial role in limiting distributions, and systems may not be formulated to ensure similarity with respect to nuisance parameters; finite-sample critical values may differ from asymptotic equivalents; dummy variables alter critical values, often greatly; multiple cointegration vectors must be identified to allow inference; the data may be I(2) rather than I(1), altering distributions; and conditioning must be done with care. These issues are illustrated by an empirical application of multivariate cointegration analysis to a small model of narrow money, prices, output and interest rates in the UK.

10.
Summary
Let 𝒫 be a family of probability distributions on ℝ¹. This paper raises the question whether a parameter θ = θ(P), P ∈ 𝒫, is estimable on the basis of a type I censored sample (i.e. censored on a fixed set C). Two theorems are given that state conditions on θ and C which ensure that θ is not estimable. The results are applied to estimation problems for the normal and Poisson distributions; it turns out that unbiased estimation is impossible in the majority of practical cases.

11.
The parameters of several families of distributions are estimated by means of minimum χ²; use is made of random samples taken from Dutch income-earning groups in 1973. The numerical search routine used is the Complex method due to Box. The χ² function is evaluated by standard numerical integration procedures. The lognormal and Gamma families are rejected because of poor fit. The log t and log Pearson IV families are introduced, which results in a considerable improvement of the χ² critical levels. The generalized Gamma and the Champernowne function describe the income distribution reasonably well in some cases.
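A sketch of minimum χ² estimation for a one-parameter family: bin the data once, then minimize the Pearson χ² between observed counts and model cell probabilities over the parameter. The paper minimizes with Box's Complex method over richer families; a grid search over an exponential rate is the simplest stand-in, and the bin edges and sample are illustrative.

```python
import math
import random

def chi2_stat(counts, edges, cdf, n):
    """Pearson chi-square between observed bin counts and the model's
    cell probabilities implied by cdf."""
    stat = 0.0
    for i, o in enumerate(counts):
        p = cdf(edges[i + 1]) - cdf(edges[i])
        stat += (o - n * p) ** 2 / (n * p)
    return stat

def min_chi2_exponential(data, edges):
    """Minimum chi-square estimate of an exponential rate by grid search
    (a stand-in for the Complex method used in the paper)."""
    counts = [sum(lo <= x < hi for x in data)
              for lo, hi in zip(edges, edges[1:])]
    n = len(data)
    best = min((chi2_stat(counts, edges, lambda t, r=r: 1 - math.exp(-r * t), n), r)
               for r in [0.5 + 0.01 * k for k in range(200)])
    return best[1]

rng = random.Random(3)
data = [rng.expovariate(1.5) for _ in range(5000)]
edges = [0.0, 0.3, 0.6, 1.0, 1.5, 2.5, float("inf")]
rate_hat = min_chi2_exponential(data, edges)  # close to the true rate 1.5
```

The same recipe applies to the income-distribution families in the abstract once their cdfs (evaluated numerically where needed) are plugged in.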

12.
Researchers commonly use co-occurrence counts to assess the similarity of objects. This paper illustrates how traditional association measures can lead to misguided significance tests of co-occurrence in settings where the usual multinomial sampling assumptions do not hold. I propose a Monte Carlo permutation test that preserves the original distributions of the co-occurrence data. I illustrate the test on a dataset of organizational categorization, in which I investigate the relations between organizational categories (such as “Argentine restaurants” and “Steakhouses”).
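A minimal sketch of a Monte Carlo permutation test of this kind: shuffle one label column while keeping both marginal label distributions fixed, and compare the observed co-occurrence count to its permutation distribution. The match-count statistic and the toy restaurant labels are illustrative, not the paper's data or exact statistic.

```python
import random

def cooccurrence_permutation_test(labels_a, labels_b, n_perm=999, seed=0):
    """Monte Carlo permutation test for association between two label
    columns: shuffle one column, recount matches, and compare with the
    observed co-occurrence count. Marginal distributions are preserved."""
    rng = random.Random(seed)
    observed = sum(a == b for a, b in zip(labels_a, labels_b))
    shuffled = list(labels_b)
    ge = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if sum(a == b for a, b in zip(labels_a, shuffled)) >= observed:
            ge += 1
    return (ge + 1) / (n_perm + 1)  # one-sided Monte Carlo p-value

# Strongly associated columns -> small p-value
a = ["steak", "grill", "cafe", "bar"] * 25
b = ["steak", "grill", "cafe", "tea"] * 25
p = cooccurrence_permutation_test(a, b)
```

Because only the pairing is randomized, the test makes no multinomial sampling assumption, which is the point of the abstract.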

13.
Permutation tests for serial independence using three different statistics based on empirical distributions are proposed. These tests are shown to be consistent under the alternative of m-dependence and are all simple to perform in practice. A small simulation study demonstrates that the proposed tests have good power in small samples. The tests are then applied to Canadian gross domestic product (GDP) data, corroborating the random-walk hypothesis of GDP growth.
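A sketch of the permutation logic: under serial independence every ordering of the sample is equally likely, so shuffling the series yields the exact null distribution of any ordering-sensitive statistic. The |lag-1 autocorrelation| statistic below is a simpler stand-in for the paper's empirical-distribution statistics.

```python
import random
import statistics

def lag1_autocorr(x):
    m = statistics.fmean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    return num / sum((a - m) ** 2 for a in x)

def serial_independence_pvalue(x, n_perm=499, seed=0):
    """Permutation test for serial independence based on the
    |lag-1 autocorrelation| statistic."""
    rng = random.Random(seed)
    obs = abs(lag1_autocorr(x))
    y = list(x)
    ge = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(lag1_autocorr(y)) >= obs:
            ge += 1
    return (ge + 1) / (n_perm + 1)

rng = random.Random(5)
white = [rng.gauss(0, 1) for _ in range(200)]   # serially independent
ar = [0.0]
for _ in range(199):
    ar.append(0.8 * ar[-1] + rng.gauss(0, 1))   # strongly serially dependent
p_white = serial_independence_pvalue(white)
p_ar = serial_independence_pvalue(ar)           # small p-value
```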

14.
Milan Stehlík, Metrika (2003) 57(2): 145-164
The aim of this paper is to give some results on the exact density of the I-divergence in the exponential family with gamma distributed observations. It is shown in particular that the I-divergence can be decomposed as a sum of two independent variables with known distributions. Since the considered I-divergence is related to the likelihood ratio statistics, we apply the method to compute the exact distribution of the likelihood ratio tests and discuss the optimality of such exact tests. One of these tests is the exact LR test of the model, which is asymptotically optimal in the Bahadur sense. Numerical examples are provided to illustrate the methods discussed.
Received: January 2002
Acknowledgements. I am grateful to Prof. Andrej Pázman for helpful discussions during the setup and the preparation of the paper and to the referees for constructive comments on earlier versions of the paper. Research is supported by the VEGA grant (Slovak Grant Agency) No 1/7295/20.

15.
Summary  Two random samples of size n are taken from a set containing N objects of H types, first with and then without replacement. Let d be the absolute (L1-) distance and I the Kullback-Leibler information distance between the distributions of the sample compositions without and with replacement. Sample composition is meant with respect to types; it does not matter whether the order of sampling is included or not. A bound on I and d is derived that depends only on n, N and H; the bound on d is not larger than √(2I). For fixed H we have d → 0 and I → 0 as N → ∞ if and only if n/N → 0. Let W_r be the epoch at which, for the r-th time, an object of type 1 appears. Bounds on the distances between the joint distributions of W_1, …, W_r without and with replacement are given.
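For H = 2 types the two sample-composition laws are hypergeometric (without replacement) and binomial (with replacement), so d and I can be computed exactly. The sketch below checks numerically that the distances shrink as N grows with n fixed, and that d ≤ √(2I) (Pinsker's inequality, stated here as a known bound relating the two distances, not as the paper's own bound).

```python
import math

def distances(N, K, n):
    """L1 distance d and Kullback-Leibler divergence I between the
    sample-composition laws without replacement (hypergeometric) and
    with replacement (binomial), for H = 2 types with K objects of type 1."""
    p = K / N
    d = kl = 0.0
    for k in range(n + 1):
        hyper = math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)
        binom = math.comb(n, k) * p ** k * (1 - p) ** (n - k)
        d += abs(hyper - binom)
        if hyper > 0:
            kl += hyper * math.log(hyper / binom)
    return d, kl

d1, i1 = distances(N=100, K=40, n=10)      # n/N = 0.10
d2, i2 = distances(N=10000, K=4000, n=10)  # n/N = 0.001: both distances shrink
```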

16.
Most existing methods for testing cross-sectional dependence in fixed effects panel data models are actually conducting tests for cross-sectional uncorrelation, which are not robust to departures from normality of the error distributions or to nonlinear cross-sectional dependence. To this end, we construct two rank-based tests for (static and dynamic) fixed effects panel data models, based on two very popular rank correlations, namely Kendall's tau and Bergsma–Dassios' τ*, and derive their asymptotic distributions under the null hypothesis. Monte Carlo simulations demonstrate the applicability of these rank-based tests in the large (N, T) case, as well as their robustness to departures from normality of the error distributions and to nonlinear cross-sectional dependence.
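A sketch of the simpler of the two rank correlations: Kendall's tau with its asymptotic z-statistic under independence. This is the generic bivariate statistic, not the paper's panel-data construction; sample sizes and data below are illustrative.

```python
import math
import random

def kendall_tau_z(x, y):
    """Kendall's tau-a and its asymptotic z-statistic under independence;
    Var(tau) = 2(2n + 5) / (9n(n - 1)) in the no-ties case."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] > x[j]) - (x[i] < x[j])
            b = (y[i] > y[j]) - (y[i] < y[j])
            s += a * b  # +1 concordant, -1 discordant
    tau = 2 * s / (n * (n - 1))
    z = tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
    return tau, z

rng = random.Random(11)
e = [rng.gauss(0, 1) for _ in range(300)]
u = [rng.gauss(0, 1) for _ in range(300)]       # independent of e
dep = [ei + 0.5 * rng.gauss(0, 1) for ei in e]  # monotone dependence
tau_ind, z_ind = kendall_tau_z(e, u)
tau_dep, z_dep = kendall_tau_z(e, dep)
```

Note that plain tau only detects monotone dependence; detecting general nonlinear dependence is precisely why the paper also uses Bergsma–Dassios' τ*.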

17.
Some asymptotic tests for testing distributional assumptions, namely, the half-normal and truncated normal distributions for the stochastic frontier functions have been proposed. The tests are Lagrangean multiplier tests based on the Pearson family of truncated distributions. The statistics can be easily computed. Simple interpretations of the statistics and two empirical examples are provided.

18.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
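The local Monte Carlo idea can be sketched generically: simulate the statistic under the null and compute the Monte Carlo p-value with the +1 correction. The homoskedasticity-style toy null below (constant variance, tested via the sample variance) is my own illustration and is far simpler than the paper's SV-model statistics.

```python
import random
import statistics

def local_monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=199, seed=0):
    """Local Monte Carlo (parametric bootstrap) test: simulate the
    statistic under the null and compute the MC p-value with the +1
    correction, which makes the test exact for a pivotal statistic."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_rep) if simulate_stat(rng) >= stat_obs)
    return (exceed + 1) / (n_rep + 1)

# Toy null: "i.i.d. N(0, 1)", statistic = sample variance.
def sim_stat(rng, n=50):
    x = [rng.gauss(0, 1) for _ in range(n)]
    return statistics.pvariance(x)

rng0 = random.Random(99)
# Alternative: variance alternates between 1 and 4 -> inflated sample variance
x_alt = [rng0.gauss(0, 1) * (2.0 if i % 2 else 1.0) for i in range(50)]
p = local_monte_carlo_pvalue(statistics.pvariance(x_alt), sim_stat)
```

The MMC procedure mentioned in the abstract extends this by maximizing the simulated p-value over the nuisance-parameter space, which is what controls the level under identification failure.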

19.
The breakdown point in its different variants is one of the central notions used to quantify the global robustness of a procedure. We propose a simple supplementary variant which is useful in situations where we have no obvious, or only partial, equivariance: extending the finite sample breakdown point of Donoho and Huber (The notion of breakdown point, Wadsworth, Belmont, 1983), we propose the expected finite sample breakdown point, which produces less configuration-dependent values while still preserving the finite sample aspect of the former definition. We apply this notion to the joint estimation of scale and shape (with only scale-equivariance available), exemplified for the generalized Pareto, generalized extreme value, Weibull and Gamma distributions. In these settings we are interested in highly robust, easy-to-compute initial estimators; to this end we study Pickands-type and Location-Dispersion-type estimators and compute their respective breakdown points.
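The Donoho-Huber finite sample breakdown point can be probed empirically: replace observations one by one with a huge contaminant and record the smallest replacement fraction that drives the estimate beyond any bound. The location example below (mean vs. median) is only a sketch of the definition, not the paper's scale-shape setting or its expected FSBP.

```python
import statistics

def finite_sample_breakdown(estimator, x, big=1e12):
    """Smallest fraction m/n of replaced observations that drives the
    estimate far from its clean value (Donoho-Huber finite sample
    breakdown point, probed with a single huge contaminant value)."""
    n = len(x)
    clean = estimator(x)
    for m in range(1, n + 1):
        contaminated = [big] * m + list(x[m:])
        if abs(estimator(contaminated) - clean) > 1e6:
            return m / n
    return 1.0

x = [0.1 * i for i in range(20)]
bp_mean = finite_sample_breakdown(statistics.fmean, x)     # one outlier suffices
bp_median = finite_sample_breakdown(statistics.median, x)  # resists up to half
```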

20.
In latent trait theory the measurement properties of a mental test can be expressed in the test information function. The relative merits of two tests for the same latent trait can be described by the relative efficiency function, i.e. the ratio of the test information functions. It is argued that these functions have to be estimated if the values of the item difficulties are unknown. Using conditional maximum likelihood estimation as indicated by Andersen (1973), pointwise asymptotic distributions of the test information and relative efficiency functions are derived for the case of dichotomously scored Rasch homogeneous items. Formulas for confidence intervals are derived from the asymptotic distributions. An application to a mathematics test is given and extensions to other latent trait models are discussed.
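For dichotomous Rasch items these functions have a simple closed form: the test information is I(θ) = Σᵢ pᵢ(θ)(1 − pᵢ(θ)) with pᵢ(θ) = 1/(1 + exp(−(θ − bᵢ))), and the relative efficiency is the ratio of two such sums. The item difficulties below are illustrative (in practice they would be the CML estimates the abstract discusses).

```python
import math

def rasch_info(theta, difficulties):
    """Test information for dichotomous Rasch items:
    I(theta) = sum_i p_i(1 - p_i), with p_i = logistic(theta - b_i)."""
    info = 0.0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info += p * (1.0 - p)
    return info

def relative_efficiency(theta, test_a, test_b):
    """Ratio of the test information functions at ability theta."""
    return rasch_info(theta, test_a) / rasch_info(theta, test_b)

easy = [-1.0, -0.5, 0.0]   # illustrative item difficulties
hard = [1.0, 1.5, 2.0]
re_low = relative_efficiency(-1.0, easy, hard)   # > 1: easy test better at low ability
re_high = relative_efficiency(2.0, easy, hard)   # < 1: hard test better at high ability
```

The crossing of the relative efficiency function at intermediate ability is exactly why it is reported as a function of θ rather than a single number.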


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号