Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
The t Copula and Related Copulas (cited 13 times: 0 self-citations, 13 by others)
The t copula and its properties are described with a focus on issues related to the dependence of extreme values. The Gaussian mixture representation of a multivariate t distribution is used as a starting point to construct two new copulas, the skewed t copula and the grouped t copula, which allow more heterogeneity in the modelling of dependent observations. Extreme value considerations are used to derive two further new copulas: the t extreme value copula is the limiting copula of componentwise maxima of t distributed random vectors; the t lower tail copula is the limiting copula of bivariate observations from a t distribution that are conditioned to lie below some joint threshold that is progressively lowered. Both these copulas may be approximated for practical purposes by simpler, better-known copulas, these being the Gumbel and Clayton copulas respectively.
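The Gaussian mixture representation used as the starting point above can be sketched in a few lines: a multivariate t vector is a correlated normal divided by the square root of an independent scaled chi-square variable, and applying the univariate t probability transform to each coordinate yields a draw from the t copula. The function name below is ours, for illustration only, not code from the paper.

```python
import numpy as np
from scipy import stats

def sample_t_copula(n, corr, df, rng):
    """Draw n samples from a t copula via the normal variance-mixture
    (Gaussian mixture) representation of the multivariate t."""
    d = corr.shape[0]
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, d)) @ L.T        # N(0, corr)
    w = rng.chisquare(df, size=n) / df           # chi^2_df / df mixing variable
    t = z / np.sqrt(w)[:, None]                  # multivariate t_df vector
    return stats.t.cdf(t, df)                    # probability transform -> copula

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
u = sample_t_copula(50_000, corr, df=4, rng=rng)
```

The low degrees-of-freedom parameter (here df=4) is what produces the tail dependence the paper focuses on; letting df grow recovers the Gaussian copula, which has none.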

2.
The brown rat lives with man in a wide variety of environmental contexts and adversely affects public health by transmission of diseases, bites, and allergies. Understanding behavioral and spatial correlation aspects of pest species can contribute to their effective management and control. Rat sightings can be described by spatial coordinates in a particular region of interest defining a spatial point pattern. In this paper, we investigate the spatial structure of rat sightings in the Latina district of Madrid (Spain) and its relation to a number of distance‐based covariates that relate to the proliferation of rats. Given a number of locations, biologically considered as attractor points, the spatial dependence is modeled by distance‐based covariates and angular orientations through copula functions. We build a particular spatial trivariate distribution using univariate margins coming from the covariate information and provide predictive distributions for such distances and angular orientations.

3.
4.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest for statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.

5.
We examine the conditions under which each individual series that is generated by a vector autoregressive model can be represented as an autoregressive model that is augmented with the lags of a few linear combinations of all the variables in the system. We call this multivariate index-augmented autoregression (MIAAR) modelling. We show that the parameters of the MIAAR can be estimated by a switching algorithm that increases the Gaussian likelihood at each iteration. Since maximum likelihood estimation may perform poorly when the number of parameters increases, we propose a regularized version of our algorithm for handling a medium–large number of time series. We illustrate the usefulness of the MIAAR modelling by both empirical applications and simulations.

6.
We develop a generalized method of moments (GMM) estimator for the distribution of a variable where summary statistics are available only for intervals of the random variable. Without individual data, one cannot calculate the weighting matrix for the GMM estimator. Instead, we propose a simulated weighting matrix based on a first-step consistent estimate. When the functional form of the underlying distribution is unknown, we estimate it using a simple yet flexible maximum entropy density. Our Monte Carlo simulations show that the proposed maximum entropy density is able to approximate various distributions extremely well. The two-step GMM estimator with a simulated weighting matrix improves the efficiency of the one-step GMM considerably. We use this method to estimate the U.S. income distribution and compare these results with those based on the underlying raw income data.
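A maximum entropy density of the kind mentioned above can be fitted by minimizing the convex dual of the entropy problem: the density has exponential-family form exp(Σ λ_k x^k)/Z, and the multipliers λ are chosen so that the fitted moments match the targets. The sketch below is a generic illustration under our own assumptions (support [0, 1], two raw moments, our function names), not the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def fit_maxent(moments, support=(0.0, 1.0)):
    """Fit f(x) proportional to exp(sum_k lam_k x^(k+1)) on `support` so
    that its first len(moments) raw moments match `moments`, by
    minimizing the convex dual  log Z(lam) - lam . mu."""
    mu = np.asarray(moments, dtype=float)

    def log_z(lam):
        val, _ = quad(lambda x: np.exp(sum(l * x ** (k + 1)
                                           for k, l in enumerate(lam))), *support)
        return np.log(val)

    res = minimize(lambda lam: log_z(lam) - lam @ mu,
                   np.zeros(len(mu)), method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-12})
    lam, lz = res.x, log_z(res.x)

    def density(x):
        return np.exp(sum(l * x ** (k + 1) for k, l in enumerate(lam)) - lz)
    return lam, density

# target raw moments E[X] = 0.4, E[X^2] = 0.2 on [0, 1]
lam, f = fit_maxent([0.4, 0.2])
```

Because the dual is smooth and convex, any standard optimizer works; Nelder-Mead is used here only to keep the sketch dependency-light.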

7.
In the context of regularly varying tails, we first analyze a generalization of the classical Hill estimator of a positive tail index, with members that are not asymptotically more efficient than the original one. This has led us to propose alternative classical tail index estimators that may perform asymptotically better than the Hill estimator. As the improvement is not really significant, we also propose generalized jackknife estimators based on any two members of these two classes. These generalized jackknife estimators are compared with the Hill estimator and other reduced-bias estimators available in the literature, asymptotically, and for finite samples, through the use of Monte Carlo simulation. The finite-sample behaviour of the new reduced-bias estimators is also illustrated through a practical example in the field of finance.
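For reference, the classical Hill estimator that the abstract takes as its baseline averages log-excesses over the k largest order statistics. A minimal sketch (our own function name and simulation setup, purely illustrative):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma (= 1/alpha) based on the
    k largest order statistics of a positive sample x."""
    xs = np.sort(x)[::-1]                     # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

rng = np.random.default_rng(1)
# standard Pareto tail: P(X > x) = x^(-alpha), alpha = 2, so gamma = 0.5
alpha = 2.0
x = rng.pareto(alpha, size=100_000) + 1.0     # shift Lomax draws to [1, inf)
gamma_hat = hill_estimator(x, k=2_000)
```

For an exact Pareto sample the estimator is essentially unbiased; the bias that the reduced-bias and generalized jackknife estimators in the paper target appears for distributions that are only regularly varying in the tail.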

8.
Practical estimation of multivariate densities using wavelet methods (cited 2 times: 0 self-citations, 2 by others)
This paper describes a practical method for estimating multivariate densities using wavelets. As in kernel methods, wavelet methods depend on two types of parameters. On the one hand we have a functional parameter: the wavelet ψ (comparable to the kernel K); on the other hand we have a smoothing parameter: the resolution index (comparable to the bandwidth h). Classically, we determine the resolution index with a cross-validation method. The advantage of wavelet methods compared to kernel methods is that we have a technique for choosing the wavelet ψ among a fixed family. Moreover, the wavelet method significantly simplifies both the theoretical and the practical computations.
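To make the role of the resolution index concrete: for the simplest choice of wavelet family (Haar), the scaling-function projection estimator at resolution j reduces to a histogram on 2^j dyadic bins, so j plays exactly the bandwidth role described above. This one-dimensional sketch is our own illustration of that special case, not the paper's multivariate method.

```python
import numpy as np

def haar_density(x, j, support=(0.0, 1.0)):
    """Haar scaling-function density estimate at resolution index j.
    With the Haar father wavelet phi = 1_[0,1), the projection estimator
    reduces to a histogram on 2^j dyadic cells of `support`."""
    a, b = support
    u = (np.asarray(x) - a) / (b - a)            # rescale data to [0, 1)
    m = 2 ** j                                   # number of dyadic cells
    counts, _ = np.histogram(u, bins=m, range=(0.0, 1.0))
    heights = counts * m / (len(u) * (b - a))    # normalize to integrate to 1

    def f(t):
        t = (np.asarray(t) - a) / (b - a)
        idx = np.clip((t * m).astype(int), 0, m - 1)
        return heights[idx]
    return f

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=10_000)
f = haar_density(x, j=3)
```

Raising j doubles the number of cells, trading bias for variance just as shrinking h does in a kernel estimator; cross-validation over j selects the compromise.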

9.
It is shown how to implement an EM algorithm for maximum likelihood estimation of hierarchical nonlinear models for data sets consisting of more than two levels of nesting. This upward–downward algorithm makes use of the conditional independence assumptions implied by the hierarchical model. It can be used not only for the estimation of models with a parametric specification of the random effects, but also to extend the two-level nonparametric approach – sometimes referred to as latent class regression – to three or more levels. The proposed approach is illustrated with an empirical application.

10.
The known methods for computing percentage points of multivariate t distributions are reviewed. We believe that this review will serve as an important reference and encourage further research activities in the area.
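One of the simplest methods such a review would cover is Monte Carlo approximation of an equicoordinate percentage point: the value c with P(max_i |T_i| ≤ c) = p for a multivariate t vector. A hedged sketch under our own setup (function name and parameters are ours):

```python
import numpy as np

def mvt_equicoordinate_point(corr, df, p=0.95, n=200_000, seed=0):
    """Monte Carlo approximation of the two-sided equicoordinate 100p%
    point c of a multivariate t:  P(max_i |T_i| <= c) = p."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, d)) @ L.T            # correlated normals
    w = np.sqrt(rng.chisquare(df, size=n) / df)      # mixing variable
    t = z / w[:, None]                               # multivariate t draws
    return np.quantile(np.abs(t).max(axis=1), p)

# sanity check: in dimension 1 with large df this approaches the
# two-sided 95% normal point, about 1.96
c = mvt_equicoordinate_point(np.eye(1), df=1_000, p=0.95)
```

Deterministic methods (quadrature, recursions) reviewed in the paper trade this simplicity for accuracy guarantees, especially in higher dimensions.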

11.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non‐parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case–control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminating use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non‐parametric maximum likelihood as two feasible methods.

12.
Let X = (X_1, ..., X_n) be a sample from an unknown cumulative distribution function F defined on the real line ℝ. The problem of estimating the cumulative distribution function F is considered using a decision theoretic approach. No assumptions are imposed on the unknown function F. A general method of finding a minimax estimator d(t; X) of F under a loss function of general form is presented. The method of solution is based on converting the nonparametric problem of searching for minimax estimators of a distribution function to the parametric problem of searching for minimax estimators of the probability of success for a binomial distribution. The solution also uses the completeness property of the class of monotone decision procedures in a monotone decision problem. Some special cases of the underlying problem are considered for the situation when the loss function in the nonparametric problem is a weighted squared error, LINEX, or weighted absolute error loss.
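The binomial reduction can be made concrete for the unweighted squared-error case: for fixed t, the count S = n·F_n(t) is Binomial(n, F(t)), and the classical minimax rule for a binomial success probability under squared error is (S + √n/2)/(n + √n). Plugging the empirical count into that rule gives the sketch below; this is our own illustration of the reduction, not the paper's general construction.

```python
import numpy as np

def minimax_style_cdf(x, t):
    """Illustrative estimator of F(t): plug the empirical count
    S = n * F_n(t) into the classical minimax rule for a binomial
    success probability under squared error loss,
        d = (S + sqrt(n)/2) / (n + sqrt(n))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sum(x[:, None] <= np.asarray(t, dtype=float)[None, :], axis=0)
    return (s + np.sqrt(n) / 2) / (n + np.sqrt(n))

x = np.array([0.1, 0.4, 0.7, 0.9])
est = minimax_style_cdf(x, t=[0.0, 0.5, 1.0])
```

Note how the rule shrinks the empirical CDF toward 1/2: it never returns exactly 0 or 1, which is what equalizes the risk over all F.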

13.
This paper assesses the effects of autocorrelation on parameter estimates of affine term structure models (ATSM) when principal components analysis is used to extract factors. In contrast to recent studies, we design and run a Monte Carlo experiment that relies on the construction of a simulation design that is consistent with the data, rather than theory or observation, and find that parameter estimation from ATSM is precise in the presence of serial correlation in the measurement error term. Our findings show that parameter estimation of ATSM with principal component based factors is robust to autocorrelation misspecification.

14.
Reforming the healthcare delivery system to provide optimum care to sick newborn infants is a critical task in Korea. Motivated by the efforts of the Korean government, we study a capacity allocation model to design an optimal capacity allocation plan for neonatal intensive care units (NICUs). Our model considers the following properties: 1) the hierarchical feature of neonatal care services and 2) the congestion effect in NICU operations. We develop a mathematical model that combines a hierarchical location model with queuing theory. We subsequently apply the proposed model to the problem of allocating capacities to NICUs in Korea. We provide information that can help policymakers draw an initial plan by evaluating various capacity allocation scenarios. We further examine two policy alternatives for improving accessibility to neonatal care. One involves increasing service capacity by adaptively adding resources to NICUs, and the other includes expanding physical service coverage by introducing helicopter transport. The results show that each alternative can contribute toward improving accessibility, and we believe that these findings will have practical implications for developing a better neonatal care system.
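The congestion effect in such queuing-based capacity models is typically captured by a delay formula for a multi-server queue. As a hedged sketch (the abstract does not specify the queueing model; an M/M/c assumption and our own function name are used here), the Erlang C formula gives the probability that an arriving patient must wait for a unit:

```python
import math

def erlang_c(c, lam, mu):
    """Probability of waiting in an M/M/c queue (Erlang C formula):
    arrival rate lam, per-server service rate mu, c servers."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # server utilization
    if rho >= 1:
        return 1.0                     # unstable queue: everyone waits
    top = a ** c / math.factorial(c) / (1 - rho)
    bottom = sum(a ** k / math.factorial(k) for k in range(c)) + top
    return top / bottom

p_wait = erlang_c(c=2, lam=1.0, mu=1.0)   # offered load of 1 Erlang on 2 units
```

Evaluating this across candidate allocations is what lets a hierarchical location model weigh adding capacity at one NICU against expanding coverage elsewhere.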

15.
Chunsheng Ma, Metrika (1996) 44(1): 71–83
Under the assumption that the products of multivariate mean remaining lives and hazard rates are the same constant, it is shown that the corresponding multivariate survival function belongs to one of three families: (1) multivariate Gumbel exponential distribution; (2) multivariate Lomax (Pareto type II) distribution; (3) multivariate rescaled Dirichlet distribution. This result is then used to derive another characterization of the latter two families based on the residual life distribution.

16.
In nonparametric estimation of functionals of a distribution, it may or may not be desirable, or indeed necessary, to introduce a degree of smoothing into this estimation. In this article, I describe a method for assessing, with just a little thought about the functional of interest, (i) whether smoothing is likely to prove worthwhile, and (ii) if so, roughly how much smoothing is appropriate (in order-of-magnitude terms). This rule-of-thumb is not guaranteed to be accurate nor does it give a complete answer to the smoothing problem. However, I have found it very useful over a number of years; many examples of its use, and limitations, are given.

17.
We consider nonparametric estimation of multivariate versions of Blomqvist’s beta, also known as the medial correlation coefficient. For a two-dimensional population, the sample version of Blomqvist’s beta describes the proportion of data which fall into the first or third quadrant of a two-way contingency table with cutting points being the sample medians. Asymptotic normality and strong consistency of the estimators are established by means of the empirical copula process, imposing weak conditions on the copula. Though the asymptotic variance takes a complicated form, we are able to derive explicit formulas for large families of copulas. For the copulas of elliptically contoured distributions we obtain a variance stabilizing transformation which is similar to Fisher’s z-transformation. This allows for an explicit construction of asymptotic confidence bands used for hypothesis testing and eases the analysis of asymptotic efficiency. The computational complexity of estimating Blomqvist’s beta corresponds to the sample size n, which is lower than the complexity of most competing dependence measures.
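The two-dimensional sample version described above is simple enough to state in full: count the proportion of points in the first or third quadrant around the componentwise medians and rescale to [-1, 1]. The sketch below is our own minimal bivariate illustration (the handling of points lying exactly on a median line is one of several conventions):

```python
import numpy as np

def blomqvist_beta(x, y):
    """Sample Blomqvist's beta (medial correlation): proportion of points
    in the first or third quadrant around the componentwise sample
    medians, rescaled to [-1, 1]. Points exactly on a median line are
    discarded (one possible convention)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    s = (x - np.median(x)) * (y - np.median(y))
    s = s[s != 0]                     # drop points on the median cross
    return 2 * np.mean(s > 0) - 1

rng = np.random.default_rng(3)
x = rng.standard_normal(1_001)
beta_comonotone = blomqvist_beta(x, 2 * x + 1)     # perfectly increasing pair
beta_antithetic = blomqvist_beta(x, -x)            # perfectly decreasing pair
```

The O(n) cost mentioned at the end of the abstract is visible here: a median and one pass over the signs, versus the O(n log n) or O(n^2) work of rank- or concordance-based measures.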

18.
In this paper, we discuss stochastic comparison of the largest order statistics arising from two sets of dependent distribution-free random variables with respect to multivariate chain majorization, where the dependency structure can be defined by Archimedean copulas. When a distribution-free model with possibly two parameter vectors has its matrix of parameters changing to another matrix of parameters in a certain mathematical sense, we show, under certain conditions, that the first sample maximum is larger than the second sample maximum with respect to the usual stochastic order. Applications of our results to the scale proportional reverse hazards model, the exponentiated gamma distribution, the Gompertz–Makeham distribution, and the location-scale model are also given. Meanwhile, we provide two numerical examples to illustrate the results established here.

19.
Based on linear combinations of intermediate order statistics, we introduce a new class of estimators for the exponent of a distribution function F with a regularly varying upper tail. We prove asymptotic normality and we make a comparison with existing proposals using the mean squared error as criterion.

20.
The efficient flow of goods and services involves addressing multilevel forecast questions, and careful consideration when aggregating or disaggregating hierarchical estimates. Assessing all possible aggregation alternatives helps to determine the statistically most accurate way of consolidating multilevel forecasts. However, doing so in a multilevel and multiproduct supply chain may prove to be a very computationally intensive and time-consuming task. In this paper, we present a new, two-level oblique linear discriminant tree model, which identifies the optimal hierarchical forecast technique for a given hierarchical database in a very time-efficient manner. We induced our model from a real-world dataset, and it separates all historical time series into the four aggregation mechanisms considered. The separation process is a function of both the positive and negative correlation groups' variances at the lowest level of the hierarchical datasets. Our primary contributions are: (1) establishing a clear-cut relationship between the correlation metrics at the lowest level of the hierarchy and the optimal aggregation mechanism for a product/service hierarchy, and (2) developing an analytical model for personalized forecast aggregation decisions, based on characteristics of a hierarchical dataset.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号