Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
    
Starting from the pioneering works of Shannon and Wiener in 1948, a plethora of works on entropy have been reported in different directions. To the best of our knowledge, however, no review of entropy‐related work in the direction of statistical inference has been reported so far. Here, we have tried to collect the works in this direction from the last seven decades, so that those interested in entropy, especially new researchers, may benefit.

2.
    
This paper concerns a class of model selection criteria based on cross‐validation techniques and estimative predictive densities. Both the simple or leave‐one‐out and the multifold or leave‐m‐out cross‐validation procedures are considered. These cross‐validation criteria define suitable estimators for the expected Kullback–Leibler risk, which measures the expected discrepancy between the fitted candidate model and the true one. In particular, we shall investigate the potential bias of these estimators under alternative asymptotic regimes for m. The results are obtained within the general context of independent, but not necessarily identically distributed, observations and by assuming that the candidate model may not contain the true distribution. An application to the class of normal regression models is also presented, and simulation results are obtained in order to gain some further understanding of the behavior of the estimators.
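As a rough illustration of the leave‐one‐out criterion described above, the sketch below estimates the expected Kullback–Leibler risk of a normal linear regression via estimative (plug‐in) predictive densities. The function name and simulated data are hypothetical; this is a minimal stand‐in, not the authors' implementation.

```python
import numpy as np

def loo_cv_kl_estimate(X, y):
    """Leave-one-out estimate of the expected Kullback-Leibler risk for a
    normal linear regression, using estimative (plug-in MLE) predictive
    densities: average negative log predictive density of each held-out point."""
    n = len(y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xi, yi = X[mask], y[mask]
        # MLE fit on the remaining n-1 observations
        beta, _, _, _ = np.linalg.lstsq(Xi, yi, rcond=None)
        resid = yi - Xi @ beta
        sigma2 = np.mean(resid ** 2)  # MLE of the error variance
        mu = X[i] @ beta
        # negative log of the estimative normal density at the held-out point
        total += 0.5 * (np.log(2 * np.pi * sigma2) + (y[i] - mu) ** 2 / sigma2)
    return total / n

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
risk_hat = loo_cv_kl_estimate(X, y)
```

Comparing `risk_hat` across candidate models (e.g. with and without a covariate) gives the cross‐validation model ranking the paper studies.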

3.
    
This paper presents a careful investigation of the three popular calibration weighting methods: (i) generalised regression; (ii) generalised exponential tilting and (iii) generalised pseudo empirical likelihood, with a major focus on computational aspects of the methods and some empirical evidence on calibrated weights. We also propose a simple weight trimming method for range‐restricted calibration. The finite‐sample behaviour of the weights obtained by the three calibration weighting methods and the effectiveness of the proposed weight trimming method are examined through limited simulation studies.

4.
    
We offer an exposition of modern higher order likelihood inference and introduce software to implement this in a quite general setting. The aim is to make more accessible an important development in statistical theory and practice. The software, implemented in an R package, requires only that the user provide code to compute the likelihood function and to specify extra‐likelihood aspects of the model, such as stopping rule or censoring model, through a function generating a dataset under the model. The exposition charts a narrow course through the developments, intending thereby to make these more widely accessible. It includes the likelihood ratio approximation to the distribution of the maximum likelihood estimator, that is, the p* formula, and the transformation of this yielding a second‐order approximation to the distribution of the signed likelihood ratio test statistic, based on a modified signed likelihood ratio statistic r*. This follows developments of Barndorff‐Nielsen and others. The software utilises the approximation to required Jacobians as developed by Skovgaard, which is included in the exposition. Several examples of using the software are provided.
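For reference, the p* approximation and the modified signed root mentioned above take the standard Barndorff‐Nielsen forms (with $\ell$ the log‐likelihood, $j$ the observed information, $c$ a normalising constant, $r$ the signed likelihood root, and $u$ a suitable adjusted Wald‐type quantity such as Skovgaard's approximation):

$$
p^{*}(\hat\theta \mid \theta;\, a) \;=\; c\,\lvert j(\hat\theta)\rvert^{1/2}\,
\exp\{\ell(\theta)-\ell(\hat\theta)\},
\qquad
r^{*} \;=\; r+\frac{1}{r}\log\frac{u}{r},
$$

where $r^{*}$ is standard normal to second order, giving the higher‐order tail approximations the software computes.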

5.
    
This paper considers the location‐scale quantile autoregression in which the location and scale parameters are subject to regime shifts. The regime changes in lower and upper tails are determined by the outcome of a latent, discrete‐state Markov process. The new method provides direct inference and estimation for different parts of a non‐stationary time series distribution. Bayesian inference for switching regimes within a quantile, via a three‐parameter asymmetric Laplace distribution, is adapted and designed for parameter estimation. Using the Bayesian output, the marginal likelihood is readily available for testing the presence and the number of regimes. The simulation study shows that using the asymmetric Laplace distribution as the likelihood yields regime and conditional‐quantile predictions fairly comparable with those from the true model distributions. However, ignoring that autoregressive coefficients might be quantile dependent leads to substantial bias in both regime inference and quantile prediction. The potential of this new approach is illustrated in the empirical applications to the US inflation and real exchange rates for asymmetric dynamics and the S&P 500 index returns of different frequencies for financial market risk assessment.

6.
    
In recent years, we have seen an increased interest in the penalized likelihood methodology, which can be efficiently used for shrinkage and selection purposes. This strategy can also result in unbiased, sparse, and continuous estimators. However, the performance of the penalized likelihood approach depends on the proper choice of the regularization parameter. Therefore, it is important to select it appropriately. To this end, the generalized cross‐validation method is commonly used. In this article, we first propose new estimates of the norm of the error in the generalized linear models framework, through the use of Kantorovich inequalities. These estimates are then used to derive a tuning parameter selector in penalized generalized linear models. The proposed method does not depend on resampling as the standard methods do and therefore results in a considerable gain in computational time while producing improved results. A thorough simulation study is conducted to support the theoretical findings, and a comparison of the penalized methods with the L1, the hard thresholding, and the smoothly clipped absolute deviation penalty functions is performed, for the cases of penalized logistic regression and penalized Poisson regression. A real data example is analyzed, and a discussion follows. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.

7.
    
Over the last decades, several methods for selecting the bandwidth in kernel regression have been introduced. They differ quite a bit, and although more selection methods already exist than for any other regression smoother, new ones still keep appearing. Given the need for automatic data‐driven bandwidth selectors in applied statistics, this review is intended to explain and, above all, compare these methods. About 20 different selection methods have been reviewed, implemented and compared in an extensive simulation study.
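One of the classical data‐driven selectors in this family is least‐squares cross‐validation. The sketch below picks the bandwidth of a Nadaraya–Watson (Gaussian‐kernel) smoother by minimising the leave‐one‐out prediction error over a grid; function names and data are illustrative, not taken from the review.

```python
import numpy as np

def cv_bandwidth(x, y, grid):
    """Least-squares cross-validation bandwidth selection for the
    Nadaraya-Watson estimator with a Gaussian kernel: choose h minimising
    the leave-one-out mean squared prediction error."""
    best_h, best_score = None, np.inf
    for h in grid:
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)  # leave each point out of its own fit
        yhat = (w @ y) / w.sum(axis=1)
        score = np.mean((y - yhat) ** 2)
        if score < best_score:
            best_h, best_score = h, score
    return best_h

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 120))
y = np.sin(x) + 0.3 * rng.normal(size=120)
h = cv_bandwidth(x, y, np.linspace(0.05, 1.0, 20))
```

Plug‐in, bootstrap and other selectors compared in the review replace the leave‐one‐out score with different estimates of the same risk.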

8.
    
While the likelihood ratio measures statistical support for an alternative hypothesis about a single parameter value, it is undefined for an alternative hypothesis that is composite in the sense that it corresponds to multiple parameter values. Regarding the parameter of interest as a random variable enables measuring support for a composite alternative hypothesis without requiring the elicitation or estimation of a prior distribution, as described below. In this setting, in which parameter randomness represents variability rather than uncertainty, the ideal measure of the support for one hypothesis over another is the difference in the posterior and prior log‐odds. That ideal support may be replaced by any measure of support that, on a per‐observation basis, is asymptotically unbiased as a predictor of the ideal support. Such measures of support are easily interpreted and, if desired, can be combined with any specified or estimated prior probability of the null hypothesis. Two qualifying measures of support are minimax‐optimal. An application to proteomics data indicates that a modification of optimal support computed from data for a single protein can closely approximate the estimated difference in posterior and prior odds that would be available with the data for 20 proteins.

9.
    
This paper is concerned with the construction of prior probability measures for parametric families of densities where the framework is such that only beliefs or knowledge about a single observable data point is required. We pay particular attention to the parameter which minimizes a measure of divergence to the distribution providing the data. The prior distribution reflects this attention, and we discuss the application of the Bayes rule from this perspective. Our framework is fundamentally non‐parametric, and we are able to interpret prior distributions on the parameter space using ideas of matching loss functions, one coming from the data model and the other from the prior.

10.
    
In this paper, we consider portmanteau tests for testing the adequacy of multiplicative seasonal autoregressive moving‐average models under the assumption that the errors are uncorrelated but not necessarily independent. We relax the standard independence assumption on the error terms in order to extend the range of applications of the seasonal autoregressive moving‐average models. We study the asymptotic distributions of residual and normalized residual empirical autocovariances and autocorrelations under weak assumptions on the noise. We establish the asymptotic behavior of the proposed statistics. A set of Monte Carlo experiments and an application to the monthly mean total sunspot number are presented.

11.
    
Incomplete correlated 2 × 2 tables are common in some infectious disease studies and two‐step treatment studies in which one of the comparative measures of interest is the risk ratio (RR). This paper investigates two‐stage tests of whether K RRs are homogeneous and whether the common RR equals an arbitrary constant. On the assumption that the K RRs are equal, this paper proposes four asymptotic test statistics: the Wald‐type, the logarithmic‐transformation‐based, the score‐type and the likelihood ratio statistics to test whether the common RR equals a prespecified value. Sample size formulae based on the hypothesis testing method and the confidence interval method are proposed for the second stage of the test. Simulation results show that sample sizes based on the score‐type test and the logarithmic‐transformation‐based test are more accurate in achieving the predesigned power than those based on the Wald‐type test. The score‐type test performs best of the four tests in terms of type I error rate. A real example is used to illustrate the proposed methods.
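To fix ideas, the logarithmic‐transformation‐based statistic has a simple textbook form in the plain two‐independent‐sample case, sketched below; note the paper's versions are adapted to incomplete correlated tables, so this is only a simplified illustration with hypothetical names.

```python
import math

def log_rr_wald_test(x1, n1, x2, n2, rr0=1.0):
    """Log-transformation-based Wald test of H0: RR = rr0 for two
    independent binomial samples (x1/n1 vs x2/n2).
    Var(log RR_hat) is estimated by (1-p1)/x1 + (1-p2)/x2."""
    p1, p2 = x1 / n1, x2 / n2
    rr_hat = p1 / p2
    se = math.sqrt((1 - p1) / x1 + (1 - p2) / x2)
    z = (math.log(rr_hat) - math.log(rr0)) / se
    return rr_hat, z

# Hypothetical data: 30/100 events vs 15/100 events, testing RR = 1
rr_hat, z = log_rr_wald_test(30, 100, 15, 100)
```

Comparing |z| with the standard normal quantile gives the first‐stage homogeneity‐style screening; the paper's score‐type version replaces the plug‐in variance with one evaluated under the null.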

12.
In toxicity studies, model mis‐specification could lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross‐validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible and this may hinder its application. In this paper, we recommend to utilize the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first data set comes from a study of dietary fortification with carbonyl iron in which the Bayes factor and the cross‐validation are used to determine the number of sub‐populations in a mixture normal model. The second example involves a developmental toxicity study in which the selection of dose–response functions in a beta‐binomial model is explored.
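The Schwarz‐criterion approximation mentioned above is straightforward to compute: BIC = −2 log L + k log n, and the difference of BIC values between two models, halved, approximates the log Bayes factor. A minimal sketch (the numeric log‐likelihoods in the example are made up for illustration):

```python
import math

def bic(loglik, k, n):
    """Schwarz criterion: -2 log-likelihood + k log n,
    with k free parameters and n observations."""
    return -2.0 * loglik + k * math.log(n)

def approx_log_bayes_factor(loglik1, k1, loglik2, k2, n):
    """BIC approximation to log BF_{12} (evidence for model 1 over model 2):
    (BIC_2 - BIC_1) / 2."""
    return (bic(loglik2, k2, n) - bic(loglik1, k1, n)) / 2.0

# Hypothetical fits on n = 50 observations: model 1 (3 parameters)
# attains log L = -100, model 2 (2 parameters) attains log L = -110.
log_bf = approx_log_bayes_factor(-100.0, 3, -110.0, 2, 50)
```

A positive `log_bf` favours model 1; the approximation error is O(1), which is why the paper recommends it only as a computationally simple surrogate for the exact Bayes factor.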

13.
    
Statistical theory aims to provide a foundation for studying the collection and interpretation of data, a foundation that does not depend on the particular details of the substantive field in which the data are being considered. This gives a systematic way to approach new problems, and a common language for summarising results; ideally, the foundations and common language ensure that statistical aspects of one study, or of several studies on closely related phenomena, can be broadly accessible. We discuss some principles of statistical inference, to outline how these are, or could be, used to inform the interpretation of results, and to provide a greater degree of coherence for the foundations of statistics.

14.
    
This paper discusses some simple practical advantages of Markov chain Monte Carlo (MCMC) methods in estimating entry and exit transition probabilities from repeated independent surveys. Simulated data are used to illustrate the usefulness of MCMC methods when the likelihood function has multiple local maxima. Actual data on the evaluation of an HIV prevention intervention program among drug users are used to demonstrate the advantage of using prior information to enhance parameter identification. The latter example also demonstrates an important strength of the MCMC approach, namely the ability to make inferences on arbitrary functions of model parameters.

15.
    
Two‐state models (working/failed or alive/dead) are widely used in reliability and survival analysis. In contrast, multi‐state stochastic processes provide a richer framework for modeling and analyzing the progression of a process from an initial to a terminal state, allowing incorporation of more details of the process mechanism. We review multi‐state models, focusing on time‐homogeneous semi‐Markov processes (SMPs), and then describe the statistical flowgraph framework, which comprises analysis methods and algorithms for computing quantities of interest such as the distribution of first passage times to a terminal state. These algorithms algebraically combine integral transforms of the waiting time distributions in each state and invert them to get the required results. The estimated transforms may be based on parametric distributions or on empirical distributions of sample transition data, which may be censored. The methods are illustrated with several applications.

16.
    
This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson-Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov Chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.
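For readers unfamiliar with the stick‐breaking form underlying these marginals, the following sketch draws the first k weights of the Dirichlet‐process case, w_j = v_j ∏_{l<j}(1 − v_l) with v_j ~ Beta(1, α). This illustrates only the static marginal, not the paper's time‐dependent recursive construction; names are illustrative.

```python
import numpy as np

def stick_breaking_weights(alpha, k, rng):
    """First k weights of a Dirichlet-process stick-breaking representation
    with concentration parameter alpha: v_j ~ Beta(1, alpha),
    w_j = v_j * prod_{l<j} (1 - v_l)."""
    v = rng.beta(1.0, alpha, size=k)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

rng = np.random.default_rng(2)
w = stick_breaking_weights(alpha=2.0, k=25, rng=rng)
```

The truncated weights sum to slightly less than one (the remainder is the unbroken stick); more general stick‐breaking priors such as the Poisson–Dirichlet process change the Beta parameters with the index j.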

17.
    
This article develops influence diagnostics for log‐Birnbaum–Saunders (LBS) regression models with censored data based on the case‐deletion model (CDM). One‐step approximations of the estimates in the CDM are given, and case‐deletion measures are obtained. Meanwhile, it is shown that the CDM is equivalent to the mean shift outlier model (MSOM) in LBS regression models, and an outlier test is presented based on the MSOM. Furthermore, we discuss a score test for homogeneity of the shape parameter in LBS regression models. Two numerical examples are given to illustrate our methodology, and the properties of the score test statistic are investigated through Monte Carlo simulations under different censoring percentages.

18.
    
The asymptotic approach and Fisher's exact approach have often been used for testing the association between two dichotomous variables. The asymptotic approach may be appropriate to use in large samples but is often criticized for being associated with unacceptably high actual type I error rates for small to medium sample sizes. Fisher's exact approach suffers from conservative type I error rates and low power. For these reasons, a number of exact unconditional approaches have been proposed, which have been seen to be generally more powerful than their exact conditional counterparts. We consider the traditional unconditional approach based on maximization and compare it to our presented approach, which is based on estimation and maximization. We extend the unconditional approach based on estimation and maximization to designs with the total sum fixed. The procedures based on the Pearson chi‐square, Yates's corrected, and likelihood ratio test statistics are evaluated with regard to actual type I error rates and powers. A real example is used to illustrate the various testing procedures. The unconditional approach based on estimation and maximization performs well, having an actual level much closer to the nominal level. The Pearson chi‐square and likelihood ratio test statistics work well with this efficient unconditional approach. This approach is generally more powerful than the other p‐value calculation methods in the scenarios considered.
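The contrast between the two classical approaches can be seen on a single 2 × 2 table: the Pearson chi‐square test uses a large‐sample reference distribution, while Fisher's test sums exact hypergeometric probabilities conditional on the margins. A self‐contained sketch (the table counts are made up for illustration; the paper's estimation‐and‐maximization approach is not reproduced here):

```python
from math import comb, erfc, sqrt

def pearson_chi2_p(a, b, c, d):
    """Pearson chi-square test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; p-value via the chi-square(1) upper tail,
    P(X > x) = erfc(sqrt(x/2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, erfc(sqrt(chi2 / 2.0))

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value: sum of hypergeometric probabilities
    of all tables (with the same margins) no more likely than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    def prob(x):
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    return sum(prob(x) for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical small-sample table: [[12, 5], [6, 14]]
chi2, p_asym = pearson_chi2_p(12, 5, 6, 14)
p_exact = fisher_exact_p(12, 5, 6, 14)
```

For this table the asymptotic p‐value is smaller than the exact conditional one, illustrating the conservatism of Fisher's test that motivates the unconditional approaches in the paper.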

19.
    
Permutation tests for serial independence using three different statistics based on empirical distributions are proposed. These tests are shown to be consistent under the alternative of m‐dependence and are all simple to perform in practice. A small simulation study demonstrates that the proposed tests have good power in small samples. The tests are then applied to Canadian gross domestic product (GDP) data, corroborating the random‐walk hypothesis of GDP growth.
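The permutation logic is easy to sketch: under serial independence, any reordering of the series is equally likely, so the observed test statistic can be referred to its distribution over random permutations. The toy version below uses the absolute lag‐1 autocorrelation as the statistic, a simple stand‐in for the paper's empirical‐distribution statistics; names and data are illustrative.

```python
import numpy as np

def perm_test_serial(x, n_perm=2000, seed=0):
    """Permutation p-value for lag-1 serial dependence.
    Statistic: absolute lag-1 sample autocorrelation. Under independence,
    permuting the series leaves its distribution unchanged."""
    rng = np.random.default_rng(seed)
    def stat(z):
        z = z - z.mean()
        return abs(np.sum(z[:-1] * z[1:]) / np.sum(z ** 2))
    t_obs = stat(x)
    count = sum(stat(rng.permutation(x)) >= t_obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)  # add-one p-value, always in (0, 1]

# Hypothetical AR(1) series with strong serial dependence
rng = np.random.default_rng(3)
e = rng.normal(size=200)
x = np.empty(200)
x[0] = e[0]
for t in range(1, 200):
    x[t] = 0.8 * x[t - 1] + e[t]
p_value = perm_test_serial(x)
```

Applied to differenced log GDP, a non‐small p‐value from such a test is what "corroborating the random‐walk hypothesis" means operationally.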

20.
    
The purpose of this study is to examine the relationship between high‐performance work systems (HPWS) and the work–family interface (i.e. work–family conflict (WFC) and work–family facilitation (WFF)) in a Chinese context. We used job autonomy and self‐efficacy as an underlying mechanism for describing the relationship between HPWS and the work–family interface. Using data from 152 HR managers and 1324 employees, we found that HPWS was positively associated with both job autonomy and self‐efficacy. We observed that self‐efficacy was an important mechanism to explain the relationship between HPWS and both WFF and WFC. We also observed that job autonomy mediated the relationship between HPWS and WFF, but its mediating effect between HPWS and WFC was not significant. One unique contribution of the study is that the authors extended the job demands–resources model to Chinese employees, confirming that self‐efficacy is an important mechanism linking HPWS with WFC and WFF. Practical implications and future research directions are discussed.

