Similar Articles
20 similar articles found (search time: 0 ms)
1.
Fisher and "Student" quarreled in the early days of statistics about the design of experiments meant to measure the difference in yield between two breeds of corn. This discussion comes down to randomization versus model building. More than half a century has passed since, but the differing views remain. In this paper the discussion is recast in terms of artificial randomization and natural randomization, the latter being what remains after appropriate modeling. The Bayesian position is also discussed. An example framed in terms of the old corn-breeding dispute shows that a simple robust model may lead to inference and experimental design that far outperform the inference from randomized experiments. Finally, similar possibilities are suggested for statistical auditing.

2.
Zellner (1976) proposed a regression model in which the data vector (or the error vector) is represented as a realization from the multivariate Student t distribution. This model has attracted considerable attention because it seems to broaden the usual Gaussian assumption to allow for heavier-tailed error distributions. A number of results in the literature indicate that the standard inference procedures for the Gaussian model remain appropriate under the broader distributional assumption, leading to claims of robustness of the standard methods. We show that, although mathematically the two models are different, for purposes of statistical inference they are indistinguishable. The empirical implications of the multivariate t model are precisely the same as those of the Gaussian model. Hence the suggestion of a broader distributional representation of the data is spurious, and the claims of robustness are misleading. These conclusions are reached from both frequentist and Bayesian perspectives.
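The indistinguishability claim rests on the scale-mixture representation of the multivariate t: the whole data vector shares a single unobserved scale, so conditional on that scale it is exactly Gaussian. A minimal sketch of drawing one such vector (the dimension, degrees of freedom and covariance are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rmvt(mu, sigma, df, rng):
    """One draw from a multivariate Student t via its Gaussian scale mixture:
    y = mu + z / sqrt(w), with w ~ Gamma(df/2, rate=df/2) and z ~ N(0, sigma).
    Conditional on w, y is Gaussian with covariance sigma / w."""
    w = rng.gamma(shape=df / 2.0, scale=2.0 / df)  # E[w] = 1
    z = rng.multivariate_normal(np.zeros(len(mu)), sigma)
    return mu + z / np.sqrt(w)

y = rmvt(mu=np.zeros(3), sigma=np.eye(3), df=5.0, rng=rng)
```

Because w multiplies every coordinate at once, a single realization carries no information separating the t model from a Gaussian model with an unknown common variance scale.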

3.
Making statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is so attractive and useful, however, that it should be pursued. Our approach is in line with Wald's theory of statistical decision functions and with Lehmann's books on hypothesis testing and point estimation: loss functions are defined, risk functions are studied, unbiasedness and equivariance restrictions are imposed, etc. A central theme is that the loss function should be 'proper'. This fundamental concept has been explored by meteorologists, psychometricians, Bayesian statisticians, and others. The paper should be regarded as an attempt to reconcile various schools of statisticians. By accepting what we regard as good and useful in the various approaches, we try to develop a nondogmatic approach.

4.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used in place of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to give a direct proof of the estimator's unbiasedness. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes in which the likelihood is estimated in an unbiased manner.
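To make the estimated-likelihood idea concrete, here is a minimal bootstrap particle filter (the plain filter, not the paper's fully adapted auxiliary filter) for a hypothetical AR(1)-plus-noise model; all parameter values are illustrative assumptions:

```python
import numpy as np

def particle_filter_loglik(y, n_particles, phi, sigma_x, sigma_y, rng):
    """Bootstrap particle filter estimate of log p(y) for the model
    x_t = phi*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*eta_t.
    The estimate is unbiased on the likelihood (not log-likelihood) scale."""
    x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - phi**2), size=n_particles)
    loglik = 0.0
    for obs in y:
        x = phi * x + sigma_x * rng.normal(size=n_particles)   # propagate
        logw = -0.5 * ((obs - x) / sigma_y) ** 2 - np.log(sigma_y * np.sqrt(2.0 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                         # likelihood increment
        x = rng.choice(x, size=n_particles, p=w / w.sum())     # multinomial resampling
    return loglik

rng = np.random.default_rng(1)
phi, sx, sy = 0.9, 0.5, 1.0
state, y = 0.0, []
for _ in range(50):                      # simulate illustrative data
    state = phi * state + sx * rng.normal()
    y.append(state + sy * rng.normal())
ll = particle_filter_loglik(np.array(y), n_particles=200,
                            phi=phi, sigma_x=sx, sigma_y=sy, rng=rng)
```

Increasing `n_particles` reduces the variance of `ll`, which is exactly the trade-off the paper's guidelines address.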

5.
Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either directly, on deep parameters' values (‘microprior’), or indirectly, on macroeconomic indicators such as moments of observable variables (‘macroprior’). We introduce a non-parametric macroprior elicited from impulse response functions and assess its performance in shaping posterior estimates. We find that using a macroprior can lead to substantially different posterior estimates. Probing into the details of this result, we show that model misspecification is likely responsible for it. In addition, we assess to what extent the use of macropriors is hampered by the need to calibrate some hyperparameters.

6.
张丽丽 《价值工程》2011, 30(30): 211.
Using two concrete worked examples, this paper shows that when the possible values of a continuous population do not cover the whole line (-∞, +∞), solving point-estimation problems with the first definition of the likelihood function lets students not only master the method of maximum likelihood easily but also deepen their understanding of the statistical idea that the sample comes from the population and that estimation cannot be separated from the sample.
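A standard instance of the situation described is the uniform population U(0, θ): the support depends on the parameter, the likelihood is θ^(−n) on {θ ≥ max xᵢ} and 0 otherwise, and maximizing it needs no calculus. A sketch (the true parameter and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 4.0
x = rng.uniform(0.0, theta_true, size=200)

# L(theta) = theta**(-n) for theta >= max(x) and 0 otherwise: it decreases in
# theta on the admissible set, so it is maximized at the sample maximum.
theta_hat = x.max()
```

The estimate is always computed from the sample and never exceeds the true parameter from below by much for large n, which is the pedagogical point.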

7.
The use of stochastic models and performance measures for the analysis of real-life queuing scenarios rests on the fundamental premise that parameter values are known. This is rarely the case: more often than not, parameters are unknown and must be estimated. This paper presents techniques for doing so from a Bayesian perspective for the M/M/1 queuing model. Several closed-form expressions for posterior inference and prediction are presented, which can be readily implemented using standard spreadsheet tools. Previous work in this direction suffered from the non-existence of posterior moments; a way out is suggested. Interval estimates and tests of hypotheses on performance measures are also presented.
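The closed-form flavour of such results comes from gamma-exponential conjugacy: with Gamma priors on the arrival rate λ and service rate μ, the posteriors are again Gamma, and quantities such as the posterior probability that the traffic intensity ρ = λ/μ is below 1 follow by simulation. A sketch with illustrative priors and simulated data (not the paper's own expressions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observed data: interarrival and service times of an M/M/1 queue.
interarrivals = rng.exponential(1.0 / 0.8, size=50)   # true arrival rate 0.8
services = rng.exponential(1.0 / 1.2, size=50)        # true service rate 1.2

# Gamma(a, b) priors are conjugate for exponential data:
# posterior rate ~ Gamma(a + n, b + sum of observed times).
a, b = 1.0, 1.0
lam = rng.gamma(a + len(interarrivals), 1.0 / (b + interarrivals.sum()), size=10_000)
mu = rng.gamma(a + len(services), 1.0 / (b + services.sum()), size=10_000)

rho = lam / mu                       # posterior draws of the traffic intensity
p_stable = (rho < 1.0).mean()        # posterior probability the queue is stable
```

Note that performance measures such as the mean queue length 1/(1 − ρ) can fail to have posterior moments because ρ can exceed 1 with positive posterior probability, which is the difficulty the abstract alludes to.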

8.
We compare real-time density forecasts for the euro area using three DSGE models. The benchmark is the Smets and Wouters model, and its forecasts of real GDP growth and inflation are compared with those from two extensions. The first adds financial frictions and expands the observables to include a measure of the external finance premium. The second allows for the extensive labor-market margin and adds the unemployment rate to the observables. The main question that we address is whether these extensions improve the density forecasts of real GDP and inflation, and their joint forecasts, up to an eight-quarter horizon. We find that adding financial frictions leads to a deterioration in the forecasts, with the exception of longer-term inflation forecasts and the period around the Great Recession. The labor-market extension weakly improves the medium- to longer-term real GDP growth forecasts and the shorter- to medium-term inflation forecasts relative to the benchmark model.

9.
The current context of the “significance test controversy” is first briefly discussed. Experimental studies of how scientific researchers and applied statisticians use null hypothesis significance tests are then presented. The misuses of these tests are reinterpreted as judgmental adjustments that reveal researchers' requirements of statistical inference. Lastly, alternative methods are considered, which naturally raises the question: won't the Bayesian choice be unavoidable?

10.
A well-known model in software reliability theory is that of Littlewood (1980). The three parameters in this model are usually estimated by the method of maximum likelihood. The system of likelihood equations can have more than one solution; only one of them, however, will be consistent. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. We believe that the ideas and methods developed in this paper could also be of interest to statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood, the paper provides a 'practical case', indicating how complex matters can become when only three parameters are involved. Moreover, readers not familiar with counting-process theory and software reliability are given a first introduction.

11.
Croston’s method is generally viewed as superior to exponential smoothing when demand is intermittent, but it has the drawbacks of bias and an inability to deal with obsolescence, where the demand for an item ceases altogether. Several variants have been reported, some of which are unbiased on certain types of demand, but only one recent variant addresses the problem of obsolescence. We describe a new hybrid of Croston’s method and Bayesian inference called Hyperbolic-Exponential Smoothing, which is unbiased on non-intermittent and stochastic intermittent demand, decays hyperbolically when obsolescence occurs, and performs well in experiments.
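Croston's original procedure, which the variants modify, separately smooths the nonzero demand sizes and the intervals between them and forecasts their ratio. A minimal sketch (the smoothing constant is an illustrative choice, and this plain version still has the bias the abstract mentions):

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponentially smooth the
    nonzero demand sizes (z) and the inter-demand intervals (p) separately;
    the per-period forecast is z / p, updated only when demand occurs."""
    z, p = None, None   # smoothed size and interval (None until first demand)
    q = 1               # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        forecasts.append(0.0 if z is None else z / p)
        if d > 0:
            if z is None:
                z, p = float(d), float(q)        # initialize at first demand
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecasts
```

On the intermittent series [0, 5, 0, 0, 5] the forecast after the first demand is 5/2 = 2.5 units per period; note that z and p are never updated during a run of zeros, which is why the plain method cannot react to obsolescence.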

12.
王琦, 史宪铭, 郭勇, 王广彦 《价值工程》2010, 29(31): 154–155.
Addressing the experimental evaluation of blast-wave damage from thermobaric warheads, this paper formulates the evaluation as a problem of statistical inference about the distribution followed by the damage parameters and about that distribution's parameters. It then outlines a small-sample approach to the test-and-evaluation of thermobaric warhead damage, focusing on statistical inference based on Bayes theory: two methods for obtaining the prior distribution are introduced, Bayesian estimation theory is examined in depth, and concrete procedures for obtaining point and interval estimates are described.

13.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2 × 2 case to the R × C case. As in the 2 × 2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches that trade off computational intensity against statistical efficiency.

14.
Let X = (X_1, ..., X_n) be a sample from an unknown cumulative distribution function F defined on the real line. The problem of estimating F is considered using a decision-theoretic approach; no assumptions are imposed on the unknown function F. A general method of finding a minimax estimator d(t; X) of F under a loss function of general form is presented. The method of solution is based on converting the nonparametric problem of finding minimax estimators of a distribution function into the parametric problem of finding minimax estimators of the probability of success of a binomial distribution. The solution also uses the completeness of the class of monotone decision procedures in a monotone decision problem. Some special cases of the underlying problem are considered for the situation in which the loss function in the nonparametric problem is a weighted squared, LINEX, or weighted absolute error.
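The binomial reduction can be made concrete for squared-error loss: the minimax estimator of a binomial success probability is (X + √n/2)/(n + √n), and substituting the count of observations not exceeding t yields a pointwise estimator of F(t). A sketch under that specific loss (the paper's general loss functions give different forms):

```python
import numpy as np

def minimax_cdf(x, t):
    """Estimate F(t) by plugging the count of observations <= t into the
    minimax binomial estimator (X + sqrt(n)/2) / (n + sqrt(n)),
    which is minimax under squared-error loss."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    count = np.sum(x <= t)
    return (count + np.sqrt(n) / 2.0) / (n + np.sqrt(n))
```

Unlike the empirical distribution function, this estimator shrinks toward 1/2 and never returns exactly 0 or 1, the familiar bias-variance trade-off of minimax rules.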

15.
This work proposes a new Shewhart-type control chart for the Weibull percentile (i.e. the reliable life) as a practical example of a product obtained following the Data Technology (DT) approach. DT is briefly introduced as a new discipline defined apart from Information Technology (IT). Following this approach, specific Bayes estimators are selected from the literature and used to build the new chart. These estimators make it possible to improve the control by making use of any available kind of data, statistical or not. The operative steps of the DT approach are fully explained, and the results are illustrated by means of a real application.
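The monitored statistic, the Weibull reliable life, is just a percentile of the Weibull distribution: t_p = η(−ln(1 − p))^(1/β), where η is the scale and β the shape. A sketch with illustrative parameter values (the chart itself plots Bayes estimates of this quantity):

```python
import math

def weibull_percentile(eta, beta, p):
    """p-th percentile (reliable life) of a Weibull(scale=eta, shape=beta):
    solve F(t) = 1 - exp(-(t/eta)**beta) = p for t."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# Illustrative B10 life: the time by which 10% of units are expected to fail.
b10 = weibull_percentile(eta=1000.0, beta=2.0, p=0.10)
```

Charting a low percentile rather than the mean directly tracks the reliability requirement of interest.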

16.
In the context of regularly varying tails, we first analyze a generalization of the classical Hill estimator of a positive tail index, with members that are not asymptotically more efficient than the original. This leads us to propose alternative classical tail-index estimators that may perform asymptotically better than the Hill estimator. As the improvement is not really significant, we also propose generalized jackknife estimators based on any two members of these two classes. These generalized jackknife estimators are compared with the Hill estimator and other reduced-bias estimators available in the literature, both asymptotically and for finite samples, through Monte Carlo simulation. The finite-sample behaviour of the new reduced-bias estimators is also illustrated through a practical example from finance.
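The baseline for all these variants, the classical Hill estimator, averages the log-excesses of the k largest order statistics over the (k+1)-th largest. A sketch on a simulated exact-Pareto sample with true tail index 1/2 (the sample size and k are illustrative; on exact Pareto tails the Hill estimator has no bias to reduce):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index: the mean of
    log(X_{(n-i+1)} / X_{(n-k)}) over the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))
    return float(np.mean(np.log(xs[-k:]) - np.log(xs[-k - 1])))

rng = np.random.default_rng(4)
x = rng.pareto(2.0, size=5000) + 1.0   # exact Pareto sample, tail index 1/2
gamma_hat = hill_estimator(x, k=500)
```

On distributions that are only regularly varying (not exactly Pareto), the choice of k trades variance against the bias that the reduced-bias estimators in the abstract target.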

17.
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for combining prior information and/or uncertainties regarding the statistical model with a utility function that describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, focusing on the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search the design space for the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.

18.
Small area estimation (SAE) entails estimating characteristics of interest for domains, often geographical areas, in which there may be few or no samples available. SAE has a long history and a wide variety of methods have been suggested, from a bewildering range of philosophical standpoints. We describe design-based and model-based approaches and models that are specified at the area level and at the unit level, focusing on health applications and fully Bayesian spatial models. The use of auxiliary information is a key ingredient for successful inference when response data are sparse, and we discuss a number of approaches that allow the inclusion of covariate data. SAE for HIV prevalence, using data collected from a Demographic Health Survey in Malawi in 2015–2016, is used to illustrate a number of techniques. The potential use of SAE techniques for outcomes related to coronavirus disease 2019 is discussed.

19.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
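Extracting a conditional from an estimated joint mixture is analytic in the Gaussian case: each component conditions as a normal, and the mixture weights are reweighted by each component's marginal density at the conditioning value. A bivariate sketch (the paper handles general multivariate mixtures, with latent variables for discrete observables):

```python
import numpy as np

def conditional_mixture(weights, means, covs, x):
    """Parameters of the mixture for y | x when (x, y) follows a bivariate
    Gaussian mixture: reweight each component by its marginal density at x,
    and condition each component normal in the usual way."""
    new_w, new_mu, new_var = [], [], []
    for w, (mx, my), S in zip(weights, means, covs):
        sxx, sxy, syy = S[0][0], S[0][1], S[1][1]
        marginal = np.exp(-0.5 * (x - mx) ** 2 / sxx) / np.sqrt(2.0 * np.pi * sxx)
        new_w.append(w * marginal)
        new_mu.append(my + sxy / sxx * (x - mx))      # conditional mean
        new_var.append(syy - sxy ** 2 / sxx)          # conditional variance
    new_w = np.array(new_w) / np.sum(new_w)
    return new_w, np.array(new_mu), np.array(new_var)

w_c, mu_c, var_c = conditional_mixture(
    weights=[1.0], means=[(0.0, 0.0)], covs=[[[1.0, 0.5], [0.5, 1.0]]], x=1.0)
```

With a single component this reduces to ordinary Gaussian conditioning; with several, the x-dependent weights are what let the conditional mean and spread vary flexibly with x.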

20.
Dallas R. Wingo, Metrika, 1993, 40(1): 203–210.
This paper develops mathematical and computational methodology for fitting, by the method of maximum likelihood (ML), the Burr Type XII distribution to multiply (or progressively) censored life-test data. Mathematical expressions are given for approximating the asymptotic variances and covariances of the ML estimates (MLEs) of the parameters of the Burr Type XII distribution. A rigorous mathematical analysis investigates the existence and uniqueness of the MLEs for arbitrary sample data. The methodology is applied to progressively censored sample data arising in a life-test experiment.
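For the uncensored two-parameter case, the ML fit can be sketched directly: the Burr Type XII density is f(x) = c d x^(c−1)(1 + x^c)^(−(d+1)), and for fixed c the likelihood equation gives d in closed form, so a one-dimensional profile search suffices. A sketch on simulated uncensored data (the paper's progressively censored likelihood and asymptotic covariances require its fuller analysis; all settings below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
c_true, d_true, n = 2.0, 3.0, 2000

# Inverse-CDF simulation from F(x) = 1 - (1 + x**c)**(-d).
u = rng.uniform(size=n)
x = ((1.0 - u) ** (-1.0 / d_true) - 1.0) ** (1.0 / c_true)

logx = np.log(x)

def profile_loglik(c):
    """Log-likelihood with d profiled out: for fixed c the likelihood
    equation yields d_hat(c) = n / sum(log(1 + x**c))."""
    s = np.log1p(x ** c).sum()
    d = n / s
    return n * np.log(c) + n * np.log(d) + (c - 1.0) * logx.sum() - (d + 1.0) * s

grid = np.linspace(0.5, 4.0, 351)
c_hat = grid[np.argmax([profile_loglik(c) for c in grid])]
d_hat = n / np.log1p(x ** c_hat).sum()
```

The grid search is a deliberately simple stand-in for a proper root-finding step; the multiplicity of likelihood-equation solutions studied in the paper is exactly why a global view of the profile is useful.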
