Similar Literature
20 similar articles found
1.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
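A minimal numerical sketch of the "distribution estimator" idea (illustrative, not from the article): for a normal mean with known sigma, the confidence distribution is H_n(theta) = Phi((theta - xbar)/(sigma/sqrt(n))); its median recovers the point estimator, its quantiles recover confidence intervals, and its value at a point gives a one-sided p-value. All data below are simulated.

    import numpy as np
    from scipy.stats import norm

    # Simulated data; sigma treated as known for simplicity.
    rng = np.random.default_rng(0)
    sigma, n = 2.0, 50
    x = rng.normal(loc=1.0, scale=sigma, size=n)
    xbar, se = x.mean(), sigma / np.sqrt(n)

    def cd(theta):
        # Confidence distribution H_n(theta) = Phi((theta - xbar) / se).
        return norm.cdf((theta - xbar) / se)

    # Median of the CD = point estimator; its quantiles = confidence limits.
    ci = (xbar + norm.ppf(0.025) * se, xbar + norm.ppf(0.975) * se)
    print(f"CD median (point estimate): {xbar:.3f}")
    print(f"95% CI from CD quantiles: ({ci[0]:.3f}, {ci[1]:.3f})")
    print(f"H_n(0) = {cd(0.0):.4f}  # one-sided p-value for H0: theta <= 0")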

2.
Standard model‐based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit‐level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit‐level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set which is suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier, and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M‐quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as the other procedures in terms of bias, variability and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, our method and the Sinha–Rao method perform similarly, and both improve over the other methods. This dual (Bayes and frequentist) dominance should make our procedure attractive to all practitioners of small area estimation, Bayesian and frequentist alike.
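The following toy sketch illustrates only the outlier-modelling ingredient (a two-component normal mixture for the unit-level error), not the authors' full hierarchical Bayes machinery; the data, contamination rate and flagging rule are all hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    n = 500
    x = rng.uniform(0, 10, n)
    e = rng.normal(0, 1, n)
    out = rng.random(n) < 0.05                  # hypothetical 5% contamination
    e[out] = rng.normal(0, 8, out.sum())        # wide outlier component
    y = 2.0 + 0.5 * x + e
    resid = y - (2.0 + 0.5 * x)                 # true coefficients, for brevity

    # Two-component normal mixture: the high-variance component absorbs outliers.
    gm = GaussianMixture(n_components=2, random_state=0).fit(resid.reshape(-1, 1))
    wide = int(np.argmax(gm.covariances_.ravel()))
    p_out = gm.predict_proba(resid.reshape(-1, 1))[:, wide]
    print("units flagged as outliers:", int(np.sum(p_out > 0.5)),
          "out of", int(out.sum()), "contaminated")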

3.
It is very common in applied frequentist ("classical") statistics to carry out a preliminary statistical (i.e. data-based) model selection, for example by using preliminary hypothesis tests or minimizing AIC. This is usually followed by the inference of interest, using the same data, based on the assumption that the selected model had been given to us a priori. This assumption is false and can lead to inaccurate and misleading inference. We consider the important case in which the inference of interest is a confidence region. We review the literature showing that the resulting confidence regions typically have very poor coverage properties. We also briefly review the closely related literature on the coverage properties of prediction intervals after preliminary statistical model selection. A possible motivation for preliminary statistical model selection is a wish to utilize uncertain prior information in the inference of interest. We review the literature in which the aim is to utilize uncertain prior information directly in the construction of confidence regions, without the intermediate step of a preliminary statistical model selection, and we point to this aim as a direction for future research.
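A toy Monte Carlo sketch of the phenomenon (not drawn from the reviewed papers): a naive 95% confidence interval for a slope, computed after pretesting whether to drop a correlated nuisance regressor, can undercover markedly.

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps, beta, gamma = 50, 5000, 1.0, 0.3   # modest nuisance effect gamma
    cover = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        z = 0.9 * x + np.sqrt(1 - 0.81) * rng.normal(size=n)   # corr(x, z) = 0.9
        y = beta * x + gamma * z + rng.normal(size=n)
        X = np.column_stack([x, z])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        s2 = np.sum((y - X @ b) ** 2) / (n - 2)
        XtX_inv = np.linalg.inv(X.T @ X)
        t_z = b[1] / np.sqrt(s2 * XtX_inv[1, 1])
        if abs(t_z) < 1.96:                     # pretest drops z: fit y ~ x
            bx = x @ y / (x @ x)
            se = np.sqrt(np.sum((y - bx * x) ** 2) / (n - 1) / (x @ x))
        else:                                    # pretest keeps z
            bx, se = b[0], np.sqrt(s2 * XtX_inv[0, 0])
        cover += (bx - 1.96 * se <= beta <= bx + 1.96 * se)
    print(f"coverage of the naive 95% CI after pretest: {cover / reps:.3f}")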

4.
Zellner (1976) proposed a regression model in which the data vector (or the error vector) is represented as a realization from the multivariate Student t distribution. This model has attracted considerable attention because it seems to broaden the usual Gaussian assumption to allow for heavier-tailed error distributions. A number of results in the literature indicate that the standard inference procedures for the Gaussian model remain appropriate under the broader distributional assumption, leading to claims of robustness of the standard methods. We show that, although mathematically the two models are different, for purposes of statistical inference they are indistinguishable. The empirical implications of the multivariate t model are precisely the same as those of the Gaussian model. Hence the suggestion of a broader distributional representation of the data is spurious, and the claims of robustness are misleading. These conclusions are reached from both frequentist and Bayesian perspectives.
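The indistinguishability can be seen directly: a Zellner-type multivariate t error vector equals a Gaussian error vector times a single random scale, and the usual regression t-statistic is invariant to that scale. The sketch below (with illustrative settings) verifies that the statistic is identical draw by draw under the two models.

    import numpy as np

    rng = np.random.default_rng(3)
    n, nu, reps = 30, 5, 2000                    # illustrative settings
    x = rng.normal(size=n)                       # fixed regressor

    def t_stat(y):
        b = x @ y / (x @ x)
        s2 = np.sum((y - b * x) ** 2) / (n - 1)
        return b / np.sqrt(s2 / (x @ x))

    t_gauss = np.empty(reps)
    t_mt = np.empty(reps)
    for r in range(reps):
        e = rng.normal(size=n)                   # Gaussian errors (beta = 0)
        t_gauss[r] = t_stat(e)
        w = np.sqrt(nu / rng.chisquare(nu))      # one shared scale per data set
        t_mt[r] = t_stat(w * e)                  # multivariate-t errors
    print("t-statistics identical draw by draw:", np.allclose(t_gauss, t_mt))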

5.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2 × 2 case to the R × C case. As in the 2 × 2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

6.
Subsampling high frequency data
The main contribution of this paper is to propose a novel way of conducting inference for an important general class of estimators that includes many estimators of integrated volatility. A subsampling scheme is introduced that consistently estimates the asymptotic variance for an estimator, thereby facilitating inference and the construction of valid confidence intervals. The new method does not rely on the exact form of the asymptotic variance, which is useful when the latter is of complicated form. The method is applied to the volatility estimator of Aït-Sahalia et al. (2011) in the presence of autocorrelated and heteroscedastic market microstructure noise.
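For intuition, here is a generic subsampling sketch in the spirit of such schemes (it is not the paper's estimator, which is designed for microstructure noise): the dispersion of a statistic recomputed on non-overlapping blocks, suitably rescaled, estimates the variance of the full-sample statistic. The data are toy constant-volatility returns.

    import numpy as np

    rng = np.random.default_rng(4)
    n, sigma = 23400, 0.01                       # one day of 1-second returns
    r = rng.normal(0, sigma / np.sqrt(n), n)     # toy constant-volatility returns

    rv_full = np.sum(r ** 2)                     # realized variance, full sample
    B, m = 30, n // 30                           # 30 blocks of m observations
    # Each block RV, rescaled by n/m, estimates the full-day quantity.
    rv_blocks = np.array([np.sum(r[i*m:(i+1)*m] ** 2) * (n / m) for i in range(B)])
    # Block estimates use m obs instead of n, so shrink their variance by m/n.
    var_hat = np.var(rv_blocks, ddof=1) * (m / n)
    half = 1.96 * np.sqrt(var_hat)
    print(f"RV = {rv_full:.6f}, 95% CI ~ ({rv_full - half:.6f}, {rv_full + half:.6f})")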

7.
Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions, the two types of probability put forward are utilised to justify a method of inference in which betting preferences are revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference, where the plausibility of different values of a parameter θ based on the data x is measured by the quantity q(θ) = l(x, θ)w(θ), where l(x, θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that, when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example.
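A minimal sketch of the plausibility measure q(θ) = l(x, θ)w(θ) for binomial data; the data and the weight function below are hypothetical illustrations, and no normalisation to a probability density is attempted since q is generally non-additive.

    import numpy as np
    from scipy.stats import binom

    n_trials, successes = 20, 14                 # hypothetical data x
    theta = np.linspace(0.01, 0.99, 99)
    lik = binom.pmf(successes, n_trials, theta)  # l(x, theta)
    w = 4 * theta * (1 - theta)                  # illustrative weight function
    q = lik * w                                  # q(theta) = l(x, theta) w(theta)
    print(f"most plausible theta: {theta[np.argmax(q)]:.2f}")
    i = int(np.argmin(np.abs(theta - 0.5)))
    print(f"q(0.5) relative to max q: {q[i] / q.max():.3f}")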

8.
Receiver operating characteristic (ROC) curves are widely used as a measure of accuracy of diagnostic tests and can be summarised using the area under the ROC curve (AUC). Often it is useful to construct a confidence interval for the AUC; however, because there are a number of proposed methods to measure the variance of the AUC, there are many different resulting methods for constructing these intervals. In this article, we compare different methods of constructing Wald‐type confidence intervals in the presence of missing data where the missingness mechanism is ignorable. We find that constructing confidence intervals using multiple imputation based on logistic regression gives the most robust coverage probability, and that the choice of confidence interval method is then less important. However, when the missingness rate is less severe (e.g. less than 70%), we recommend using Newcombe's Wald method for constructing confidence intervals, along with multiple imputation using predictive mean matching.
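For concreteness, a complete-data sketch of one Wald-type interval, using the Mann–Whitney AUC estimate with the Hanley–McNeil (1982) variance; the article's multiple-imputation step and its comparison of variance estimators are not reproduced, and the scores are simulated.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(5)
    pos = rng.normal(1.0, 1.0, 60)               # scores for diseased subjects
    neg = rng.normal(0.0, 1.0, 80)               # scores for healthy subjects
    n1, n0 = len(pos), len(neg)

    u = mannwhitneyu(pos, neg, alternative="two-sided").statistic
    auc = u / (n1 * n0)                          # Mann-Whitney AUC estimate
    # Hanley-McNeil (1982) variance approximation.
    q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
    var = (auc*(1-auc) + (n1-1)*(q1-auc**2) + (n0-1)*(q2-auc**2)) / (n1*n0)
    half = 1.96 * np.sqrt(var)
    print(f"AUC = {auc:.3f}, Wald 95% CI = ({auc-half:.3f}, {auc+half:.3f})")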

9.
The Language of the English Biometric School
This paper considers the language devised by Karl Pearson and his associates for discussing distributions, populations and samples, the basic language for frequentist inference. The original language—some of which is still in use—is described, along with the changes it underwent under the influence of R.A. Fisher and of Russian and American mathematicians. The period covered is roughly 1890–1950.

10.
We describe exact inference based on group-invariance assumptions that specify various forms of symmetry in the distribution of a disturbance vector in a general nonlinear model. It is shown that such mild assumptions can be equivalently formulated in terms of exact confidence sets for the parameters of the functional form. When applied to the linear model, this exact inference provides a unified approach to a variety of parametric and distribution-free tests. In particular, we consider exact instrumental variable inference, based on symmetry assumptions. The unboundedness of exact confidence sets is related to the power to reject a hypothesis of underidentification. In a multivariate instrumental variables context, generalizations of Anderson–Rubin confidence sets are considered.
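A sketch of one concrete group-invariance instance, sign symmetry (a special case rather than the paper's general framework): under H0: beta = b0 with errors symmetric about zero, flipping residual signs leaves the distribution invariant, so an exact Monte Carlo p-value is available, and inverting the test over a grid yields a confidence set. Model and statistic below are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 40
    x = rng.normal(size=n)
    y = 1.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed symmetric errors

    def p_value(b0, draws=999):
        e0 = y - b0 * x                          # residuals under H0: beta = b0
        t_obs = abs(x @ e0)
        signs = rng.choice([-1.0, 1.0], size=(draws, n))
        t_null = np.abs((signs * e0) @ x)        # sign-flip reference draws
        return (1 + np.sum(t_null >= t_obs)) / (draws + 1)

    grid = np.linspace(0.0, 2.0, 81)
    conf_set = [b for b in grid if p_value(b) > 0.05]
    print(f"exact 95% confidence set ~ [{min(conf_set):.2f}, {max(conf_set):.2f}]")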

11.
The problem of scarcity is often talked about, but it is rarely clearly defined. In this article, two different views of scarcity are outlined: absolute and relative scarcity. These two are respectively exemplified by Malthus's and Robbins's views of scarcity. However, both of these views tend to naturalize and universalize scarcity, and thus overlook abundance and sufficiency, which are important states in the social provisioning process. It is argued that this is due to ignorance of the sociocultural causal underpinnings of scarcity, abundance, and sufficiency (SAS). The introduction of these mechanisms enables further conceptual differentiation of SAS (e.g., quasi‐, artificial‐, natural‐).

12.
When two surveys carried out separately in the same population have common variables, it might be desirable to adjust each survey's weights so that they give equal estimates for the common variables. This problem has been studied extensively and is often referred to as alignment or numerical consistency. We develop a design-based empirical likelihood approach for alignment and for the estimation of complex parameters defined by estimating equations. We focus on the general case in which a single set of adjusted weights, applicable to both common and non-common variables, is produced for each survey. The main contribution of the paper is to show that the empirical log-likelihood ratio statistic is pivotal in the presence of alignment constraints. This pivotal statistic can be used to test hypotheses and derive confidence regions. Hence, the proposed empirical likelihood approach for alignment possesses the self-normalisation property under a design-based approach. The proposed approach accommodates large sampling fractions, stratification and population-level auxiliary information. It is particularly well suited for inference about small domains when data are skewed, and it includes implicit adjustments when the samples differ considerably in size. The confidence regions are constructed without the need for variance estimates, joint-inclusion probabilities, linearisation or re-sampling.

13.
Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
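A simplified sketch of the parametric-bootstrap idea for a mixing proportion in a two-component normal mixture, using off-the-shelf EM from sklearn; the paper's PLR-specific construction and boundary adjustments are more involved, and the data are simulated.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(0, 1, 700),
                        rng.normal(3, 1, 300)]).reshape(-1, 1)

    def fit_pi(data):
        # EM fit; report the mixing probability of the smaller component.
        gm = GaussianMixture(n_components=2, random_state=0, n_init=3).fit(data)
        return gm.weights_.min(), gm

    pi_hat, gm_hat = fit_pi(x)
    boot = []
    for _ in range(200):                         # parametric bootstrap replicates
        xb, _ = gm_hat.sample(len(x))            # resample from the fitted mixture
        boot.append(fit_pi(xb)[0])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"pi_hat = {pi_hat:.3f}, bootstrap 95% CI = ({lo:.3f}, {hi:.3f})")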

14.
The Rule of Three, its Variants and Extensions
The Rule of Three (R3) states that 3/n is an approximate 95% upper limit for the binomial parameter when there are no events in n trials. This rule is based on the one-sided Clopper–Pearson exact limit, but it is shown that none of the other popular frequentist methods leads to it. It can be seen as a special case of a Bayesian R3, but it is shown that among common choices for a non-informative prior, only the Bayes–Laplace and Zellner priors conform with it. R3 has also incorrectly been extended to treat 3 as a "reasonable" upper limit for the number of events in a future experiment of the same (large) size, when, instead, it applies to the binomial mean. In Bayesian estimation, such a limit should follow from the posterior predictive distribution. This method seems to give more natural results than the method of prediction limits (though, when based on the Bayes–Laplace prior, it technically converges with that method), which indicates between 87.5% and 93.75% confidence for this extended R3. These results shed light on R3 in general, suggest an extended Rule of Four for a number of events, provide a unique comparison of Bayesian and frequentist limits, and support the choice of the Bayes–Laplace prior among non-informative contenders.
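The rule is easy to verify numerically: with zero events in n trials the exact one-sided Clopper–Pearson 95% upper limit solves (1 - p)^n = 0.05, i.e. p = 1 - 0.05^(1/n) ≈ 3/n, and the Bayes–Laplace posterior is Beta(1, n + 1). A short check (illustrative code, not from the article):

    import numpy as np
    from scipy.stats import beta

    for n in (30, 100, 300, 1000):
        exact = 1 - 0.05 ** (1 / n)              # Clopper-Pearson upper limit
        bayes = beta.ppf(0.95, 1, n + 1)         # Bayes-Laplace posterior limit
        print(f"n={n:5d}  3/n={3/n:.5f}  exact={exact:.5f}  "
              f"Bayes-Laplace={bayes:.5f}")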

15.
In impulse response analysis, estimation uncertainty is typically displayed by constructing bands around estimated impulse response functions. In a frequentist framework, such bands are often obtained by simply connecting individual confidence intervals based on the joint asymptotic distribution, possibly constructed with bootstrap methods. Bands of this kind are known to be too narrow and have a joint coverage probability lower than the desired one. If instead the Wald statistic is used and the joint bootstrap distribution of the impulse response coefficient estimators is taken into account and mapped into the band, it is shown that such a band is typically rather conservative. It is argued that, by using the Bonferroni method, a band can often be obtained which is smaller than the Wald band.
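A small simulation sketch of the contrast (the draws below stand in for bootstrap impulse-response estimates; dimensions and covariances are illustrative): a pointwise band connects per-horizon 95% intervals and jointly undercovers, while a Bonferroni band uses level alpha/H per horizon.

    import numpy as np

    rng = np.random.default_rng(8)
    H, B, alpha = 12, 2000, 0.05
    true = 0.8 ** np.arange(H)                   # toy impulse response path
    # Draws correlated across horizons, as bootstrap estimates would be.
    draws = true + rng.multivariate_normal(
        np.zeros(H), 0.02 * np.eye(H) + 0.01, size=B)

    def band(level):
        lo = np.quantile(draws, level / 2, axis=0)
        hi = np.quantile(draws, 1 - level / 2, axis=0)
        return lo, hi

    def joint_coverage(lo, hi):
        return np.mean(np.all((draws >= lo) & (draws <= hi), axis=1))

    for name, level in [("pointwise", alpha), ("Bonferroni", alpha / H)]:
        lo, hi = band(level)
        print(f"{name:10s} band: joint coverage = {joint_coverage(lo, hi):.3f}")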

16.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether they are correctly specified, in the sense that the best model with the fewest parameters is selected with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but not for selecting among nonnested and possibly overlapping models. These findings have an important bearing for applied researchers who rely on model selection tools for the empirical investigation of model predictions.
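The posterior-odds/BIC link driving such results can be illustrated with the standard large-sample approximation P(M_i | y) ∝ exp(-BIC_i / 2) (a textbook device, not the paper's derivation); the data below are simulated.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 200
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 0.5 * x1 + rng.normal(size=n)      # x2 is irrelevant

    def bic(cols):
        # Gaussian-regression BIC: n log(RSS/n) + k log(n).
        X = np.column_stack([np.ones(n)] + cols)
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ b) ** 2)
        return n * np.log(rss / n) + X.shape[1] * np.log(n)

    b1, b2 = bic([x1]), bic([x1, x2])
    w = np.exp(-0.5 * np.array([b1, b2]))
    w /= w.sum()                                 # approximate posterior odds
    print(f"approx P(M1|y) = {w[0]:.3f}, P(M2|y) = {w[1]:.3f}")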

17.
This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that, under mild smoothness conditions on the error distribution, it is possible to develop tests which are "nearly" efficient in the sense of Andrews et al. (2006) when identification is weak, and consistent and asymptotically optimal when identification is strong. In addition, an estimator is presented which can be used in the usual way to construct valid (indeed, optimal) confidence intervals when identification is strong. The estimator is of the two stage least squares variety and is asymptotically efficient under strong identification whether or not the errors are normal.
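As background, a minimal sketch of the two stage least squares estimator the paper builds on, beta_hat = (X'P_Z X)^(-1) X'P_Z y, under strong identification (illustrative simulated data; the paper's weak-identification tests are not shown):

    import numpy as np

    rng = np.random.default_rng(10)
    n, beta = 500, 1.0
    z = rng.normal(size=(n, 2))                  # two instruments
    u = rng.normal(size=n)                       # structural error
    x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)  # endogenous x
    y = beta * x + u

    Pz_x = z @ np.linalg.solve(z.T @ z, z.T @ x) # first-stage fitted values
    b_2sls = (Pz_x @ y) / (Pz_x @ x)             # 2SLS slope estimate
    b_ols = (x @ y) / (x @ x)                    # OLS, biased by endogeneity
    print(f"OLS = {b_ols:.3f} (biased), 2SLS = {b_2sls:.3f} (consistent)")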

18.
This paper studies the relationship between devaluation and default risks during Argentina's convertibility regime. Before default and devaluation occurred, a harder variant of the currency regime was under discussion. An often‐suggested argument among the supporters of dollarization was that the probability of default could have been reduced by removing fears of devaluation. For this to be true, default risk must be dependent on the devaluation risk. Long‐run relationships and ‘exogeneity’ are examined using a ‘cointegrating vector’ system approach. The results show that only devaluation risk can be modelled on default risk. No empirical evidence is found in favour of dollarization. Moreover, these conclusions are maintained when the information set is expanded to include the Latin American risk and Argentine macroeconomic variables.

19.
Based on the attributions given in the earnings forecasts of A-share listed companies over the first to fourth quarters of 2020, this study finds that the probability of a company attributing performance changes to the COVID-19 pandemic declined from quarter to quarter, that bad news was significantly more likely than good news to be attributed to the pandemic, and that this difference did not change significantly across quarters. Earnings forecast attributions reflect the combined influence of managers' self-serving attribution and the pandemic itself, and in certain contexts investors are able to identify self-serving attributions. The findings illustrate the effectiveness of China's response to COVID-19 from the perspective of micro-level economic agents and stakeholder perceptions, and provide a reference for regulators seeking to curb manipulative disclosure behaviour under major exogenous shocks.

20.
In the analysis of clustered and longitudinal data that include a covariate varying both between and within clusters, a Hausman pretest is commonly used to decide whether subsequent inference is made using the linear random intercept model or the fixed effects model. We assess the effect of this pretest on the coverage probability and expected length of a confidence interval for the slope, conditional on the observed values of the covariate. This assessment has the advantages that it (i) relates to the values of this covariate at hand, (ii) is valid irrespective of how this covariate is generated, (iii) uses exact finite-sample results, and (iv) is determined by the values of this covariate and only two unknown parameters. For two real data sets, our conditional analysis shows that the confidence interval constructed after a Hausman pretest should not be used.
