Similar Documents
A total of 20 similar documents were found (search time: 93 ms).
1.
This article presents the empirical Bayes method for estimation of the transition probabilities of a generalized finite stationary Markov chain whose ith state is a multi-way contingency table. We use a log-linear model to describe the relationship between factors in each state. Prior knowledge about the main effects and interactions is described by a conjugate prior. Following the Bayesian paradigm, the Bayes and empirical Bayes estimators relative to various loss functions are obtained. These procedures are illustrated by a real example. Finally, asymptotic normality of the empirical Bayes estimators is established.
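As a rough illustration of conjugate Bayes estimation of transition probabilities (a simplified Dirichlet–multinomial sketch, not the paper's log-linear, contingency-table formulation; the function name and toy counts are hypothetical):

```python
import numpy as np

def bayes_transition_estimates(counts, prior=1.0):
    """Posterior-mean estimates of Markov transition probabilities.

    counts[i, j] = number of observed transitions from state i to state j.
    prior        = Dirichlet concentration added to every cell (1.0 = uniform).
    Each row of the transition matrix gets an independent Dirichlet prior,
    which is conjugate to the multinomial row counts.
    """
    counts = np.asarray(counts, dtype=float)
    alpha_post = counts + prior                                 # posterior Dirichlet parameters
    return alpha_post / alpha_post.sum(axis=1, keepdims=True)   # row-wise posterior means

# Toy transition counts for a 3-state chain.
counts = [[12, 3, 5],
          [ 4, 9, 2],
          [ 6, 1, 8]]
print(bayes_transition_estimates(counts))
```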

2.
p-Values are commonly transformed to lower bounds on Bayes factors, so-called minimum Bayes factors. For the linear model, a sample-size adjusted minimum Bayes factor over the class of g-priors on the regression coefficients has recently been proposed (Held & Ott, The American Statistician 70(4), 335–341, 2016). Here, we extend this methodology to logistic regression to obtain a sample-size adjusted minimum Bayes factor for 2 × 2 contingency tables. We then study the relationship between this minimum Bayes factor and two-sided p-values from Fisher's exact test, as well as less conservative alternatives, with a novel parametric regression approach. It turns out that for all p-values considered, the maximal evidence against the point null hypothesis is inversely related to the sample size. The same qualitative relationship is observed for minimum Bayes factors over the more general class of symmetric prior distributions. For the p-values from Fisher's exact test, the minimum Bayes factors do not, on average, tend to the large-sample bound as the sample size becomes large, but for the less conservative alternatives, the large-sample behaviour is as expected.
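For orientation, the classical minimum Bayes factor calibration of Sellke, Bayarri and Berger (2001), −e·p·ln(p), can be computed directly from a p-value; this is not the sample-size adjusted bound developed in the paper, and the helper below is only a sketch:

```python
import numpy as np

def min_bayes_factor(p):
    """Classical -e * p * ln(p) lower bound on the Bayes factor of H0 vs H1
    for a two-sided p-value p < 1/e (Sellke, Bayarri & Berger, 2001).
    Smaller values indicate stronger maximal evidence against H0."""
    p = np.asarray(p, dtype=float)
    return np.where(p < np.exp(-1.0), -np.e * p * np.log(p), 1.0)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:5.3f}  ->  minimum Bayes factor ~ {float(min_bayes_factor(p)):.3f}")
```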

3.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
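For reference, O'Hagan's (1995) fractional Bayes factor for comparing models M1 and M2 with (possibly improper) priors πi can be written as follows, in the usual notation with b the chosen likelihood fraction (often b = m0/n for a minimal training-sample size m0):

```latex
% Fractional Bayes factor of O'Hagan (1995): a fraction b of the likelihood
% "trains" the possibly improper priors \pi_i, and the remainder is used for
% model comparison.
\[
  q_i(b, x) \;=\;
  \frac{\int \pi_i(\theta_i)\, L_i(\theta_i; x)\, d\theta_i}
       {\int \pi_i(\theta_i)\, L_i(\theta_i; x)^{\,b}\, d\theta_i},
  \qquad
  B^{F}_{12}(b) \;=\; \frac{q_1(b, x)}{q_2(b, x)}.
\]
```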

4.
We introduce a modified conditional logit model that takes account of uncertainty associated with mis-reporting in revealed preference experiments estimating willingness-to-pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non-pesticide food. We compare the results of our model with those from the Hausman model. WTP estimates are produced for different groups of consumers, and we find that modified estimates of WTP that take account of mis-reporting are substantially revised downwards. We find a significant proportion of respondents mis-reporting in favour of the non-pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.

5.
We demonstrate the use of a Naïve Bayes model as a recession forecasting tool. The approach is closely connected with Markov-switching models and logistic regression, but also has important differences. In contrast to Markov-switching models, our Naïve Bayes model treats National Bureau of Economic Research business cycle turning points as data, rather than as hidden states to be inferred by the model. Although Naïve Bayes and logistic regression are asymptotically equivalent under certain distributional assumptions, the assumptions do not hold for business cycle data. As a result, Naïve Bayes has a larger asymptotic error rate, but converges to its asymptotic error rate more quickly than logistic regression, resulting in more accurate recession forecasts with limited data. We show that Naïve Bayes outperforms competing models and the Survey of Professional Forecasters consistently for real-time recession forecasting up to 12 months in advance. These results hold under standard error measures, and also under a novel measure that varies the penalty on false signals, depending on when they occur within a cycle; for example, a false signal in the middle of an expansion is penalized more heavily than one that occurs close to a turning point.
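A minimal sketch of the general idea, treating recession labels as observed data and fitting class-conditional Gaussians; the two predictors and the simulated data below are purely hypothetical stand-ins for the indicators used in the paper:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical monthly predictors (say, term spread and payroll growth) with
# NBER recession indicators treated as observed labels, as in the abstract.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(240, 2))                  # 20 years of two indicators
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=240) < -1.0).astype(int)

model = GaussianNB().fit(X_train, y_train)           # class-conditional Gaussians
X_new = np.array([[-1.8, 0.2]])                      # latest observed indicators
print(model.predict_proba(X_new)[0, 1])              # estimated recession probability
```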

6.
For contingency tables with extensive missing data, the unrestricted MLE under the saturated model, computed by the EM algorithm, is generally unsatisfactory. In this case, it may be better to fit a simpler model by imposing some restrictions on the parameter space. Perlman and Wu (1999) propose lattice conditional independence (LCI) models for contingency tables with arbitrary missing data patterns. When this LCI model fits well, the restricted MLE under the LCI model is more accurate than the unrestricted MLE under the saturated model, but not in general. Here we propose certain empirical Bayes (EB) estimators that adaptively combine the best features of the restricted and unrestricted MLEs. These EB estimators appear to be especially useful when the observed data is sparse, even in cases where the suitability of the LCI model is uncertain. We also study a restricted EM algorithm (called the ER algorithm) with similar desirable features.

7.
We deal with Bayes-type and maximum-likelihood-type estimators of both the drift and volatility parameters of small diffusion processes, defined by stochastic differential equations with small perturbations, based on high-frequency data. From the viewpoint of numerical analysis, initial Bayes-type estimators for both parameters computed from reduced data are required, and adaptive maximum-likelihood-type estimators that use these initial Bayes-type estimators, called hybrid estimators, are proposed. The asymptotic properties of the initial Bayes-type estimators based on reduced data are derived, and it is shown that the hybrid estimators are asymptotically normal with convergence of moments. Furthermore, a concrete example and simulation results are given.

8.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill-defined. Through careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with ‘ignorance’ priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple model of the term structure of interest rates.

9.
Ordered data arise naturally in many fields of statistical practice. Often some sample values are unknown or disregarded for various reasons. On the basis of some sample quantiles from the Rayleigh distribution, the problems of estimating the Rayleigh parameter, hazard rate and reliability function, and predicting future observations are addressed from a Bayesian perspective. The construction of β-content and β-expectation Bayes tolerance limits is also tackled. Under squared-error loss, Bayes estimators and predictors are deduced analytically. Exact tolerance limits are derived by solving simple nonlinear equations. Highest posterior density estimators and credibility intervals, as well as Bayes estimators and predictors under linear loss, can easily be computed iteratively.
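As a simplified illustration of a squared-error-loss Bayes estimator for the Rayleigh model (complete-sample case with an assumed inverse-gamma prior, rather than the paper's quantile-based setting):

```latex
% Illustration for a complete sample x_1,...,x_n from the Rayleigh density
% f(x; \theta) = (x/\theta)\exp\{-x^2/(2\theta)\}, x > 0, with \theta = \sigma^2.
% Under an inverse-gamma IG(\alpha, \beta) prior on \theta, the posterior is
% IG(\alpha + n, \beta + \tfrac{1}{2}\sum x_i^2), so under squared-error loss
% the Bayes estimator is the posterior mean:
\[
  \hat{\theta}_B
  \;=\;
  \frac{\beta + \tfrac{1}{2}\sum_{i=1}^{n} x_i^2}{\alpha + n - 1}.
\]
```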

10.
We propose imposing data-driven identification constraints to alleviate the multimodality problem arising in the estimation of poorly identified dynamic stochastic general equilibrium models under non-informative prior distributions. We also devise an iterative procedure based on the posterior density of the parameters for finding these constraints. An empirical application to the Smets and Wouters (2007) model demonstrates the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

11.
In toxicity studies, model mis-specification could lead to serious bias or faulty conclusions. As a prelude to subsequent statistical inference, model selection plays a key role in toxicological studies. It is well known that the Bayes factor and the cross-validation method are useful tools for model selection. However, exact computation of the Bayes factor is usually difficult and sometimes impossible, and this may hinder its application. In this paper, we recommend utilizing the simple Schwarz criterion to approximate the Bayes factor for the sake of computational simplicity. To illustrate the importance of model selection in toxicity studies, we consider two real data sets. The first data set comes from a study of dietary fortification with carbonyl iron, in which the Bayes factor and cross-validation are used to determine the number of sub-populations in a mixture normal model. The second example involves a developmental toxicity study in which the selection of dose–response functions in a beta-binomial model is explored.
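A minimal sketch of the Schwarz-criterion (BIC) approximation to the Bayes factor that the paper recommends; the function and the toy numbers are illustrative only:

```python
import numpy as np

def approx_bayes_factor_10(loglik1, k1, loglik0, k0, n):
    """Schwarz-criterion (BIC) approximation to the Bayes factor of model 1
    over model 0:  2*log BF_10 ~ BIC_0 - BIC_1, with BIC = -2*loglik + k*log(n).
    loglik* are maximized log-likelihoods, k* the numbers of free parameters,
    n the sample size.  The approximation ignores the priors entirely."""
    bic1 = -2.0 * loglik1 + k1 * np.log(n)
    bic0 = -2.0 * loglik0 + k0 * np.log(n)
    return np.exp(0.5 * (bic0 - bic1))

# Toy numbers: a two-component normal mixture vs. a single normal on n = 200 points.
print(approx_bayes_factor_10(loglik1=-512.3, k1=5, loglik0=-520.1, k0=2, n=200))
```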

12.
With the development of an MCMC algorithm, Bayesian model selection for the p2 model for directed graphs has become possible. This paper presents an empirical exploration in using approximate Bayes factors for model selection. For a social network of Dutch secondary school pupils from different ethnic backgrounds, it is investigated whether pupils report that they receive more emotional support from within their own ethnic group. Approximated Bayes factors seem to work, but considerable margins of error have to be reckoned with.

13.
Automated information retrieval is critical for enterprise information systems to acquire knowledge from vast amounts of data. One challenge in information retrieval is text classification. Current practices rely heavily on the classical naïve Bayes algorithm due to its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption of independence between terms is the main reason for unsatisfactory classification in many real-world applications. To overcome these limitations, term dependence is taken into account by integrating a term frequency–inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both term frequencies and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted, and comparisons with other classification methods show that the proposed method outperforms existing algorithms in terms of precision and recall.
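A minimal sketch of the baseline idea, combining standard TF-IDF weighting with multinomial naïve Bayes via scikit-learn; the toy corpus is invented, and the article's own improved TF-IDF weighting is not reproduced here:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus standing in for the article's enterprise documents.
docs = ["invoice overdue payment reminder",
        "server outage incident report",
        "payment received thank you",
        "database incident root cause"]
labels = ["finance", "it", "finance", "it"]

# TF-IDF weights feed the multinomial naive Bayes classifier, so term
# importance (not just raw counts) influences the class posteriors.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["overdue invoice for server maintenance"]))
```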

14.
We consider the problem of selecting the best treatment in a general linear model and examine the properties of the natural selection rule. It is shown that the natural selection rule is minimax under the "0–1" loss function and that it is a Bayes rule under a monotone permutation-invariant loss function with respect to a permutation-invariant prior for every variance-balanced design. A further condition on the design matrix is given under which a Bayes rule with respect to a normal prior has a simple structure.

15.
In empirical Bayes decision making, the Bayes empirical Bayes approach is discussed by Gilliland and Boyer (1979). In the finite-state component case, the Bayes empirical Bayes procedures are shown to have optimal properties in a fairly general setting and are believed to have a small-sample advantage over the classical rules. The flexibility to make desirable adjustments to these decision procedures through the choice of prior enables one to set a proper strategy when dealing with actual problems.
Applications of Bayes empirical Bayes procedures, however, create some interesting theoretical and computational problems, as the procedures are fairly complicated in structure. This paper gives a brief introduction to the Bayes empirical Bayes approach and, to illustrate it, presents explicit results for testing H0: N(-1,1) against H1: N(1,1).
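For a single observation x and prior probability p = P(H1), the component Bayes test in this two-point problem reduces to a threshold rule (a textbook sketch under 0–1 loss, not the full Bayes empirical Bayes machinery of the paper):

```latex
% Likelihood ratio between H1: N(1,1) and H0: N(-1,1) for one observation x:
\[
  \frac{f_1(x)}{f_0(x)}
  \;=\;
  \exp\!\Big\{-\tfrac{1}{2}(x-1)^2 + \tfrac{1}{2}(x+1)^2\Big\}
  \;=\; e^{2x},
\]
% so with prior probability p = P(H_1), the Bayes rule selects H_1 exactly when
\[
  e^{2x} \;>\; \frac{1-p}{p}
  \quad\Longleftrightarrow\quad
  x \;>\; \tfrac{1}{2}\,\log\frac{1-p}{p}.
\]
```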

16.
For estimating p (⩾ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy good componentwise as well as ensemble properties.

17.
A new version of the local scale model of Shephard (1994) is presented. Its features are identically distributed evolution equation disturbances, the incorporation of in-the-mean effects, and the incorporation of variance regressors. A Bayesian posterior simulator and a new simulation smoother are presented. The model is applied to publicly available daily exchange rate and asset return series, and is compared with t-GARCH and Lognormal stochastic volatility formulations using Bayes factors.

18.
Empirical Bayes methods for Gaussian and binomial compound decision problems involving longitudinal data are considered. A recent convex optimization reformulation of the nonparametric maximum likelihood estimator of Kiefer and Wolfowitz (Annals of Mathematical Statistics 1956; 27: 887–906) is employed to construct nonparametric Bayes rules for compound decisions. The methods are illustrated with an application to predicting baseball batting averages and the age profile of batting performance. An important aspect of the empirical application is the general bivariate specification of the distribution of heterogeneous location and scale effects for players, which exhibits a weak positive association between location and scale attributes. Prediction of players' batting averages for 2012 based on performance in the prior decade using the proposed methods shows substantially improved performance over more naive methods with more restrictive treatment of unobserved heterogeneity. Comparisons are also made with nonparametric Bayesian methods based on Dirichlet process priors, which can be viewed as a regularized, or smoothed, version of the Kiefer–Wolfowitz method.
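In the unit-variance Gaussian case, the Kiefer–Wolfowitz NPMLE can be written as the following finite-dimensional convex program over mixture weights on a fixed grid (a simplified, location-only sketch of the reformulation the paper relies on):

```latex
% Discretized convex form of the Kiefer--Wolfowitz NPMLE: the mixing
% distribution F is supported on a fixed grid u_1 < ... < u_m with weights w.
\[
  \max_{w \in \Delta_m}\;
  \sum_{i=1}^{n} \log\!\Big( \sum_{j=1}^{m} w_j\, \varphi\big(y_i - u_j\big) \Big),
  \qquad
  \Delta_m = \Big\{ w : w_j \ge 0,\; \sum_{j=1}^{m} w_j = 1 \Big\},
\]
% where \varphi is the standard normal density.  The fitted \hat{F} then yields
% nonparametric Bayes rules via posterior means under \hat{F}.
```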

19.
李矫臣, 张娜. 《基建优化》2006, 27(3): 54-56
Because investment is constrained by many factors such as policy, the social environment, the economic environment, and the level of management, the uncertainty surrounding the success or failure of an investment is considerable. This paper applies Bayesian decision theory to investment decision-making: it builds a Bayesian risk decision model for investment, analyses how the various parameters of the decision model are determined, and explains the model's role in reducing decision risk. In risk decision-making the value of information can be quantified, and using the Bayes formula to analyse the effect of increasing the amount of information available in a risky decision helps to reduce decision risk.
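A generic sketch of the Bayes updating and value-of-information calculation the abstract refers to (the payoff notation V(a, θ) and the EVSI quantity are standard textbook constructs, not taken from the paper itself):

```latex
% Posterior updating of the investment "states of nature" \theta_i after
% observing additional information x (e.g. a market survey):
\[
  P(\theta_i \mid x)
  \;=\;
  \frac{P(x \mid \theta_i)\, P(\theta_i)}
       {\sum_{j} P(x \mid \theta_j)\, P(\theta_j)},
\]
% and the expected value of that information is the gain in expected payoff
% from deciding with the posterior rather than the prior:
\[
  \mathrm{EVSI}
  \;=\;
  \mathbb{E}_x\!\Big[ \max_a \sum_i V(a,\theta_i)\, P(\theta_i \mid x) \Big]
  \;-\;
  \max_a \sum_i V(a,\theta_i)\, P(\theta_i).
\]
```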

20.
The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior means are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed, and their finite-sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman–Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case.
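For reference, the ε-contamination prior class and the weighted-average form of the ML-II posterior mean described in the abstract can be sketched as follows (the weight λ̂ and its notation are illustrative, not the paper's exact expressions):

```latex
% The epsilon-contamination class around a base (elicited) prior \pi_0:
\[
  \Gamma(\pi_0, \varepsilon)
  \;=\;
  \big\{\, \pi : \pi = (1-\varepsilon)\,\pi_0 + \varepsilon\, q,\;
            q \in \mathcal{Q} \,\big\},
\]
% and the structure highlighted in the abstract: the ML-II posterior mean is a
% data-weighted compromise between the Bayes estimator under \pi_0 and the
% empirical Bayes estimator based on the estimated contamination \hat{q},
\[
  \hat{\theta}_{\text{ML-II}}
  \;=\;
  \hat{\lambda}\,\hat{\theta}_{\text{Bayes}}(\pi_0)
  \;+\;
  (1-\hat{\lambda})\,\hat{\theta}_{\text{EB}}(\hat{q}),
  \qquad 0 \le \hat{\lambda} \le 1.
\]
```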
