Similar documents
20 similar documents found (search time: 31 ms)
1.
2.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
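The working-likelihood idea can be sketched in a few lines: a random-walk Metropolis sampler whose target is the posterior built from the asymmetric Laplace likelihood of a pure location model. This is an illustrative toy (the model, tuning constants and fixed scale sigma = 1 are my choices, and it deliberately omits the paper's covariance adjustment):

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.5                            # quantile level
y = rng.normal(loc=1.0, size=200)    # data; at tau = 0.5 the target quantile is the median

def al_loglik(theta, y, tau, sigma=1.0):
    """Asymmetric Laplace log-likelihood for a pure location model y_i = theta + e_i."""
    u = y - theta
    rho = u * (tau - (u < 0))        # check function rho_tau(u), nonnegative
    return len(y) * np.log(tau * (1 - tau) / sigma) - rho.sum() / sigma

# Random-walk Metropolis on theta
draws, theta = [], np.median(y)
for _ in range(3000):
    prop = theta + 0.1 * rng.normal()
    if np.log(rng.uniform()) < al_loglik(prop, y, tau) - al_loglik(theta, y, tau):
        theta = prop
    draws.append(theta)
post = np.array(draws[500:])
print(post.mean())                   # centres on the sample median of y
```

The chain centres on the sample median, which is the tau = 0.5 quantile regression estimator; the paper's point is that the raw spread of such a chain does not give valid posterior inference without the covariance adjustment.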

3.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
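As a concrete instance of Gibbs sampling with data augmentation, here is a minimal Albert and Chib style sampler for a probit model, a standard textbook example of the technique rather than anything drawn from this paper (the data, flat prior and iteration counts are illustrative):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
ybin = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
keep = []
for it in range(600):
    # 1) data augmentation: latent z_i ~ N(x_i'beta, 1), truncated by the sign of y_i
    m = X @ beta
    lo = np.where(ybin == 1, -m, -np.inf)   # z > 0 when y = 1 (truncnorm uses standardized bounds)
    hi = np.where(ybin == 1, np.inf, -m)    # z < 0 when y = 0
    z = truncnorm.rvs(lo, hi, loc=m, scale=1.0, random_state=rng)
    # 2) conjugate update for beta given z, under a flat prior
    mean = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(mean, XtX_inv)
    if it >= 100:
        keep.append(beta)
beta_hat = np.mean(keep, axis=0)
print(beta_hat)                             # close to beta_true
```

Conditioning on the latent z turns an intractable probit likelihood into two standard conditional draws, which is exactly the computational advantage the abstract describes.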

4.
This paper argues that the mean-variance efficiency of a portfolio P can be assessed through the posterior distribution of an efficiency measure ρ. Applying Bayes' rule, we obtain the posterior distribution of ρ using the Shanghai Composite Index as the sample. The results show that, when the portfolio set includes a risk-free asset and the prior belief takes the form of a standard noninformative (diffuse) prior, the Shanghai Composite Index lacks efficiency. This indicates that adding a risk-free asset to the portfolio set reduces mean-variance efficiency, which is consistent with the theoretical derivation.

5.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are easily implemented. Although WinBUGS may not be as efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers, and its generic structure allows for considerable flexibility in model specification.

6.
Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions, the two types of probability put forward are utilised to justify a method of inference in which betting preferences are revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference, where the plausibility of different values of a parameter θ based on the data x is measured by the quantity q(θ) = l(x, θ)w(θ), where l(x, θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that, when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example.
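A minimal sketch of such a weighted likelihood measure, assuming normally distributed data and a Gaussian weight function w(θ) (both choices are mine, purely for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, size=50)            # observed data
theta_grid = np.linspace(-1, 4, 2001)

# l(x, theta): N(theta, 1) likelihood; w(theta): a standard-normal weight function
loglik = np.array([norm.logpdf(x, loc=t).sum() for t in theta_grid])
logw = norm.logpdf(theta_grid)
q = np.exp(loglik + logw - (loglik + logw).max())   # q(theta) = l(x, theta) w(theta), rescaled

theta_hat = theta_grid[np.argmax(q)]
print(theta_hat)
```

With this particular Gaussian weight the maximiser of q coincides with n·x̄/(n+1), the same point a normal-normal Bayesian posterior mode would give; the difference stressed in the abstract is that q itself need not be treated as an additive probability measure.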

7.
The paper presents a Bayesian estimation of CES production functions. The proposed estimation method is easier to use than methods so far developed and it allows a direct comparison with the maximum likelihood estimator. A Bayesian highest posterior density interval inference is made to examine the validity of the Cobb–Douglas representation. The method is applied to Japanese macro data and to micro data on two Japanese manufacturing plants. The results indicate that the current practice of employing the Cobb–Douglas form a priori ought to be corrected. In addition, the micro data present results paradoxical to the commonly held view of capital intensity.

8.
This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it is written as a seemingly unrelated regressions model with independent normal–Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction and posterior computation are immediately available. We use this model in an application involving a two‐equation structural model drawn from the labour and returns to schooling literatures. Copyright © 2005 John Wiley & Sons, Ltd.

9.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

10.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. On the other hand, our study builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

11.
Dynamic processes are crucial in many empirical fields, such as in oceanography, climate science, and engineering. Processes that evolve through time are often well described by systems of ordinary differential equations (ODEs). Fitting ODEs to data has long been a bottleneck because the analytical solution of general systems of ODEs is often not explicitly available. We focus on a class of inference techniques that uses smoothing to avoid direct integration. In particular, we develop a Bayesian smooth-and-match strategy that approximates the ODE solution while performing Bayesian inference on the model parameters. We incorporate in the strategy two main sources of uncertainty: the noise level of the measured observations and the model approximation error. We assess the performance of the proposed approach in an extensive simulation study and on a canonical data set of neuronal electrical activity.
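A toy version of the smoothing idea is gradient matching on dx/dt = −θx, with a polynomial fit standing in for the paper's Bayesian smooth-and-match machinery (the ODE, noise level and smoother are all illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 0.7
t = np.linspace(0, 4, 80)
# noisy observations of the solution of dx/dt = -theta * x, x(0) = 1
x_obs = np.exp(-theta_true * t) + 0.02 * rng.normal(size=t.size)

# Step 1: smooth the data (a polynomial fit stands in for a spline smoother)
p = np.polyfit(t, x_obs, deg=6)
xs = np.polyval(p, t)                  # smoothed state
dxs = np.polyval(np.polyder(p), t)     # smoothed derivative, no ODE integration needed

# Step 2: match the smoothed derivative to the ODE right-hand side -theta * x by least squares
theta_hat = -np.sum(dxs * xs) / np.sum(xs ** 2)
print(theta_hat)                       # close to theta_true = 0.7
```

The point of such schemes is visible even in this sketch: the parameter estimate never required solving the ODE, only differentiating a smooth approximation of the data.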

12.
This paper applies Bayesian nonlinear hierarchical models to multivariate loss reserving across different lines of business. We design a suitable model structure that combines nonlinear hierarchical models with Bayesian methods, use the WinBUGS software to analyse the classical run-off triangle data of actuarial practice, and obtain the full predictive distribution of the loss reserve via MCMC. This approach extends beyond the best-estimate and prediction-mean-squared-error scope of existing multivariate reserving methods. Inference based on posterior distributions within the Bayesian framework plays an important role in the solvency regulation of non-life insurers and in industry decision-making.

13.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
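The simplest instance of prior shrinkage in a VAR is the ridge-type posterior mean under an independent Gaussian prior; the sketch below uses that, not the variational methods or hierarchical priors of the paper (the dimensions and shrinkage constant are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 100, 10                          # a modest VAR(1) in k variables
A_true = 0.3 * np.eye(k)
Y = np.zeros((T, k))
for s in range(1, T):
    Y[s] = Y[s - 1] @ A_true.T + rng.normal(scale=0.5, size=k)
X, Ynext = Y[:-1], Y[1:]

lam = 10.0                              # prior precision = shrinkage strength
# Posterior mean of the VAR coefficients under a N(0, lam^{-1} I) prior, equation by equation
A_post = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Ynext).T
A_ols = np.linalg.solve(X.T @ X, X.T @ Ynext).T

# Shrinkage pulls the coefficient matrix toward zero relative to OLS
print(np.linalg.norm(A_post), np.linalg.norm(A_ols))
```

With 100 dependent variables a VAR(1) already has 10,000 autoregressive coefficients, which is why some form of shrinkage like this, and cheap algorithms to compute it, are unavoidable at the scale the abstract describes.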

14.
Bayesian inference for concave distribution functions is investigated. This is done by transforming a mixture of Dirichlet processes on the space of distribution functions to the space of concave distribution functions. We give a method for sampling from the posterior distribution using a Pólya urn scheme in combination with a Markov chain Monte Carlo algorithm. The methods are extended to estimation of concave distribution functions for incompletely observed data.
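The Pólya urn (Blackwell–MacQueen) scheme can be sketched on its own, as a marginal sampler from a Dirichlet process DP(α, G0); the parameter values below are illustrative, and this is only the urn ingredient, not the paper's full MCMC:

```python
import numpy as np

rng = np.random.default_rng(5)

def polya_urn(n, alpha, base_draw):
    """Draw n values from the Polya-urn (Blackwell-MacQueen) marginal of a DP(alpha, G0)."""
    draws = [base_draw()]
    for i in range(1, n):
        if rng.uniform() < alpha / (alpha + i):
            draws.append(base_draw())             # new atom from the base measure G0
        else:
            draws.append(draws[rng.integers(i)])  # reuse a previous draw, chosen uniformly
    return np.array(draws)

sample = polya_urn(500, alpha=2.0, base_draw=lambda: rng.normal())
n_clusters = len(np.unique(sample))
print(n_clusters)   # far fewer distinct values than 500: DP draws are discrete with ties
```

The ties among the draws are what make the urn useful inside MCMC: each sweep only has to reassign observations among a small number of distinct atoms.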

15.
Estimating the probability of occurrence, the occurrence time and the holding time captures the multi-source, multidimensional, transmissible and dynamic character of supply-chain risk, and makes such estimation valuable for achieving the goal of risk assessment. A dynamic Bayesian network model of supply-chain risk is constructed to describe these properties; Bayesian inference tools are then used to estimate the corresponding parameters by maximum likelihood and to carry out inference for supply-chain risks. It is concluded that supply-chain risk changes with time and converges to a stable interval, and that occurrence times and holding times follow Poisson processes.

16.
We propose a general class of models and a unified Bayesian inference methodology for flexibly estimating the density of a response variable conditional on a possibly high-dimensional set of covariates. Our model is a finite mixture of component models with covariate-dependent mixing weights. The component densities can belong to any parametric family, with each model parameter being a deterministic function of covariates through a link function. Our MCMC methodology allows for Bayesian variable selection among the covariates in the mixture components and in the mixing weights. The model’s parameterization and variable selection prior are chosen to prevent overfitting. We use simulated and real data sets to illustrate the methodology.
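A finite mixture with covariate-dependent mixing weights can be written down directly; the sketch below evaluates such a conditional density with two normal components and a logistic weight (every component choice here is illustrative, and no estimation is attempted):

```python
import numpy as np
from scipy.stats import norm

def cond_density(y, x):
    """p(y | x) for a two-component normal mixture with a logistic mixing weight in x."""
    w = 1.0 / (1.0 + np.exp(-2.0 * x))                          # covariate-dependent weight
    return (w * norm.pdf(y, loc=1.0 + 0.5 * x, scale=0.6)       # component means may also move with x
            + (1 - w) * norm.pdf(y, loc=-1.0, scale=1.2))

# Check it is a proper conditional density: it integrates to ~1 for any fixed x
y_grid = np.linspace(-8, 8, 4001)
mass = cond_density(y_grid, 0.7).sum() * (y_grid[1] - y_grid[0])
print(mass)
```

Letting both the weights and the component parameters depend on covariates is what gives the class its flexibility: the shape, spread and skewness of p(y | x) can all change smoothly as x moves.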

17.
The Components of Output Growth: A Stochastic Frontier Analysis
This paper uses Bayesian stochastic frontier methods to decompose output change into technical, efficiency and input changes. In the context of macroeconomic growth exercises, which typically involve small and noisy data sets, we argue that stochastic frontier methods are useful since they incorporate measurement error and assume a (flexible) parametric form for the production relationship. These properties enable us to calculate measures of uncertainty associated with the decomposition and minimize the risk of overfitting the noise in the data. Tools for Bayesian inference in such models are developed. An empirical investigation using data from 17 OECD countries for 10 years illustrates the practicality and usefulness of our approach.

18.
This survey reviews the existing literature on the most relevant Bayesian inference methods for univariate and multivariate GARCH models. The advantages and drawbacks of each procedure are outlined, as well as the advantages of the Bayesian approach over classical procedures. The paper emphasizes recent Bayesian non‐parametric approaches for GARCH models that avoid imposing arbitrary parametric distributional assumptions. These novel approaches implicitly assume an infinite mixture of Gaussian distributions on the standardized returns, which has been shown to be more flexible and to better describe the uncertainty about future volatilities. Finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non‐parametric approach.

19.
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and use Bayesian inference procedures organized around data augmentation techniques to provide inferences. Also provided are firm‐specific efficiency measures. The new methods are applied to a panel of large US commercial banks over the period 1989–2000. Copyright © 2006 John Wiley & Sons, Ltd.

20.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. 
Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
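For the textbook normal-mean case, the confidence distribution is an ordinary CDF on the parameter space, and the usual point estimate and t-interval fall out of it (a standard illustration of the concept, not an example specific to this article):

```python
import numpy as np
from scipy.stats import t as tdist

rng = np.random.default_rng(6)
x = rng.normal(loc=2.0, scale=1.0, size=25)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def H(theta):
    """Confidence distribution for the normal mean: a CDF on the parameter space."""
    return tdist.cdf((theta - xbar) / (s / np.sqrt(n)), df=n - 1)

# The CD packages point estimate, intervals and p-values in one object:
# its median is the point estimate, and quantiles give confidence limits.
lo = xbar + tdist.ppf(0.025, n - 1) * s / np.sqrt(n)
hi = xbar + tdist.ppf(0.975, n - 1) * s / np.sqrt(n)
print(H(xbar), (lo, hi))   # H(xbar) = 0.5; (lo, hi) is the usual 95% t-interval
```

Here H(xbar) is exactly 0.5, and the 95% t-interval is recovered as the pair of quantiles where H equals 0.025 and 0.975, which is the "distribution estimator" reading of frequentist inference described in the abstract.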
