Similar Documents
20 similar documents found (search time: 984 ms)
1.
Statistical Decision Problems and Bayesian Nonparametric Methods
This paper considers parametric statistical decision problems conducted within a Bayesian nonparametric context. Our work was motivated by the realisation that typical parametric model selection procedures are essentially incoherent. We argue that one solution to this problem is to use a flexible enough model in the first place, a model that will not be checked no matter what data arrive. Ideally, one would use a nonparametric model to describe all the uncertainty about the density function generating the data. However, parametric models are the preferred choice for many statisticians, despite the incoherence involved in model checking, incoherence that is quite often ignored for pragmatic reasons. In this paper we show how coherent parametric inference can be carried out via decision theory and Bayesian nonparametrics. None of the ingredients discussed here are new, but our main point only becomes evident when one sees all priors—even parametric ones—as measures on sets of densities as opposed to measures on finite-dimensional parameter spaces.

2.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
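The latent root factorization mentioned in this abstract can be sketched numerically: the reciprocal roots of the AR characteristic polynomial carry both the stationarity condition (all inside the unit circle) and the period of any quasi-cyclical component. The following is a minimal illustration, not code from the paper; the function name and the example AR(2) coefficients are chosen for illustration only.

```python
import numpy as np

def reciprocal_roots(phi):
    """Reciprocal roots of the AR characteristic polynomial.

    For an AR(p) model y_t = phi_1 y_{t-1} + ... + phi_p y_{t-p} + e_t,
    the reciprocal roots are the roots of z^p - phi_1 z^{p-1} - ... - phi_p.
    Stationarity requires all of them to lie inside the unit circle.
    """
    phi = np.asarray(phi, dtype=float)
    return np.roots(np.concatenate(([1.0], -phi)))

phi = [1.5, -0.75]                    # AR(2) with a complex-conjugate root pair
r = reciprocal_roots(phi)
stationary = bool(np.all(np.abs(r) < 1))
# the argument of a complex root implies the period of the cycle: 2*pi / arg
period = 2 * np.pi / np.abs(np.angle(r[0]))
```

For these coefficients the reciprocal roots have modulus about 0.87 (stationary) and imply a cycle of period 12, the classic quasi-annual AR(2) example.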

3.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends robustness of the QL approach to inference problems for which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We can establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

4.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
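The core of the working Laplace likelihood described here is simple to write down: replacing the Gaussian error of an AR(p) model with a Laplace error makes the conditional log-likelihood an absolute-error criterion, i.e. median regression. The sketch below is only an illustration of that likelihood, under assumed notation (the function name and interface are not from the paper).

```python
import numpy as np

def laplace_ar_loglik(y, beta, scale=1.0):
    """Working Laplace log-likelihood of an AR(p) model at the median.

    beta = (intercept, phi_1, ..., phi_p). Swapping the Gaussian error
    for a Laplace one turns the conditional likelihood into a sum of
    absolute residuals, so its maximiser is a median (L1) fit.
    """
    y = np.asarray(y, dtype=float)
    beta = np.asarray(beta, dtype=float)
    p = beta.size - 1
    # lagged design matrix: column i holds y_{t-(i+1)} for t = p, ..., n-1
    X = np.column_stack([y[p - i - 1: len(y) - i - 1] for i in range(p)])
    resid = y[p:] - beta[0] - X @ beta[1:]
    n = resid.size
    return -n * np.log(2 * scale) - np.abs(resid).sum() / scale
```

An MCMC sampler for BayesMAR would evaluate this quantity at each proposed parameter vector; here it is shown only as a stand-alone function.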

5.
This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it is written as a seemingly unrelated regressions model with independent normal–Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction and posterior computation are immediately available. We use this model in an application involving a two‐equation structural model drawn from the labour and returns to schooling literatures. Copyright © 2005 John Wiley & Sons, Ltd.

7.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
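The asymmetric Laplace working likelihood at the heart of this abstract has a compact closed form built on the check loss. The sketch below shows only that working likelihood; it does not implement the paper's covariance adjustment, and the function name and interface are illustrative assumptions.

```python
import numpy as np

def al_loglik(y, X, beta, tau=0.5, sigma=1.0):
    """Working asymmetric Laplace log-likelihood for quantile regression.

    Maximising this in beta reproduces the usual tau-th quantile
    regression estimator, since the AL density is
    f(u) = tau*(1-tau)/sigma * exp(-rho_tau(u)/sigma)
    with check loss rho_tau(u) = u * (tau - 1{u < 0}).
    """
    u = y - X @ beta
    rho = u * (tau - (u < 0))              # check loss, nonnegative
    const = np.log(tau * (1 - tau) / sigma)
    return y.size * const - rho.sum() / sigma
```

A pseudo-Bayesian posterior would combine this quantity with a prior on beta; as the abstract notes, the raw posterior covariance from such a chain still needs adjustment before it yields valid frequentist inference.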

8.
Recent developments in Markov chain Monte Carlo [MCMC] methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
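The Gibbs-with-data-augmentation idea described here is well illustrated by the classic Albert–Chib probit sampler: conditioning on latent utilities removes the likelihood integrals, leaving a closed-form normal full conditional for the coefficients. This is a minimal sketch of that standard construction, not code from the paper; the prior variance and function name are illustrative choices.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, tau2=100.0, seed=0):
    """Albert-Chib Gibbs sampler for a probit model via data augmentation.

    The latent utilities z_i ~ N(x_i' beta, 1) are the 'unobserved
    variables': conditional on them, beta | z is normal with covariance
    B = (X'X + I/tau2)^{-1} and mean B X'z (prior beta ~ N(0, tau2 I)),
    so no likelihood integral is ever evaluated.
    """
    rng = np.random.default_rng(seed)
    n, k = X.shape
    B = np.linalg.inv(X.T @ X + np.eye(k) / tau2)
    L = np.linalg.cholesky(B)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # z_i truncated to (0, inf) if y_i = 1, to (-inf, 0) if y_i = 0
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        beta = B @ (X.T @ z) + L @ rng.standard_normal(k)
        draws[it] = beta
    return draws
```

On simulated probit data the posterior mean of the draws (after burn-in) recovers the generating coefficients to within sampling error.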

9.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, a freely available software. General code for cross-sectional and panel data is presented and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers and its generic structure allows for a lot of flexibility in model specification.

10.
In the framework of I.I.D. sampling, a general class of linear models is analyzed. Incidental parameters are shown to naturally arise in this class of models. More fundamentally, special attention is paid to the high dimensionality of the parameter space. The objective of the paper is to offer a strategy for progressively specifying a model within that class of linear models. By so doing, we aim at displaying the precise role of each assumption, at offering alternatives to unnecessarily restrictive specifications, and, thereby, at improving the robustness of the inference procedures we discuss. Decompositions of the inference process are obtained through a systematic use of (Bayesian) cuts. Maximum likelihood estimation and Bayesian inference are discussed. An objective of the progressive specification is to preserve the computational tractability and the interpretability of the procedures we develop by relying on known properties of the usual multivariate regression model.

11.
Zellner (1976) proposed a regression model in which the data vector (or the error vector) is represented as a realization from the multivariate Student t distribution. This model has attracted considerable attention because it seems to broaden the usual Gaussian assumption to allow for heavier-tailed error distributions. A number of results in the literature indicate that the standard inference procedures for the Gaussian model remain appropriate under the broader distributional assumption, leading to claims of robustness of the standard methods. We show that, although mathematically the two models are different, for purposes of statistical inference they are indistinguishable. The empirical implications of the multivariate t model are precisely the same as those of the Gaussian model. Hence the suggestion of a broader distributional representation of the data is spurious, and the claims of robustness are misleading. These conclusions are reached from both frequentist and Bayesian perspectives.

12.
The likelihood of the parameters in structural macroeconomic models typically has non‐identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non‐identification regions. Use of informative priors can lead to the opposite, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced‐form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited‐information inference on the new Keynesian Phillips curve with a VAR reduced form for US data. Copyright © 2014 John Wiley & Sons, Ltd.

13.
We propose a general class of models and a unified Bayesian inference methodology for flexibly estimating the density of a response variable conditional on a possibly high-dimensional set of covariates. Our model is a finite mixture of component models with covariate-dependent mixing weights. The component densities can belong to any parametric family, with each model parameter being a deterministic function of covariates through a link function. Our MCMC methodology allows for Bayesian variable selection among the covariates in the mixture components and in the mixing weights. The model’s parameterization and variable selection prior are chosen to prevent overfitting. We use simulated and real data sets to illustrate the methodology.
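The conditional-density object described in this abstract, a finite mixture whose weights depend on covariates, can be written in a few lines for the special case of Gaussian components with multinomial-logit weights. This is a simplified sketch under those assumptions, not the paper's general formulation; all names are illustrative.

```python
import numpy as np

def smooth_mixture_density(y, x, gammas, betas, sigmas):
    """p(y | x) for a finite Gaussian mixture with covariate-dependent
    (multinomial-logit) mixing weights.

    gammas, betas: (K, d) arrays of weight and mean coefficients;
    sigmas: (K,) component standard deviations; x: (d,) covariates.
    """
    logits = gammas @ x
    w = np.exp(logits - logits.max())       # softmax, stabilised
    w /= w.sum()
    means = betas @ x                       # component means via linear links
    comps = np.exp(-0.5 * ((y - means) / sigmas) ** 2) \
        / (sigmas * np.sqrt(2 * np.pi))
    return float(w @ comps)
```

The paper's MCMC machinery would place variable-selection priors on the rows of `gammas` and `betas`; here only the density evaluation is shown.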

14.
A novel Bayesian method for inference in dynamic regression models is proposed where both the values of the regression coefficients and the importance of the variables are allowed to change over time. We focus on forecasting and so the parsimony of the model is important for good performance. A prior is developed which allows the shrinkage of the regression coefficients to suitably change over time and an efficient Markov chain Monte Carlo method for posterior inference is described. The new method is applied to two forecasting problems in econometrics: equity premium prediction and inflation forecasting. The results show that this method outperforms current competing Bayesian methods.

15.
Small area estimation (SAE) entails estimating characteristics of interest for domains, often geographical areas, in which there may be few or no samples available. SAE has a long history and a wide variety of methods have been suggested, from a bewildering range of philosophical standpoints. We describe design-based and model-based approaches and models that are specified at the area level and at the unit level, focusing on health applications and fully Bayesian spatial models. The use of auxiliary information is a key ingredient for successful inference when response data are sparse, and we discuss a number of approaches that allow the inclusion of covariate data. SAE for HIV prevalence, using data collected from a Demographic Health Survey in Malawi in 2015–2016, is used to illustrate a number of techniques. The potential use of SAE techniques for outcomes related to coronavirus disease 2019 is discussed.

16.
We consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD). Three computationally heavy steps are involved: (i) simulating the model, (ii) estimating the likelihood and (iii) sampling from the posterior distribution of the parameters. Computational complexity of AB models implies that efficient techniques have to be used with respect to points (ii) and (iii), possibly involving approximations. We first discuss non-parametric (kernel density) estimation of the likelihood, coupled with Markov chain Monte Carlo sampling schemes. We then turn to parametric approximations of the likelihood, which can be derived by observing the distribution of the simulation outcomes around the statistical equilibria, or by assuming a specific form for the distribution of external deviations in the data. Finally, we introduce Approximate Bayesian Computation techniques for likelihood-free estimation. These allow embedding SMD methods in a Bayesian framework, and are particularly suited when robust estimation is needed. These techniques are first tested in a simple price discovery model with one parameter, and then employed to estimate the behavioural macroeconomic model of De Grauwe (2012), with nine unknown parameters.
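The likelihood-free Approximate Bayesian Computation step mentioned above is easiest to see in its simplest rejection form: draw parameters from the prior, simulate the model, and keep draws whose summary statistic lands close to the observed one. The toy model below (a one-parameter Gaussian, not the paper's agent-based model) is purely illustrative, as are all names.

```python
import numpy as np

def abc_rejection(observed_stat, prior_sampler, simulate, stat,
                  n_draws=10000, eps=0.05, seed=0):
    """Likelihood-free rejection ABC: keep prior draws whose simulated
    summary statistic is within eps of the observed statistic."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        if abs(stat(simulate(theta, rng)) - observed_stat) <= eps:
            kept.append(theta)
    return np.array(kept)

# toy model: data are N(theta, 1), summary statistic = sample mean
obs = 1.0
post = abc_rejection(
    observed_stat=obs,
    prior_sampler=lambda rng: rng.uniform(-5, 5),
    simulate=lambda theta, rng: theta + rng.standard_normal(50),
    stat=np.mean,
)
```

For an agent-based model, `simulate` would run the full simulation and `stat` would compute the SMD-style moments; the accepted draws approximate the posterior, centring near the true parameter as `eps` shrinks.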

17.
The paper takes up Bayesian inference in time series models when essentially nothing is known about the distribution of the dependent variable given past realizations or other covariates. It proposes the use of kernel quasi likelihoods upon which formal inference can be based. Gibbs sampling with data augmentation is used to perform the computations related to numerical Bayesian analysis of the model. The method is illustrated with artificial and real data sets.
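One simple way to picture a kernel quasi likelihood in the time-series setting is a leave-one-out kernel estimate of the one-step transition density, summed over the sample. The sketch below is only an illustration of that idea under a first-order Markov assumption with Gaussian kernels; it is not the paper's construction, and all names and defaults are assumptions.

```python
import numpy as np

def kernel_quasi_loglik(y, bandwidth=0.5):
    """Kernel quasi log-likelihood: sum of log leave-one-out
    Nadaraya-Watson estimates of the transition density f(y_t | y_{t-1})."""
    y = np.asarray(y, dtype=float)
    prev, curr = y[:-1], y[1:]
    ll = 0.0
    for t in range(curr.size):
        # Gaussian kernel weights in the conditioning variable ...
        wx = np.exp(-0.5 * ((prev - prev[t]) / bandwidth) ** 2)
        # ... and kernel density contributions in the response
        ky = np.exp(-0.5 * ((curr - curr[t]) / bandwidth) ** 2) \
            / (bandwidth * np.sqrt(2 * np.pi))
        wx[t] = 0.0                     # leave-one-out: drop the self-match
        ll += np.log((wx @ ky) / wx.sum())
    return ll
```

In a quasi-Bayesian analysis this quantity would play the role of the log-likelihood inside the sampler.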

18.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade-off computational intensity for statistical efficiency.

19.
This paper develops methods of Bayesian inference in a sample selection model. The main feature of this model is that the outcome variable is only partially observed. We first present a Gibbs sampling algorithm for a model in which the selection and outcome errors are normally distributed. The algorithm is then extended to analyze models that are characterized by nonnormality. Specifically, we use a Dirichlet process prior and model the distribution of the unobservables as a mixture of normal distributions with a random number of components. The posterior distribution in this model can simultaneously detect the presence of selection effects and departures from normality. Our methods are illustrated using some simulated data and an extract from the RAND health insurance experiment.

20.
This paper is a comment on P. C. B. Phillips, ‘To criticise the critics: an objective Bayesian analysis of stochastic trends’ [Phillips, (1991)]. Departing from the likelihood of a univariate autoregressive model, different routes that lead to a posterior odds analysis of the unit root hypothesis are explored, where the differences between routes are due to different choices of the prior. Improper priors like the uniform and the Jeffreys prior are less suited for Bayesian inference on a sharp null hypothesis such as the unit root. A proper normal prior on the mean of the process is analysed and empirical results using extended Nelson–Plosser data are presented.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号