Similar Literature
20 similar documents found.
1.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior has the effect of weighting the predictive distributions based on the models with different cointegration vectors into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which depends on only a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.
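A minimal sketch of the simulation idea behind such predictive distributions: draw model parameters from a posterior, then propagate future paths with fresh shocks. The diffuse-prior matrix-normal approximation below stands in for the paper's Litterman-type prior and its averaging over cointegration vectors; all names and settings are illustrative.

```python
# Sketch: predictive distribution of a VAR(1) by posterior simulation.
import numpy as np

rng = np.random.default_rng(0)

# Artificial 2-variable VAR(1) data.
T, k = 200, 2
A_true = np.array([[0.5, 0.1], [0.0, 0.8]])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.normal(scale=0.1, size=k)

X, y = Y[:-1], Y[1:]
A_ols = np.linalg.solve(X.T @ X, X.T @ y).T      # equations in rows
resid = y - X @ A_ols.T
Sigma = resid.T @ resid / (T - 1 - k)
L_S = np.linalg.cholesky(Sigma)
L_X = np.linalg.cholesky(np.linalg.inv(X.T @ X))

# Draw coefficients from an approximate matrix-normal posterior, then
# propagate forecasts with future shocks to build the predictive fan.
h, n_draws = 8, 2000
paths = np.empty((n_draws, h, k))
for d in range(n_draws):
    A_d = A_ols + L_S @ rng.standard_normal((k, k)) @ L_X.T
    y_prev = Y[-1]
    for s in range(h):
        y_prev = y_prev @ A_d.T + rng.multivariate_normal(np.zeros(k), Sigma)
        paths[d, s] = y_prev

print(np.quantile(paths[:, -1, 0], [0.1, 0.5, 0.9]))  # h-step quantiles, var 1
```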

2.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.
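A stylized sketch of the first-moment frequentist route (the logistic link and all names are illustrative, not the authors' exact parameterization): the expected column share in unit i is a row-share-weighted average of group rates, each rate linked to a covariate, and the parameters are fitted by nonlinear least squares.

```python
# Ecological inference from first moments: E[T_i] = sum_r X_ir * rate_r(z_i).
import numpy as np
from scipy.optimize import least_squares
from scipy.special import expit

rng = np.random.default_rng(1)
n, R = 150, 3                      # units, row categories
X = rng.dirichlet(np.ones(R), n)   # row shares per unit
z = rng.normal(size=n)             # unit-level covariate
gamma_true = np.array([-1.0, 0.0, 1.0])
delta_true = np.array([0.5, -0.2, 0.3])
rates = expit(gamma_true + delta_true * z[:, None])       # n x R group rates
T = (X * rates).sum(axis=1) + rng.normal(scale=0.02, size=n)

def residuals(theta):
    gamma, delta = theta[:R], theta[R:]
    r = expit(gamma + delta * z[:, None])
    return (X * r).sum(axis=1) - T

fit = least_squares(residuals, x0=np.zeros(2 * R))
print(fit.x[:R], fit.x[R:])        # estimated gamma, delta
```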

3.
The paper takes up Bayesian inference in time series models when essentially nothing is known about the distribution of the dependent variable given past realizations or other covariates. It proposes the use of kernel quasi likelihoods upon which formal inference can be based. Gibbs sampling with data augmentation is used to perform the computations related to numerical Bayesian analysis of the model. The method is illustrated with artificial and real data sets.
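To make the notion of a kernel quasi likelihood concrete, here is a simplified sketch for an AR(1) with unknown error distribution: the residuals implied by a candidate coefficient are scored under a leave-one-out kernel density estimate. The paper's computations use Gibbs sampling with data augmentation; the random-walk Metropolis step below is a simpler stand-in, and the bandwidth is an arbitrary assumption.

```python
# Kernel quasi likelihood for an AR(1), explored by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(2)
T, rho_true = 200, 0.6
eps = rng.laplace(scale=0.5, size=T)          # non-Gaussian errors
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + eps[t]

def kernel_quasi_loglik(rho, y, h=0.15):
    e = y[1:] - rho * y[:-1]
    n = len(e)
    d = e[:, None] - e[None, :]               # pairwise residual differences
    K = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                  # leave-one-out density estimate
    fhat = K.sum(axis=1) / (n - 1)
    return np.sum(np.log(np.maximum(fhat, 1e-300)))

rho, chain = 0.0, []
ll = kernel_quasi_loglik(rho, y)
for _ in range(2000):                         # flat prior on (-1, 1)
    prop = rho + rng.normal(scale=0.05)
    if abs(prop) < 1:
        ll_prop = kernel_quasi_loglik(prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            rho, ll = prop, ll_prop
    chain.append(rho)
print(np.mean(chain[500:]))                   # quasi-posterior mean
```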

4.
This paper considers how the concepts of likelihood and identification became part of Bayesian theory. This makes a nice study in the development of concepts in statistical theory. Likelihood slipped in easily but there was a protracted debate about how identification should be treated. Initially there was no agreement on whether identification involved the prior, the likelihood or the posterior.

5.
We propose a nonparametric Bayesian approach to estimate time‐varying grouped patterns of heterogeneity in linear panel data models. Unlike the classical approach in Bonhomme and Manresa (Econometrica, 2015, 83, 1147–1184), our approach can accommodate selection of the optimal number of groups and model estimation jointly, and can also be readily extended to quantify uncertainty in the estimated group structure. Our proposed approach performs well in Monte Carlo simulations. Using our approach, we successfully replicate the estimated relationship between income and democracy in Bonhomme and Manresa, as well as the group characteristics, when we use the same number of groups. Furthermore, we find that the optimal number of groups can depend on how heteroskedasticity is specified in the model, and we discuss ways to choose among models in practice.
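A toy version of the grouped-heterogeneity idea, with a fixed number of groups K and known noise level (unlike the paper's nonparametric prior, which also infers K): each unit's series follows one of K group-specific time profiles, and a Gibbs sampler alternates between profiles, assignments, and mixing weights.

```python
# Toy Gibbs sampler for time-varying grouped heterogeneity in panel data.
import numpy as np

rng = np.random.default_rng(3)
N, T, K, sigma = 60, 10, 3, 0.5
true_profiles = np.cumsum(rng.normal(size=(K, T)), axis=1)
g_true = rng.integers(K, size=N)
Y = true_profiles[g_true] + rng.normal(scale=sigma, size=(N, T))

g = rng.integers(K, size=N)                   # initial assignments
pi = np.full(K, 1.0 / K)
for sweep in range(200):
    # 1) group profiles given assignments (flat prior: within-group mean)
    A = np.zeros((K, T))
    for kk in range(K):
        members = Y[g == kk]
        n_k = max(len(members), 1)
        mean_k = members.mean(axis=0) if len(members) else np.zeros(T)
        A[kk] = mean_k + rng.normal(scale=sigma / np.sqrt(n_k), size=T)
    # 2) assignments given profiles
    sq = ((Y[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)   # N x K
    logp = np.log(pi) - sq / (2 * sigma**2)
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    g = np.array([rng.choice(K, p=p_i) for p_i in p])
    # 3) mixing weights
    pi = rng.dirichlet(1.0 + np.bincount(g, minlength=K))
print(np.bincount(g, minlength=K))            # recovered group sizes
```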

6.
Strategies for Alleviating Customer Queue Waiting at Commercial Banks
Today's fast-paced lifestyles make people increasingly intolerant of waiting in line, which places higher demands on the management and service standards of service industries such as commercial banking. This paper explores how commercial banks can alleviate the problem of customers waiting in queues, approaching the question from two perspectives.

7.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
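A single-equation sketch of the coordinate-ascent variational idea with a hierarchical normal-gamma shrinkage prior (beta_j ~ N(0, 1/lambda_j), lambda_j ~ Gamma(a0, b0), noise variance known). The paper develops this for full VAR systems with time-varying volatilities, which is not attempted here; only the two mean-field updates are shown.

```python
# Mean-field CAVI for a shrinkage regression: q(beta) Gaussian, q(lambda) Gamma.
import numpy as np

rng = np.random.default_rng(4)
n, p, sigma2 = 200, 30, 1.0
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                 # sparse truth
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

a0, b0 = 1e-2, 1e-2
E_lam = np.ones(p)
XtX, Xty = X.T @ X, X.T @ y
for it in range(100):
    # q(beta) = N(mu, Sigma)
    Sigma = np.linalg.inv(XtX / sigma2 + np.diag(E_lam))
    mu = Sigma @ Xty / sigma2
    # q(lambda_j) = Gamma(a0 + 1/2, b0 + E[beta_j^2]/2)
    E_beta2 = mu**2 + np.diag(Sigma)
    E_lam = (a0 + 0.5) / (b0 + 0.5 * E_beta2)
print(np.round(mu[:5], 2))                       # shrunken coefficients
```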

8.
Bayesian networks are a versatile and powerful tool to model complex phenomena and the interplay of their components in a probabilistically principled way. Moving beyond the comparatively simple case of completely observed, static data, which has received the most attention in the literature, in this paper, we will review how Bayesian networks can model dynamic data and data with incomplete observations. Such data are the norm at the forefront of research and in practical applications, and Bayesian networks are uniquely positioned to model them due to their explainability and interpretability.

9.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
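For reference, the fractional Bayes factor uses a fraction $b$ of the likelihood to turn each (possibly improper) prior into a proper partial posterior, so that for comparing $M_1$ against $M_0$,

$$ B^{F}_{10}(b)=\frac{\int f_1(x\mid\theta_1)\,\pi_1(\theta_1)\,d\theta_1\;\big/\;\int f_1(x\mid\theta_1)^{\,b}\,\pi_1(\theta_1)\,d\theta_1}{\int f_0(x\mid\theta_0)\,\pi_0(\theta_0)\,d\theta_0\;\big/\;\int f_0(x\mid\theta_0)^{\,b}\,\pi_0(\theta_0)\,d\theta_0},\qquad 0<b<1, $$

with $b$ typically chosen as $m/n$ for a minimal training-sample size $m$; the fractional terms in the denominators cancel the arbitrary normalizing constants of improper priors.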

10.
In this paper, we derive exact explicit expressions for the single, double, triple and quadruple moments of the upper record values from a generalized Pareto distribution. We then use these expressions to compute the mean, variance, and the coefficients of skewness and kurtosis of certain linear functions of record values. Finally, we develop approximate confidence intervals for the location and scale parameters of the generalized Pareto distribution using the Edgeworth approximation and compare them with the intervals constructed through Monte Carlo simulations.
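The closed-form moment expressions are not reproduced here, but a simulation check is easy because the m-th upper record of any continuous F equals F^{-1}(1 - exp(-S_m)), with S_m a sum of m iid standard exponentials. A sketch with illustrative parameter values:

```python
# Monte Carlo record values from a generalized Pareto distribution (GPD).
import numpy as np

rng = np.random.default_rng(5)
sigma, xi = 1.0, 0.2          # GPD scale and shape (location 0)
m, n_sim = 5, 100_000         # number of records, replications

S = rng.exponential(size=(n_sim, m)).cumsum(axis=1)   # record "clocks"
records = sigma / xi * (np.exp(xi * S) - 1.0)         # GPD quantile map

print("record means:", records.mean(axis=0))
print("record variances:", records.var(axis=0))
# Simulation-based 95% interval for the 5th record value:
print(np.quantile(records[:, -1], [0.025, 0.975]))
```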

11.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
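A two-dimensional toy of the FMMN construction, with sklearn's GaussianMixture standing in for the paper's MCMC estimation of the mixture: fit the joint distribution of (x, y), then read off the conditional density of y given x analytically from the fitted components.

```python
# Conditional density extracted from a fitted joint Gaussian mixture.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, size=1000)
y = np.sin(2 * x) + rng.normal(scale=0.2, size=1000)   # nonlinear truth
gm = GaussianMixture(n_components=5, random_state=0).fit(np.column_stack([x, y]))

def conditional_density(y_grid, x0):
    """p(y | x = x0) implied by the fitted joint mixture."""
    w = np.array([gm.weights_[k] *
                  norm.pdf(x0, gm.means_[k][0], np.sqrt(gm.covariances_[k][0, 0]))
                  for k in range(gm.n_components)])
    w /= w.sum()                                       # component weights at x0
    dens = np.zeros_like(y_grid)
    for k in range(gm.n_components):
        mu, C = gm.means_[k], gm.covariances_[k]
        m = mu[1] + C[1, 0] / C[0, 0] * (x0 - mu[0])   # conditional mean
        v = C[1, 1] - C[1, 0] ** 2 / C[0, 0]           # conditional variance
        dens += w[k] * norm.pdf(y_grid, m, np.sqrt(v))
    return dens

grid = np.linspace(-2, 2, 200)
print(grid[np.argmax(conditional_density(grid, x0=0.5))])  # ~ sin(1.0)
```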

12.
This paper applies Bayesian nonlinear hierarchical models to multivariate claims reserving across different lines of business. A suitable model structure is designed that combines nonlinear hierarchical modeling with Bayesian methods; the classical run-off triangle data of actuarial practice are modeled and analyzed using the WinBUGS software, and MCMC methods yield the full predictive distribution of the claims reserve. This approach extends beyond the best estimates and prediction mean squared errors studied in existing multivariate reserving methods. Inference based on the posterior distribution within a Bayesian framework is valuable for solvency supervision of non-life insurers and for industry decision-making.

13.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems to which the existing approach does not apply. On the other hand, our study builds a bridge between the classical and Bayesian approaches to statistical inference under possible model misspecification: we establish a large-sample correspondence between the classical QL approach and our LI-QL-based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL-based posterior and that of the corresponding quasi maximum likelihood estimator share the same "sandwich"-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.
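The "sandwich" second moment referred to here is the familiar one from White (1982): under misspecification,

$$ \sqrt{n}\,\bigl(\hat{\theta}_{\mathrm{QML}}-\theta^{*}\bigr)\;\xrightarrow{d}\;N\!\bigl(0,\;A(\theta^{*})^{-1}B(\theta^{*})A(\theta^{*})^{-1}\bigr), $$

where $A$ is the expected Hessian of the quasi log-likelihood and $B$ the expected outer product of its score; the abstract's finding is that the LI-QL posterior is asymptotically normal with this same covariance.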

14.
Bayesian approaches to the estimation of DSGE models are becoming increasingly popular. Prior knowledge is normally formalized either directly on deep parameters' values (‘microprior’) or indirectly, on macroeconomic indicators, e.g. moments of observable variables (‘macroprior’). We introduce a non-parametric macroprior which is elicited from impulse response functions and assess its performance in shaping posterior estimates. We find that using a macroprior can lead to substantially different posterior estimates. We probe into the details of this result, showing that model misspecification is likely to be responsible for it. In addition, we assess to what extent the use of macropriors is impaired by the need to calibrate some hyperparameters.

15.
Fisher and "Student" quarreled in the early days of statistics about the design of experiments meant to measure the difference in yield between two breeds of corn. This discussion comes down to randomization versus model building. More than half a century has passed since then, but the differing views remain. In this paper the discussion is put in terms of artificial randomization and natural randomization, the latter being what remains after appropriate modeling. The Bayesian position is also discussed. An example in terms of the old corn-breeding discussion is given, showing that a simple robust model may lead to inference and experimental design that far outperform the inference from randomized experiments. Finally, similar possibilities are suggested in statistical auditing.

16.
A. S. Young, Metrika, 1987, 34(1): 325-339
Summary: We treat the model selection problem in regression as a decision problem in which the decisions are the alternative predictive distributions based on the different sub-models and the parameter space is the set of possible future values of the regressand. The loss function balances the conflicting needs for a predictive distribution with mean close to the true value of y but without too great a variation. The treatment is Bayesian and the criterion derived is a Bayesian generalization of Mallows' (1973) C_p, the Bivar criterion (Young 1982) and AIC (Akaike 1974). An application using a graphical sensitivity analysis is presented.
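For context, the frequentist criterion being generalized is Mallows' $C_p$,

$$ C_p=\frac{\mathrm{SSE}_p}{\hat{\sigma}^{2}}-n+2p, $$

where $\mathrm{SSE}_p$ is the residual sum of squares of a submodel with $p$ parameters and $\hat{\sigma}^{2}$ is estimated from the full model; submodels with $C_p\approx p$ are judged adequate.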

17.
The analysis of sports data, in particular football match outcomes, has long generated immense interest among statisticians. In this paper, we adopt the generalized Poisson difference distribution (GPDD) to model the goal difference of football matches. We discuss the advantages of the proposed model over the Poisson difference (PD) model, which has been used for the same purpose. The GPDD model, like the PD model, is based on the goal difference in each game, which allows us to account for the correlation between the two teams' scores without modelling it explicitly. The main advantage of the GPDD model is its flexibility in the tails, accommodating both shorter and longer tails than the PD distribution. We carry out the analysis in a Bayesian framework in order to incorporate external information, such as historical knowledge or data, through the prior distributions. We model both the mean and the variance of the goal difference and show that such a model performs considerably better than a model with a fixed variance. Finally, the proposed model is fitted to the 2012–2013 Italian Serie A football data, and various model diagnostics are carried out to evaluate the performance of the model.
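The PD baseline is the Skellam distribution (the difference of two independent Poissons), which scipy implements directly. A quick sketch fitting it to simulated goal differences by maximum likelihood follows; the paper's GPDD adds a dispersion parameter that lengthens or shortens these tails, and its Bayesian treatment is not shown here.

```python
# Poisson-difference (Skellam) model for goal differences, fitted by ML.
import numpy as np
from scipy.stats import skellam
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 380
lam_home, lam_away = 1.6, 1.2                 # true scoring intensities
gd = rng.poisson(lam_home, n) - rng.poisson(lam_away, n)   # goal differences

def neg_loglik(theta):
    mu1, mu2 = np.exp(theta)                  # keep intensities positive
    return -skellam.logpmf(gd, mu1, mu2).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]))
print(np.exp(fit.x))                          # ~ (1.6, 1.2)
print(skellam.pmf(0, *np.exp(fit.x)))         # implied draw probability
```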

18.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
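A canonical example makes the concept concrete. For $X_1,\dots,X_n$ iid $N(\mu,\sigma^2)$ with $\sigma$ known,

$$ H_n(\mu)=\Phi\!\left(\frac{\sqrt{n}\,(\mu-\bar{x})}{\sigma}\right) $$

is a confidence distribution for $\mu$: it is a genuine distribution function on the parameter space, its median is the point estimate $\bar{x}$, and the span between its $\alpha/2$ and $1-\alpha/2$ quantiles, $\bar{x}\pm z_{1-\alpha/2}\,\sigma/\sqrt{n}$, reproduces the usual $(1-\alpha)$ confidence interval. With $\sigma$ unknown, $\Phi$ is replaced by the $t_{n-1}$ distribution function and $\sigma$ by $s$.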

19.
Researchers from various scientific disciplines have attempted to forecast the spread of coronavirus disease 2019 (COVID-19). The proposed epidemic prediction methods range from basic curve-fitting methods and traffic interaction models to machine-learning approaches. Combining elements of these approaches yields the Network Inference-based Prediction Algorithm (NIPA). In this paper, we analyse a diverse set of COVID-19 forecast algorithms, including several modifications of NIPA. Among the algorithms we evaluated, the original NIPA performed best at forecasting the spread of COVID-19 in Hubei, China and in the Netherlands. In particular, we show that network-based forecasting is superior to the other forecasting algorithms considered.

20.
This paper describes Bayesian methods for life test planning with Type II censored data from a Weibull distribution, when the Weibull shape parameter is given. We use conjugate prior distributions and criteria based on estimating a quantile of interest of the lifetime distribution. One criterion is based on a precision factor for a credibility interval for a distribution quantile and the other is based on the length of the credibility interval. We provide simple closed-form expressions for the relationship between the needed number of failures and the precision criteria. Examples are used to illustrate the results.
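A sketch of the planning logic with the shape parameter known, under illustrative settings: if the shape is beta, then t^beta is exponential, a Gamma(a, b) prior on its rate is conjugate, and with r observed failures the posterior is Gamma(a + r, b + W), where W is the total transformed time on test. The ratio of the upper to lower credibility limits for a lifetime quantile then depends on the data only through r, which is what allows a closed-form link between the required number of failures and the precision target.

```python
# Precision factor of a quantile credibility interval vs. number of failures,
# Weibull lifetimes with known shape and a conjugate gamma prior on the rate.
from scipy.stats import gamma

beta, a, alpha = 2.0, 0.5, 0.05   # shape, prior shape, credibility level

def precision_factor(r):
    """Ratio of upper to lower 95% credibility limits for a quantile t_p.
    The quantile's constant -log(1-p) and the posterior scale b + W both
    cancel in the ratio, so it depends only on a + r."""
    shape = a + r
    lam_lo = gamma.ppf(alpha / 2, shape)
    lam_hi = gamma.ppf(1 - alpha / 2, shape)
    return (lam_hi / lam_lo) ** (1 / beta)

for r in (5, 10, 20, 40):
    print(r, round(precision_factor(r), 2))   # more failures -> tighter
```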
