Similar Literature
20 similar records found.
1.
This paper argues that the mean-variance efficiency of a portfolio P can be assessed through the posterior distribution of an efficiency measure ρ. Applying Bayes' rule, we obtain the posterior distribution of ρ using the Shanghai Composite Index as the sample. The results show that when the set of assets includes a risk-free asset and the prior belief takes the form of a standard noninformative (standard diffuse) prior, the Shanghai Composite Index lacks efficiency, indicating that adding a risk-free asset to the asset set reduces the portfolio's mean-variance efficiency, which is consistent with the theoretical derivation.

2.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
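The working Laplace likelihood idea in the BayesMAR abstract can be sketched in a few lines: an AR(1) model whose Gaussian error is replaced by a Laplace error, with parameters sampled by random-walk Metropolis. This is an illustrative stand-alone sketch, not the paper's implementation; the step sizes, the fixed Laplace scale, and the simulated data are all assumptions.

```python
import math
import random

random.seed(1)

# Simulate an AR(1) series with coefficient 0.5 (illustrative data only).
y = [0.0]
for _ in range(300):
    y.append(0.5 * y[-1] + random.gauss(0, 1))

def log_laplace_lik(theta, y, scale=1.0):
    """Working Laplace log-likelihood for a median-AR(1):
    y_t ~ Laplace(c + b * y_{t-1}, scale)."""
    c, b = theta
    ll = 0.0
    for t in range(1, len(y)):
        ll += -abs(y[t] - c - b * y[t - 1]) / scale - math.log(2 * scale)
    return ll

# Random-walk Metropolis over (intercept, AR coefficient), flat prior.
theta = [0.0, 0.0]
ll = log_laplace_lik(theta, y)
draws = []
for i in range(5000):
    prop = [theta[0] + random.gauss(0, 0.05), theta[1] + random.gauss(0, 0.05)]
    ll_prop = log_laplace_lik(prop, y)
    if math.log(random.random()) < ll_prop - ll:  # accept/reject step
        theta, ll = prop, ll_prop
    if i >= 1000:                                 # discard burn-in
        draws.append(theta[1])

b_hat = sum(draws) / len(draws)  # posterior mean of the AR coefficient
```

Because the likelihood penalizes absolute rather than squared residuals, the posterior tracks the conditional median, which is the source of the robustness the abstract describes.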

3.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented using Gibbs sampling methods in a straightforward way and which allow one to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in terms of both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.

4.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented with theoretical and computational issues for simulation from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.

5.
This paper applies Bayesian nonlinear hierarchical models to multivariate claims reserving across different lines of business. We design a suitable model structure that combines nonlinear hierarchical models with Bayesian methods, use the WinBUGS software to model the classical run-off triangle data from actuarial practice, and obtain the full predictive distribution of the claims reserve via MCMC. This approach extends beyond the best estimates and prediction mean squared error estimates studied in existing multivariate reserving methods. Inference based on the posterior distribution within a Bayesian framework is valuable for the solvency supervision of non-life insurers and for industry decision-making.

6.
The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missing data and chose a Bayesian data augmentation method. We were able to summarize, rank, and compare the effects of each of the 10 factors on medication adherence. This is a promising methodological development in the synthesis of qualitative and quantitative research.

7.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Within our framework, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.
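The core mechanism of weighting density forecasts by log predictive scores can be illustrated with a generic log-score pooling sketch. This is not the paper's feature-based method; the two Gaussian "forecasters", the discount factor, and the data-generating process below are all assumptions for illustration.

```python
import math
import random

random.seed(0)

def normal_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Two hypothetical density forecasters: model A predicts N(0, 1),
# model B predicts N(0, 4). The data are N(0, 1), so A should
# gradually receive more combination weight.
y = [random.gauss(0, 1) for _ in range(200)]

log_scores = [0.0, 0.0]   # discounted cumulative log predictive scores
weights_path = []
for x in y:
    # Time-varying weights: softmax of the cumulative log scores.
    m = max(log_scores)
    expd = [math.exp(s - m) for s in log_scores]
    total = sum(expd)
    w = [e / total for e in expd]
    weights_path.append(w)
    # Update each model's score once the observation arrives.
    log_scores[0] += normal_logpdf(x, 0, 1)
    log_scores[1] += normal_logpdf(x, 0, 2)
    log_scores = [0.95 * s for s in log_scores]  # discount older scores

final_w = weights_path[-1]
```

The discount factor makes the weights time-varying: recent predictive performance dominates, so the combination can adapt if the better forecaster changes over time.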

8.
The risk-return trade-off in human capital investment
In this paper, we analyze investments in human capital in a way which is standard for financial assets, but not (yet) for human capital assets. We study mean-variance plots of human capital assets. We compare the properties of human capital returns using a performance measure and by using tests for mean-variance spanning. Fields differ strongly not only in common rates of return, but also in return per unit of risk. We identify a range of educations that are efficient in terms of investment goods, and a range of educations that may be chosen for consumption purposes.

9.
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts; or for conducting what we refer to as ‘approximate Bayesian forecasting’. The four key issues explored are: (i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; (ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; (iii) the performance of approximate Bayesian forecasting in state space models; and (iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, and in a fraction of the time required to produce predictions via an exact method.
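The ABC-to-forecasting pipeline described above can be sketched with rejection ABC in a simple setting where the likelihood is actually tractable, so the logic is easy to follow. The Gaussian model, flat prior, tolerance, and summary statistic below are illustrative assumptions, not choices from the paper.

```python
import random
import statistics

random.seed(42)

# Observed data from a toy model y_i ~ N(theta, 1), true theta = 2.
y_obs = [random.gauss(2.0, 1.0) for _ in range(50)]
s_obs = statistics.mean(y_obs)              # summary statistic

# Rejection ABC: draw theta from the prior, simulate a dataset, and keep
# theta whenever the simulated summary is close to the observed one.
eps = 0.1
posterior = []
while len(posterior) < 200:
    theta = random.uniform(-10, 10)         # flat prior on theta
    sim = [random.gauss(theta, 1.0) for _ in range(50)]
    if abs(statistics.mean(sim) - s_obs) < eps:
        posterior.append(theta)

# Approximate Bayesian forecasting: one predictive draw per retained
# parameter value gives a sample from the ABC-based predictive.
predictive = [random.gauss(t, 1.0) for t in posterior]
theta_hat = statistics.mean(posterior)
```

Forecasts built this way inherit the approximation error of the ABC posterior, which is exactly the gap the paper measures with proper scoring rules.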

10.
The likelihood ratio (LR) is largely used to evaluate the relative weight of forensic data regarding two hypotheses, and for its assessment, Bayesian methods are widespread in the forensic field. However, the Bayesian ‘recipe’ for the LR presented in most of the literature consists of plugging-in Bayesian estimates of the involved nuisance parameters into a frequentist-defined LR: frequentist and Bayesian methods are thus mixed, giving rise to solutions obtained by hybrid reasoning. This paper provides the derivation of a proper Bayesian approach to assess LRs for the ‘rare type match problem’, the situation in which the expert wants to evaluate a match between the DNA profile of a suspect and that of a trace from the crime scene, and this profile has never been observed before in the database of reference. LR assessment using the two most popular Bayesian models (beta-binomial and Dirichlet-multinomial) is discussed and compared with corresponding plug-in versions.

11.
This paper investigates the economic significance of mean-variance spanning tests using three classical statistical tests in a unified framework. I show how to compute confidence intervals about the Sharpe ratios of tangent portfolios, the variance of return of minimum variance portfolios, as well as the certainty equivalent utility gains. I apply this statistical framework to the question of whether US investors should diversify internationally. The analysis suggests that a strong statistical rejection of the hypothesis that there is no improvement in the minimum variance portfolio’s standard deviation of return does not imply that there are no significant economic benefits to be made in terms of a substantial risk reduction. These results have important implications for empirical tests of mean-variance spanning as well as empirical asset pricing tests and minimum variance bounds on stochastic discount factors.

12.
Journal of Econometrics, 2005, 124(2): 311–334
We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification. C-code along with an R interface for our algorithms is publicly available.

13.
Markov Chain Monte Carlo (MCMC) methods are used to sample from complicated multivariate distributions with normalizing constants that may not be computable in practice and from which direct sampling is not feasible. A fundamental problem is to determine convergence of the chains. Propp & Wilson (1996) devised a Markov chain algorithm called Coupling From The Past (CFTP) that solves this problem, as it produces exact samples from the target distribution and determines automatically how long it needs to run. Exact sampling by CFTP and other methods is currently a thriving research topic. This paper gives a review of some of these ideas, with emphasis on the CFTP algorithm. The concepts of coupling and monotone CFTP are introduced, and results on the running time of the algorithm are presented. The interruptible method of Fill (1998) and the method of Murdoch & Green (1998) for exact sampling from continuous distributions are presented. Novel simulation experiments are reported for exact sampling from the Ising model in the setting of Bayesian image restoration, and the results are compared to standard MCMC. The results show that CFTP works at least as well as standard MCMC, with convergence monitored by the method of Raftery & Lewis (1992, 1996).
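Monotone CFTP is easy to demonstrate on a tiny monotone chain. The sketch below is my own toy example, not the paper's Ising application: it samples exactly from the stationary distribution of a reflecting random walk on {0,...,4} by running coupled top and bottom chains from further and further in the past, reusing the same randomness, until they coalesce at time 0.

```python
import random

def update(x, u):
    """Monotone update on states {0,...,4}: reflecting random walk."""
    if u < 0.5:
        return max(x - 1, 0)
    return min(x + 1, 4)

def cftp(rng):
    """Monotone coupling from the past: run top and bottom chains from
    time -T with the SAME random numbers; double T until they coalesce."""
    T = 1
    us = []  # us[k] is the randomness used at time -(k+1); fixed forever
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, 4
        for t in range(T - 1, -1, -1):   # apply updates at -T, ..., -1
            lo = update(lo, us[t])
            hi = update(hi, us[t])
        if lo == hi:
            return lo                    # exact draw from the target
        T *= 2

rng = random.Random(7)
draws = [cftp(rng) for _ in range(3000)]
# The reflecting walk has a symmetric transition kernel, so its stationary
# distribution is uniform on {0,...,4}: each state should appear ~600 times.
counts = [draws.count(s) for s in range(5)]
```

The two CFTP invariants are visible here: randomness for times already used stays fixed as T doubles, and monotonicity means tracking only the top and bottom chains suffices to detect coalescence of every starting state.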

14.
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions.
The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which the use of non-parametric methods such as decision trees and generalized additive models is promoted to identify important variables and their modelling relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches.
This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, these methods can be applied across any field, irrespective of the type of response.

15.
Computationally efficient methods for Bayesian analysis of seemingly unrelated regression (SUR) models are described and applied. These methods use a direct Monte Carlo (DMC) approach to calculate Bayesian estimation and prediction results under diffuse or informative priors. The DMC approach is employed to compute Bayesian marginal posterior densities, moments, intervals and other quantities, using data simulated from known models and also using data from an empirical example involving firms’ sales. The results obtained by the DMC approach are compared to those yielded by the use of a Markov Chain Monte Carlo (MCMC) approach. It is concluded from these comparisons that the DMC approach is worthwhile and applicable to many SUR and other problems.

16.
The most representative machine learning techniques are implemented for modeling and forecasting U.S. economic activity and recessions in particular. An elaborate, comprehensive, and comparative framework is employed in order to estimate U.S. recession probabilities. The empirical analysis explores the predictive content of numerous well-followed macroeconomic and financial indicators, but also introduces a set of less-studied predictors. The predictive ability of the underlying models is evaluated using a plethora of statistical evaluation metrics. The results strongly support the application of machine learning over more standard econometric techniques in the area of recession prediction. Specifically, the analysis indicates that penalized Logit regression models, k-nearest neighbors, and Bayesian generalized linear models largely outperform ‘original’ Logit/Probit models in the prediction of U.S. recessions, as they achieve higher predictive accuracy across long-, medium-, and short-term forecast horizons.

17.
18.
The most common equity mandate in the financial industry is to try to outperform an externally given benchmark with known weights. The standard quantitative approach to do this is to optimize the portfolio over short time horizons consecutively, using one-period models. However, it is not clear that this approach actually yields good performance in the long run. We provide a theoretical justification to this methodology by verifying that applying the one-period benchmark-relative mean-variance portfolio, i.e., the industry standard optimal portfolio, continuously is in fact the solution to a specific continuous time portfolio optimization problem: a maximum expected utility problem for an investor who is compared against a benchmark and evaluates her performance based on exponential utility at a deterministic future date.

19.
The Rule of Three, its Variants and Extensions
The Rule of Three (R3) states that 3/n is an approximate 95% upper limit for the binomial parameter when there are no events in n trials. This rule is based on the one-sided Clopper–Pearson exact limit, but it is shown that none of the other popular frequentist methods lead to it. It can be seen as a special case of a Bayesian R3, but it is shown that among common choices for a non-informative prior, only the Bayes–Laplace and Zellner priors conform with it. R3 has also incorrectly been extended to 3 being a "reasonable" upper limit for the number of events in a future experiment of the same (large) size, when, instead, it applies to the binomial mean. In Bayesian estimation, such a limit should follow from the posterior predictive distribution. This method seems to give more natural results than the method of prediction limits (although, when based on the Bayes–Laplace prior, it technically converges with that method), which indicates between 87.5% and 93.75% confidence for this extended R3. These results shed light on R3 in general, suggest an extended Rule of Four for a number of events, provide a unique comparison of Bayesian and frequentist limits, and support the choice of the Bayes–Laplace prior among non-informative contenders.
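The numerical relationships in this abstract are easy to verify directly. With zero events in n trials, the exact one-sided Clopper–Pearson 95% upper limit solves (1 - p)^n = 0.05, and under the Bayes–Laplace prior Beta(1, 1) the posterior is Beta(1, n + 1), whose 95% upper credible limit has the same closed form with n + 1 in place of n. The choice n = 100 below is arbitrary, for illustration only.

```python
# Rule of Three check: compare 3/n with the exact Clopper-Pearson limit
# p = 1 - 0.05**(1/n) and the Bayes-Laplace upper credible limit
# p = 1 - 0.05**(1/(n + 1)), both for zero events in n trials.
n = 100
rule_of_three = 3 / n                        # 0.03
exact_cp = 1 - 0.05 ** (1 / n)               # approx. 0.0295
bayes_laplace = 1 - 0.05 ** (1 / (n + 1))    # approx. 0.0292
```

Since ln(0.05) is close to -3, both closed-form limits are close to 3/n for large n, which is exactly why the rule works.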

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)