Similar Documents
20 similar documents retrieved.
1.
We model a regression density flexibly so that at each value of the covariates the density is a mixture of normals, with the means, variances, and mixture probabilities of the components changing smoothly as functions of the covariates. The model extends existing models in two important ways. First, the components are allowed to be heteroscedastic regressions, since the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Heteroscedastic components also typically require fewer mixture components, which makes the model easier to interpret and speeds up computation. The second main extension is to introduce a novel variable selection prior into all components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit flexible high-dimensional surfaces. We use Bayesian inference and Markov chain Monte Carlo methods to estimate the model. Simulated and real examples show that the full generality of our model is needed to fit a large class of densities, but also that special cases of the general model are interesting models for economic data.
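A minimal sketch of the kind of conditional density the abstract describes (not the authors' implementation): at each covariate value x, p(y | x) is a K-component normal mixture whose means, log standard deviations, and softmax mixing probabilities are all linear in x. All parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def conditional_density(y, x, beta_mu, beta_sigma, beta_pi):
    """p(y | x) for a K-component heteroscedastic normal mixture.

    beta_mu, beta_sigma, beta_pi: (K, d) coefficient arrays for the
    component means, log standard deviations, and softmax mixing weights.
    """
    x = np.atleast_1d(x)
    mu = beta_mu @ x                   # component means, shape (K,)
    sigma = np.exp(beta_sigma @ x)     # heteroscedastic std deviations
    logits = beta_pi @ x
    w = np.exp(logits - logits.max())
    w /= w.sum()                       # softmax mixing probabilities
    return np.sum(w * norm.pdf(y, loc=mu, scale=sigma))

rng = np.random.default_rng(0)
K, d = 3, 2
print(conditional_density(0.5, [1.0, 0.3],
                          rng.normal(size=(K, d)),
                          rng.normal(size=(K, d)) * 0.1,
                          rng.normal(size=(K, d))))
```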

2.
This paper extends the conventional Bayesian mixture-of-normals model by permitting the state probabilities to depend on observed covariates. The dependence is captured by a simple multinomial probit model. A conventional and rapidly mixing MCMC algorithm provides access to the posterior distribution at modest computational cost. The model is competitive with existing econometric models, as documented in the paper's illustrations. The first illustration studies quantiles of the distribution of men's earnings conditional on age and education, and shows that smoothly mixing regressions are an attractive alternative to non-Bayesian quantile regression. The second illustration models serial dependence in S&P 500 returns, and shows that the model compares favorably with ARCH models on out-of-sample likelihood criteria.
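A sketch of the quantile use-case: given covariate-dependent mixture parameters (state probabilities, means, standard deviations at some x), conditional quantiles follow by numerically inverting the mixture CDF. The softmax-free fixed weights below stand in for the paper's multinomial probit probabilities; all values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_cdf(y, w, mu, sigma):
    return np.sum(w * norm.cdf(y, loc=mu, scale=sigma))

def conditional_quantile(q, w, mu, sigma):
    lo = (mu - 10 * sigma).min()
    hi = (mu + 10 * sigma).max()
    return brentq(lambda y: mixture_cdf(y, w, mu, sigma) - q, lo, hi)

w = np.array([0.6, 0.4])      # state probabilities at some covariate value
mu = np.array([0.0, 2.0])     # component means at that value
sigma = np.array([1.0, 0.5])
for q in (0.1, 0.5, 0.9):
    print(q, conditional_quantile(q, w, mu, sigma))
```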

3.
This paper develops methods of Bayesian inference in a sample selection model. The main feature of this model is that the outcome variable is only partially observed. We first present a Gibbs sampling algorithm for a model in which the selection and outcome errors are normally distributed. The algorithm is then extended to models characterized by non-normality: we use a Dirichlet process prior and model the distribution of the unobservables as a mixture of normal distributions with a random number of components. The posterior distribution in this model can simultaneously detect the presence of selection effects and departures from normality. Our methods are illustrated using simulated data and an extract from the RAND Health Insurance Experiment.
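A minimal sketch of the data-augmentation step that drives such Gibbs samplers: latent selection utilities z_i = w_i'γ + u_i are drawn from normals truncated to (0, ∞) when selection s_i = 1 and to (−∞, 0] when s_i = 0 (Albert–Chib-style augmentation). A full sampler would add draws for the outcome equation and the error covariance; names here are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def draw_latent_utilities(W, gamma, s, rng):
    mean = W @ gamma
    # truncation bounds in standardized units, per observation
    a = np.where(s == 1, -mean, -np.inf)
    b = np.where(s == 1, np.inf, -mean)
    return truncnorm.rvs(a, b, loc=mean, scale=1.0, random_state=rng)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 2))
gamma = np.array([0.5, -1.0])
s = (W @ gamma + rng.normal(size=8) > 0).astype(int)
z = draw_latent_utilities(W, gamma, s, rng)
print(np.all((z > 0) == (s == 1)))   # signs of z agree with selection
```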

4.
This paper investigates the benefits of internet search data, in the form of Google Trends, for nowcasting real U.S. GDP growth in real time through the lens of mixed-frequency Bayesian Structural Time Series (BSTS) models. We augment both model and methodology to make them better suited to nowcasting with a large number of potential covariates. Specifically, we allow the state variances to shrink towards zero to avoid overfitting, extend the spike-and-slab variable selection (SSVS) prior to the more flexible normal-inverse-gamma prior, which stays agnostic about the underlying model size, and adapt the horseshoe prior to the BSTS framework. The application to nowcasting GDP growth, as well as a simulation study, demonstrates that the horseshoe-prior BSTS improves markedly upon the SSVS prior and the original BSTS model, with the largest gains in dense data-generating processes. Our application also shows that a high-dimensional set of search terms can improve nowcasts early in a quarter, before other macroeconomic data become available. Search terms with high inclusion probability have a clear economic interpretation, reflecting leading signals of economic anxiety and wealth effects.
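A sketch of the horseshoe prior the paper adapts: a half-Cauchy global scale τ and half-Cauchy local scales λ_j give coefficients β_j | λ_j, τ ~ N(0, τ²λ_j²), so a few large signals escape the aggressive global shrinkage. The scale hyperparameters below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import halfcauchy

rng = np.random.default_rng(0)
p = 1000
tau = halfcauchy.rvs(scale=0.1, random_state=rng)          # global shrinkage
lam = halfcauchy.rvs(scale=1.0, size=p, random_state=rng)  # local scales
beta = rng.normal(0.0, tau * lam)                          # prior draw
print(f"tau={tau:.4f}, share of |beta| > 0.1: "
      f"{np.mean(np.abs(beta) > 0.1):.3f}")                # mostly near zero
```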

5.
Copulas provide an attractive approach to constructing multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting tail dependence explicitly. Most available approaches can only estimate tail dependence and correlations via nuisance parameters, so they cannot be used for interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail dependence and correlations as explicit functions of covariates, with the aim of improving copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection among the covariates of both the marginal models and the copula density. The copulas we study include the Joe–Clayton copula, the Clayton copula, the Gumbel copula, and the Student's t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with that of other modeling strategies based on log predictive scores, and a value-at-risk evaluation is also performed for the model comparisons.
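A sketch of the covariate-dependent tail-dependence idea for the Clayton copula: its lower tail dependence is λ_L = 2^(−1/θ), so modeling θ(x) = exp(x'β) > 0 makes λ_L an explicit, forecastable function of covariates. The log link and the coefficient values below are illustrative assumptions.

```python
import numpy as np

def clayton_lower_tail_dependence(x, beta):
    theta = np.exp(np.dot(x, beta))   # log link keeps theta > 0
    return 2.0 ** (-1.0 / theta)

beta = np.array([0.2, 1.5])
for x in ([1.0, -0.5], [1.0, 0.0], [1.0, 0.8]):
    print(x, clayton_lower_tail_dependence(x, beta))
```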

6.
In this work, we propose a novel framework for density forecast combination that constructs time-varying weights from time-varying features. The framework estimates the combination weights via Bayesian log predictive scores, so that the optimal forecast combination is determined by time series features computed from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features, which makes our approach more interpretable than black-box forecast combination schemes. We apply the framework to stock market data and to the M3 competition data. Within this structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection further improves accuracy for both point forecasts and density forecasts.
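A simplified sketch of turning historical log predictive scores into combination weights: each candidate density forecaster accumulates a discounted log score, and a softmax over the totals yields the pooling weights. The discount factor and the simulated scores are illustrative; the paper's feature-driven, time-varying weighting is richer than this.

```python
import numpy as np

def combination_weights(log_scores, discount=0.95):
    """log_scores: (T, K) array of per-period log predictive densities."""
    T = log_scores.shape[0]
    decay = discount ** np.arange(T - 1, -1, -1)  # recent periods count more
    total = decay @ log_scores                    # (K,) discounted sums
    w = np.exp(total - total.max())
    return w / w.sum()

rng = np.random.default_rng(0)
scores = rng.normal(loc=[-1.2, -1.0, -1.5], scale=0.1, size=(100, 3))
print(combination_weights(scores))   # favors the middle forecaster
```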

7.
We develop in this paper a novel portfolio selection framework that is doubly robust, in both return distribution modeling and portfolio optimization. While predicting future return distributions is always the most compelling challenge in investment, any underlying distribution can be well approximated by a mixture distribution, provided the component list of the mixture covers all distributions corresponding to the scenario analysis of potential market modes. Adopting a mixture distribution enables us to (1) reduce the problem of distribution prediction to a parameter estimation problem, in which the mixture weights are estimated under a Bayesian learning scheme and the corresponding credible regions of the mixture weights are obtained as well, and (2) harmonize information from different channels, such as historical data, market-implied information, and investors' subjective views. We further formulate a robust mean-CVaR portfolio selection problem to deal with the inherent uncertainty in predicting future return distributions. Using duality theory, we show that the robust portfolio selection problem via learning with a mixture model can be reformulated as a linear program or a second-order cone program, which can be solved effectively in polynomial time. We present simulation analyses and preliminary empirical tests to illustrate the significance of the proposed approach and to demonstrate its pros and cons.
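A sketch of the Rockafellar–Uryasev linear-programming form of mean-CVaR optimization over equally weighted return scenarios (e.g., draws from an estimated mixture). The paper's robust version additionally accounts for uncertainty in the mixture weights; this shows only the nominal LP, with illustrative data.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(R, beta=0.95, target_return=0.0):
    """R: (S, n) scenario return matrix. Long-only, fully invested."""
    S, n = R.shape
    mu = R.mean(axis=0)
    # decision vector x = [w (n), alpha (1), u (S)]
    c = np.concatenate([np.zeros(n), [1.0], np.ones(S) / ((1 - beta) * S)])
    # u_s >= -r_s'w - alpha   <=>   -R w - alpha - u <= 0
    A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # mean return constraint: mu'w >= target
    A_ub = np.vstack([A_ub, np.concatenate([-mu, [0.0], np.zeros(S)])])
    b_ub = np.append(b_ub, -target_return)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:n], res.fun   # weights, optimal CVaR

rng = np.random.default_rng(0)
R = rng.normal([0.001, 0.0005, 0.002], [0.02, 0.01, 0.03], size=(500, 3))
w, cvar = min_cvar_portfolio(R, target_return=0.0005)
print(np.round(w, 3), round(cvar, 4))
```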

8.
Peixin Zhao & Liugen Xue, Metrika (2011) 74(2): 231–245
This paper focuses on variable selection for varying coefficient models when some covariates are measured with error. We present a bias-corrected variable selection procedure that combines basis function approximations with shrinkage estimation. With an appropriate choice of tuning parameters, we establish the consistency of the variable selection procedure and derive the optimal convergence rate of the regularized estimators. A simulation study and a real data application assess the finite-sample performance of the proposed procedure.
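A simplified sketch of the basis-expansion-plus-shrinkage step for a varying coefficient model y = β(u)·x + ε: β(u) is expanded in a truncated power basis and the basis coefficients are estimated with a ridge penalty. The paper's bias correction for measurement error, and its selection-oriented penalty, are omitted here; all values are illustrative.

```python
import numpy as np

def truncated_power_basis(u, knots, degree=2):
    cols = [u ** d for d in range(degree + 1)]
    cols += [np.maximum(u - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 300
u = rng.uniform(0, 1, n)              # index variable of beta(u)
x = rng.normal(size=n)                # regressor
beta_true = np.sin(2 * np.pi * u)     # true varying coefficient
y = beta_true * x + 0.2 * rng.normal(size=n)

B = truncated_power_basis(u, knots=np.linspace(0.1, 0.9, 9))
Z = B * x[:, None]                    # design: beta(u)*x = (B a)*x
lam = 1.0                             # shrinkage tuning parameter
a = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print(np.corrcoef(B @ a, beta_true)[0, 1])   # should be close to 1
```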

9.
The interplay between the Bayesian and Frequentist approaches: a general nesting spatial panel-data model (Spatial Economic Analysis). An econometric framework mixing the Frequentist and Bayesian approaches is proposed for estimating a general nesting spatial model. First, it avoids imposing specific dependency structures between unobserved heterogeneity and the regressors, which improves the mixing properties of Markov chain Monte Carlo (MCMC) procedures in the presence of unobserved heterogeneity. Second, it allows model selection within a strong statistical framework, a feature not easily obtained under a purely Frequentist approach. Simulation exercises show that the approach performs well, and the methodology is applied to analyse the relation between productivity and public investment in the United States.

10.
We develop a Bayesian semiparametric approach to the instrumental variable problem. We assume linear structural and reduced-form equations, but model the error distributions nonparametrically, using a Dirichlet process prior for the joint distribution of the structural and instrumental variable equation errors. Our implementation of the Dirichlet process prior uses a normal distribution as the base model, so it can be interpreted as modeling the unknown joint error distribution as a mixture of normals with a random number of mixture components. We demonstrate that the procedure is both feasible and sensible using actual and simulated data. Sampling experiments compare inferences from the nonparametric Bayesian procedure with those from procedures in the recent literature on weak-instrument asymptotics. When the errors are non-normal, our procedure is more efficient than standard Bayesian or classical methods.
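A sketch of why a Dirichlet process prior with a normal base measure behaves like a mixture of normals with a random number of effective components: the stick-breaking construction below draws a (truncated) random mixing distribution. The concentration parameter and base measure are illustrative assumptions.

```python
import numpy as np

def stick_breaking_mixture(alpha=1.0, n_atoms=50, rng=None):
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])  # stick-breaking weights
    mu = rng.normal(0.0, 2.0, size=n_atoms)   # atoms drawn from the normal base
    return w, mu

w, mu = stick_breaking_mixture(alpha=1.0, rng=np.random.default_rng(0))
print("effective components (w > 0.01):", np.sum(w > 0.01))
```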

11.
We focus on Bayesian model selection for the variable selection problem in large model spaces. The challenge is to search the huge model space adequately while accurately approximating the posterior probabilities of the visited models. The choice of prior distributions over models is also an important issue.

12.
This article considers ultrahigh-dimensional forecasting problems with survival response variables. We propose a two-step model averaging procedure for improving forecasts of the true conditional mean of a survival response. The first step constructs a class of candidate models, each with low-dimensional covariates: a feature screening procedure separates active from inactive predictors via a marginal Buckley–James index, and groups covariates with similar index sizes into regression models for the survival response. The proposed screening method can select active predictors under covariate-dependent censoring and enjoys sure screening consistency under mild regularity conditions. The second step finds the optimal model weights for averaging by adapting a delete-one cross-validation criterion, without the standard constraint that the weights sum to one. The theoretical results show that the delete-one cross-validation criterion asymptotically achieves the lowest possible forecasting loss. Numerical studies demonstrate the superior performance of the proposed screening and model averaging procedures over existing methods.

13.
In many surveys, imputation procedures are used to account for non-response bias induced by either unit non-response or item non-response. Such procedures work best (in terms of reducing non-response bias) when the models include covariates that are highly predictive of both the response and the outcome variables. To achieve this, we propose a method for selecting the sets of covariates used in regression imputation models, or used to determine imputation cells, for one or more outcome variables, with the fraction of missing information (FMI) obtained via a proxy pattern-mixture (PPM) model as the key metric. In our variable selection approach, we use the PPM model to obtain a maximum likelihood estimate of the FMI for separate sets of candidate imputation models and look for the point at which the FMI levels off and further auxiliary variables no longer improve the imputation model. We illustrate the proposed approach using empirical data from the Ohio Medicaid Assessment Survey and the Service Annual Survey.

14.
Model selection among several non-nested models using the deviance information criterion within the Bayesian inference Using Gibbs Sampling (BUGS) software needs to be treated with caution. This is particularly important when a model can be specified in several mixing representations, as with the normal variance-mean mixture distributions that occur in financial contexts. We propose a procedure for comparing the goodness of fit of several non-nested models, which uses the BUGS software in part.

15.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood, or several marginal likelihoods, for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods of all subset-variable models must be computed. The proposed estimates use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require the specific structure or form of the MCMC sampling algorithm used to generate the sample to be known. The theoretical properties of the proposed method are examined in detail, and its applicability and usefulness are demonstrated via ordinal probit regression models. A real dataset involving ordinal outcomes further illustrates the proposed methodology.

16.
Model specification for state space models is a difficult task, as one has to decide which components to include and whether these components are fixed or time-varying. To this end, a new model-space MCMC method is developed in this paper. It extends the Bayesian variable selection approach, usually applied to variable selection in regression models, to state space models. For non-Gaussian state space models, the stochastic model search MCMC makes use of auxiliary mixture sampling. We focus on structural time series models including seasonal components, trends, or interventions, and apply the method to various well-known time series.

17.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented straightforwardly with Gibbs sampling and that deal with model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.
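A sketch of the spatial autoregressive (SAR) data-generating process that such variable selection targets: y = ρWy + Xβ + ε, i.e., y = (I − ρW)⁻¹(Xβ + ε), with a row-normalized weight matrix W and a sparse β. The selection step (spike-and-slab-type priors on β) is not shown; the random W and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
W = rng.binomial(1, 0.05, size=(n, n)).astype(float)
np.fill_diagonal(W, 0.0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1.0)      # row-normalize
rho = 0.4
beta = np.array([1.5, 0.0, -2.0, 0.0, 0.0])             # sparse truth
X = rng.normal(size=(n, p))
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)
print(y[:5])
```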

18.
The paper takes up Bayesian inference in time series models when essentially nothing is known about the distribution of the dependent variable given past realizations or other covariates. It proposes the use of kernel quasi-likelihoods, upon which formal inference can be based. Gibbs sampling with data augmentation performs the computations required for a numerical Bayesian analysis of the model. The method is illustrated with artificial and real data sets.

19.
The financial well-being (FWB) of individuals is becoming increasingly important across many disciplines. In this study, we use the 2016 National Financial Well-Being Survey administered by the Consumer Financial Protection Bureau to assess the determinants of an individual's FWB. We identify 144 potential covariates that could explain variation in individuals' FWB scores. Our statistical methodology is the Bayesian LASSO, a covariate selection method that also ranks covariates by importance. Of the 144 potential covariates, 26 have 95% credible intervals that do not contain zero. Broadly speaking, the results show that objective measures of financial competency, together with psychological and sociological factors, account for the bulk of the explanatory power for an individual's FWB score.
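A sketch of the Bayesian LASSO Gibbs sampler (in the Park–Casella scale-mixture form) used for this kind of covariate selection: coefficients whose 95% credible intervals exclude zero are flagged as important. A fixed penalty λ and a small simulated design stand in for the survey's 144 covariates; all values are illustrative.

```python
import numpy as np

def bayesian_lasso(X, y, lam=1.0, n_iter=2000, rng=None):
    rng = rng or np.random.default_rng()
    n, p = X.shape
    sigma2, inv_tau2 = 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    XtX, Xty = X.T @ X, X.T @ y
    for t in range(n_iter):
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        resid = y - X @ beta
        shape = (n - 1 + p) / 2.0
        scale = (resid @ resid + beta @ (inv_tau2 * beta)) / 2.0
        sigma2 = scale / rng.gamma(shape)          # inverse-gamma draw
        mean_ig = np.sqrt(lam**2 * sigma2 / beta**2)
        inv_tau2 = rng.wald(mean_ig, lam**2)       # inverse-Gaussian draw
        draws[t] = beta
    return draws

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5] + [0.0] * 8)
y = X @ beta_true + rng.normal(size=n)
draws = bayesian_lasso(X, y, lam=2.0, rng=rng)[500:]   # drop burn-in
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
print("selected:", np.where((lo > 0) | (hi < 0))[0])   # intervals excluding 0
```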

20.
A semiparametric two-component mixture model is considered, in which the distribution of one (primary) component is unknown and assumed symmetric, while the distribution of the other component (the admixture) is known. We consider three estimators for the pdf of the primary component: a naive estimator, a symmetrized naive estimator, and a symmetrized estimator with adaptive weights. The asymptotic behavior and small-sample performance of the estimators are investigated, and some rules of thumb for bandwidth selection are discussed.
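A sketch of the symmetrization idea in isolation: if the target density is symmetric about m, a kernel estimate can be improved by averaging it with its mirror image, f̂_sym(x) = (f̂(x) + f̂(2m − x))/2. The decomposition of the mixture into its two components is omitted; the sample and the median-based center estimate are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=300) + 1.0   # symmetric about m = 1
kde = gaussian_kde(sample)                      # naive kernel estimate
m = np.median(sample)                           # estimate of the center

def f_sym(x):
    return 0.5 * (kde(x) + kde(2 * m - x))      # symmetrized estimate

x = np.array([-1.0, 1.0, 3.0])
print(f_sym(x), f_sym(2 * m - x))               # equal by construction
```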
