Similar literature
Found 20 similar documents (search time: 795 ms)
1.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented using Gibbs sampling methods in a straightforward way and which allow one to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in terms of both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.
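The Gibbs-sampling variable selection this abstract describes can be sketched in a simplified, non-spatial form. The version below assumes a plain linear model with a Zellner g-prior and a uniform prior over models; the paper's spatial lag term and exact priors are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def log_marginal(y, X, gamma, g):
    """Log marginal likelihood under a Zellner g-prior, up to a constant
    shared by all models (so differences between models are exact)."""
    n = len(y)
    yty = y @ y
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        return -0.5 * n * np.log(yty)
    Xg = X[:, idx]
    beta_hat = np.linalg.solve(Xg.T @ Xg, Xg.T @ y)
    ssr = y @ (Xg @ beta_hat)            # explained sum of squares
    return (-0.5 * idx.size * np.log(1 + g)
            - 0.5 * n * np.log(yty - g / (1 + g) * ssr))

def gibbs_select(y, X, n_iter=400, seed=0):
    """Scan inclusion indicators one at a time, drawing each from its
    conditional posterior; returns posterior inclusion frequencies."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    g = float(n)                          # unit-information g-prior
    gamma = np.zeros(p, dtype=int)
    freq = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            lm = np.empty(2)
            for v in (0, 1):
                gamma[j] = v
                lm[v] = log_marginal(y, X, gamma, g)
            p_in = 1.0 / (1.0 + np.exp(np.clip(lm[0] - lm[1], -700, 700)))
            gamma[j] = int(rng.random() < p_in)
        freq += gamma
    return freq / n_iter

# Toy check: two real predictors out of six
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.standard_normal(200)
incl = gibbs_select(y, X)
```

With a strong signal the sampler concentrates on the true pair of predictors, which is the behaviour the simulation study compares against Bayesian model averaging.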

2.
The main goal of both Bayesian model selection and classical hypotheses testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between both approaches are the explicit use of prior information by Bayesians, and the explicit use of null distributions by the classicists. Formalization of prior information in prior distributions is often difficult. In this paper two practical approaches (encompassing priors and training data) to specify prior distributions will be presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative for the currently prevalent testing of hypotheses using p-values. Both approaches will be compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.
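The posterior model probabilities this abstract contrasts with p-values can be approximated without explicit priors via BIC, a large-sample Laplace approximation to the marginal likelihood. The sketch below assumes equal prior model probabilities and Gaussian linear models; the function names are illustrative, not the paper's.

```python
import numpy as np

def bic(y, X):
    """BIC of a Gaussian linear model fit by OLS (up to additive constants
    common to all models, which cancel in the comparison)."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def posterior_model_probs(y, designs):
    """Approximate posterior probabilities of competing design matrices,
    assuming each model is equally likely a priori."""
    bics = np.array([bic(y, X) for X in designs])
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

# Three nested candidates: intercept only, +x1 (true), +x1 and a noise term
rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1 + 2 * x1 + rng.standard_normal(n)
ones = np.ones((n, 1))
designs = [ones,
           np.column_stack([ones, x1]),
           np.column_stack([ones, x1, x2])]
probs = posterior_model_probs(y, designs)
```

Unlike a p-value, the output is a probability over the whole candidate set, so under-fitting and over-fitting models are penalised on the same scale.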

3.
In this paper we investigate a spatial Durbin error model with finite distributed lags and consider the Bayesian MCMC estimation of the model with a smoothness prior. We also study the corresponding Bayesian model selection procedure for the spatial Durbin error model, the spatial autoregressive model and the matrix exponential spatial specification model. We derive expressions of the marginal likelihood of the three models, which greatly simplify the model selection procedure. Simulation results suggest that the Bayesian estimates of high order spatial distributed lag coefficients are more precise than the maximum likelihood estimates. When the data is generated with a general declining pattern or a unimodal pattern for lag coefficients, the spatial Durbin error model can better capture the pattern than the SAR and the MESS models in most cases. We apply the procedure to study the effect of right to work (RTW) laws on manufacturing employment.

4.
We propose a Bayesian estimation procedure for the generalized Bass model that is used in product diffusion models. Our method forecasts product sales early based on previous similar markets; that is, we obtain pre-launch forecasts by analogy. We compare our forecasting proposal to traditional estimation approaches, and alternative new product diffusion specifications. We perform several simulation exercises, and use our method to forecast the sales of room air conditioners, BlackBerry handheld devices, and compressed natural gas. The results show that our Bayesian proposal provides better predictive performances than competing alternatives when little or no historical data are available, which is when sales projections are the most useful.
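For reference, the standard Bass diffusion curve that the generalized model extends has a closed form. The sketch below is the textbook specification (market potential m, innovation coefficient p, imitation coefficient q), not the paper's generalized or Bayesian version.

```python
import numpy as np

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions F(t) scaled by market potential m."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

def bass_sales(t, m, p, q):
    """Per-period sales, the time derivative of cumulative adoptions."""
    e = np.exp(-(p + q) * t)
    return m * (p + q) ** 2 / p * e / (1 + (q / p) * e) ** 2

# Typical durable-goods values; sales peak at t* = ln(q/p) / (p + q)
m, p, q = 1000.0, 0.03, 0.38
t = np.linspace(0.0, 20.0, 201)
cum = bass_cumulative(t, m, p, q)
sales = bass_sales(t, m, p, q)
```

The pre-launch problem in the abstract amounts to putting priors on (m, p, q) informed by analogous markets, then updating as early sales arrive.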

5.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. Given this, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.

6.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We can establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

7.
In this paper, we review and unite the literatures on returns to schooling and Bayesian model averaging. We observe that most studies seeking to estimate the returns to education have done so using particular (and often different across researchers) model specifications. Given this, we review Bayesian methods which formally account for uncertainty in the specification of the model itself, and apply these techniques to estimate the economic return to a college education. The approach described in this paper enables us to determine those model specifications which are most favored by the given data, and also enables us to use the predictions obtained from all of the competing regression models to estimate the returns to schooling. The reported precision of such estimates also accounts for the uncertainty inherent in the model specification. Using U.S. data from the National Longitudinal Survey of Youth (NLSY), we also revisit several 'stylized facts' in the returns to education literature and examine whether they continue to hold after formally accounting for model uncertainty.

8.
We consider classes of multivariate distributions which can model skewness and are closed under orthogonal transformations. We review two classes of such distributions proposed in the literature and focus our attention on a particular, yet quite flexible, subclass of one of these classes. Members of this subclass are defined by affine transformations of univariate (skewed) distributions that ensure the existence of a set of coordinate axes along which there is independence and the marginals are known analytically. The choice of an appropriate m-dimensional skewed distribution is then restricted to the simpler problem of choosing m univariate skewed distributions. We introduce a Bayesian model comparison setup for selection of these univariate skewed distributions. The analysis does not rely on the existence of moments (allowing for any tail behaviour) and uses equivalent priors on the common characteristics of the different models. Finally, we apply this framework to multi-output stochastic frontiers using data from Dutch dairy farms.

9.
Statistical Decision Problems and Bayesian Nonparametric Methods
This paper considers parametric statistical decision problems conducted within a Bayesian nonparametric context. Our work was motivated by the realisation that typical parametric model selection procedures are essentially incoherent. We argue that one solution to this problem is to use a flexible enough model in the first place, a model that will not be checked no matter what data arrive. Ideally, one would use a nonparametric model to describe all the uncertainty about the density function generating the data. However, parametric models are the preferred choice for many statisticians, despite the incoherence involved in model checking, incoherence that is quite often ignored for pragmatic reasons. In this paper we show how coherent parametric inference can be carried out via decision theory and Bayesian nonparametrics. None of the ingredients discussed here are new, but our main point only becomes evident when one sees all priors—even parametric ones—as measures on sets of densities as opposed to measures on finite-dimensional parameter spaces.

10.
Two principal approaches to the modelling of competing risks in discrete time are considered. In the first approach, which is based on the separation between failure and cause-specific response, only the causes of failure are considered as ordered. The second approach, which is based on the conditional response given that the interval [a_{t-1}, a_t) is reached, allows for an ordering of the causes of failure and the category 'no failure'. The latter approach is shown to be more general. It is shown that the considered competing risks models may be estimated within the framework of generalized linear models. A data set concerning duration of unemployment illustrates the approaches.

11.
This paper presents a Bayesian approach to regression models with time-varying parameters, or state vector models. Unlike most previous research in this field the model allows for multiple observations for each time period. Bayesian estimators and their properties are developed for the general case where the regression parameters follow an ARMA(s,q) process over time. This methodology is applied to the estimation of time-varying price elasticity for a consumer product, using biweekly sales data for eleven domestic markets. The parameter estimates and forecasting performance of the model are compared with various alternative approaches.
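The filtering recursion behind such time-varying-parameter regressions can be illustrated with a Kalman filter. The sketch below uses random-walk parameter dynamics rather than the paper's ARMA(s,q) process, and assumes known variances, so it is a simplified stand-in with illustrative names.

```python
import numpy as np

def kalman_tvp(y, X, sigma2_obs, sigma2_state):
    """Filtered estimates of beta_t in y_t = x_t' beta_t + eps_t with
    random-walk coefficients beta_t = beta_{t-1} + eta_t."""
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 1e3                       # diffuse initial covariance
    betas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P = P + sigma2_state * np.eye(k)      # predict step
        S = x @ P @ x + sigma2_obs            # forecast error variance
        K = P @ x / S                         # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # update step
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# With (near) constant coefficients the filter recovers them
rng = np.random.default_rng(0)
T = 300
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(T)
betas = kalman_tvp(y, X, sigma2_obs=0.01, sigma2_state=1e-8)
```

Setting sigma2_state larger lets the filtered coefficients drift, which is how a time-varying price elasticity would be tracked period by period.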

12.
In this paper, we develop and compare two alternative approaches for calculating the effect of the actual intake when treatments are randomized, but compliance with the assignment in the treatment arm is less than perfect for reasons that are correlated with the outcome. The approaches are based on different identification assumptions about these unobserved confounders. In the first approach, which stems from [Sommer, A., Zeger, S., 1991. On estimating efficacy in clinical trials. Statistics in Medicine 10, 45–52], the unobserved confounders are modeled by a discrete indicator variable that represents subject-type, defined in terms of the potential intake in the face of each possible assignment. In the second approach, confounding is modeled without reference to subject-type in the spirit of the Roy model. Because the two models are non-nested, and model comparison and assessment of the approaches in a real data setting is one of our central goals, we formulate the discussion from a Bayesian perspective, comparing the two models in terms of marginal likelihoods and Bayes factors, and in terms of inferences about the treatment effects. The latter we calculate from a predictive perspective in a way that is different from that in the literature, where typically only a point summary of that effect is calculated. Our real data analysis focuses on the JOBS II eligibility trial that was implemented to test the effectiveness of a job search seminar in decreasing the negative mental health effects commonly associated with job loss. We provide a comparative analysis of the data from the two approaches with prior distributions that are both reasonable in the context of the data and comparable across the model specifications. We show that the approaches can lead to different evaluations of the treatment.

13.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro‐area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro‐area‐wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss the use as indicators of the estimated factors from a dynamic factor model for all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single‐indicator or factor‐based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.

14.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and in a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.
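One concrete way a Bayesian treatment differs from plug-in mean-variance is through the predictive distribution: under a diffuse prior the predictive covariance inflates the sample covariance, shrinking positions toward zero. The inflation factor below is the standard diffuse-prior predictive-t scale; treat the exact constant, and the function names, as illustrative assumptions rather than the paper's method.

```python
import numpy as np

def plugin_weights(returns, risk_aversion):
    """Classical mean-variance weights w = (1/gamma) Sigma^{-1} mu,
    treating the sample moments as the truth."""
    mu = returns.mean(axis=0)
    Sigma = np.cov(returns, rowvar=False)
    return np.linalg.solve(Sigma, mu) / risk_aversion

def predictive_weights(returns, risk_aversion):
    """Mean-variance weights using the Bayesian predictive covariance
    under a diffuse prior, which accounts for estimation risk."""
    T, n = returns.shape
    mu = returns.mean(axis=0)
    Sigma = np.cov(returns, rowvar=False)
    scale = (1 + 1 / T) * (T - 1) / (T - n - 2)   # predictive-t inflation
    return np.linalg.solve(scale * Sigma, mu) / risk_aversion

rng = np.random.default_rng(0)
rets = 0.01 + 0.05 * rng.standard_normal((60, 3))   # 60 months, 3 assets
w_plug = plugin_weights(rets, risk_aversion=3.0)
w_pred = predictive_weights(rets, risk_aversion=3.0)
```

Since the inflation factor exceeds one whenever T > n + 2, the predictive weights are always a scaled-down version of the plug-in weights in this simple case; model uncertainty would shrink them further.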

15.
This paper considers the identification and estimation of an extension of Roy's model (1951) of sectoral choice, which includes a non-pecuniary component in the selection equation and allows for uncertainty on potential earnings. We focus on the identification of the non-pecuniary component, which is key to disentangling the relative importance of monetary incentives versus preferences in the context of sorting across sectors. By making the most of the structure of the selection equation, we show that this component is point identified from the knowledge of the covariate effects on earnings, as soon as one covariate is continuous. Notably, and in contrast to most results on the identification of Roy models, this implies that identification can be achieved without any exclusion restriction nor large support condition on the covariates. As a by-product, bounds are obtained on the distribution of the ex ante monetary returns. We propose a three-stage semiparametric estimation procedure for this model, which yields root-n consistent and asymptotically normal estimators. Finally, we apply our results to the educational context, by providing new evidence from French data that non-pecuniary factors are a key determinant of higher education attendance decisions.

16.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that they select the best model with the least number of parameters with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but are not consistent for selecting among nonnested models and possibly overlapping models. These findings have important bearing for applied researchers who are frequent users of model selection tools for empirical investigation of model predictions.

17.
Statistical analysis of autoregressive-moving average (ARMA) models is an important non-standard problem. No classical approach is widely accepted; legitimacy for most classical approaches is based solely on asymptotic grounds, while small sample sizes are common. The only obstacles to the Bayesian approach are designing a structure through which prior information can be incorporated and designing a practical computational method. The objective of this work is to overcome these two obstacles. In addition to the standard results, the Bayesian approach gives a different method of determining the order of the ARMA model, that is, (p, q).

18.
We develop a Bayesian random compressed multivariate heterogeneous autoregressive (BRC-MHAR) model to forecast the realized covariance matrices of stock returns. The proposed model randomly compresses the predictors and reduces the number of parameters. We also construct several competing multivariate volatility models with alternative shrinkage methods to compress the parameters' dimensions. We compare the forecast performances of the proposed models with those of the competing models based on both statistical and economic evaluations. The results of the statistical evaluation suggest that the BRC-MHAR models have better forecast precision than the competing models for the short-term horizon. The results of the economic evaluation suggest that the BRC-MHAR models are superior to the competing models in terms of the average return, the Sharpe ratio and the economic value.

19.
How to measure and model volatility is an important issue in finance. Recent research uses high‐frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model‐averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log‐volatility. Copyright © 2009 John Wiley & Sons, Ltd.
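The heterogeneous autoregressive (HAR) specification that several of these candidate models build on regresses realized volatility on its daily, weekly, and monthly averages. A minimal OLS version (Corsi's HAR-RV, without the jump, power-variation, or GARCH extensions the abstract averages over) might look like:

```python
import numpy as np

def har_design(rv):
    """Build HAR-RV regressors: yesterday's RV plus the past 5-day and
    22-day averages, aligned with the next day's RV as the target."""
    rv = np.asarray(rv, dtype=float)
    T = len(rv)
    rows = []
    for t in range(22, T):
        rows.append([1.0,
                     rv[t - 1],                 # daily component
                     rv[t - 5:t].mean(),        # weekly component
                     rv[t - 22:t].mean()])      # monthly component
    return np.array(rows), rv[22:]

def har_fit(rv):
    """OLS estimates of the HAR coefficients."""
    X, y = har_design(rv)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulate a persistent volatility series (log-AR(1)) and fit
rng = np.random.default_rng(0)
T = 500
lv = np.zeros(T)
for t in range(1, T):
    lv[t] = 0.9 * lv[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(lv)
beta = har_fit(rv)
X, y = har_design(rv)
pred = X @ beta
```

In the model-averaging setup of the abstract, each candidate would add or swap regressors in this design matrix and the forecasts would be weighted by posterior model probabilities.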

20.
We model a regression density flexibly so that at each value of the covariates the density is a mixture of normals with the means, variances and mixture probabilities of the components changing smoothly as a function of the covariates. The model extends the existing models in two important ways. First, the components are allowed to be heteroscedastic regressions as the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Furthermore, we typically need fewer components, which makes it easier to interpret the model and speeds up the computation. The second main extension is to introduce a novel variable selection prior into all the components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit flexible high-dimensional surfaces. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the model. Simulated and real examples are used to show that the full generality of our model is required to fit a large class of densities, but also that special cases of the general model are interesting models for economic data.
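The density being modelled can be written down directly: at each covariate value it is a finite mixture of normals whose means, log-variances, and mixing logits each depend on the covariates. The evaluation sketch below uses linear dependence for all three pieces; the parameter layout is illustrative, not the paper's parameterisation.

```python
import numpy as np

def mixture_density(y, x, W_mean, W_var, W_prob):
    """p(y | x) for a mixture of normals whose component means,
    log-variances and mixing logits are each linear in (1, x)."""
    z = np.append(1.0, x)                      # prepend intercept
    means = W_mean @ z
    variances = np.exp(W_var @ z)              # log-variance regressions
    logits = W_prob @ z
    probs = np.exp(logits - logits.max())      # softmax mixing weights
    probs /= probs.sum()
    comp = (np.exp(-0.5 * (y - means) ** 2 / variances)
            / np.sqrt(2 * np.pi * variances))
    return float(probs @ comp)

# Two components, one covariate: one parameter row per component
W_mean = np.array([[0.0, 1.0], [2.0, -1.0]])
W_var = np.zeros((2, 2))                       # unit variances
W_prob = np.zeros((2, 2))                      # equal weights
ys = np.linspace(-10.0, 10.0, 2001)
dens = np.array([mixture_density(v, np.array([0.5]), W_mean, W_var, W_prob)
                 for v in ys])
```

The variable selection prior described in the abstract would act on the columns of each W matrix, zeroing out covariates that do not improve the fit of that component.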


Copyright©北京勤云科技发展有限公司  京ICP备09084417号