Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Model specification for state space models is a difficult task, as one has to decide which components to include in the model and whether each component is fixed or time-varying. To this end, a new model space MCMC method is developed in this paper. It extends the Bayesian variable selection approach, usually applied in regression models, to state space models. For non-Gaussian state space models, stochastic model search MCMC makes use of auxiliary mixture sampling. We focus on structural time series models including seasonal components, trend, or intervention effects. The method is applied to several well-known time series.
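The stochastic search over inclusion indicators can be illustrated, in a deliberately simplified static-regression analogue, by the closed-form posterior inclusion probability of a single coefficient under a spike-and-slab prior. All numbers below (prior variance, sample size, the simulated coefficient) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, tau2 = 200, 1.0, 4.0            # assumed noise and slab variances
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=np.sqrt(sigma2), size=n)  # true coefficient 0.8

def log_mvn0(y, cov):
    """Log density of a zero-mean multivariate normal at y."""
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + y @ np.linalg.solve(cov, y) + len(y) * np.log(2 * np.pi))

cov0 = sigma2 * np.eye(n)                   # indicator = 0: coefficient excluded
cov1 = cov0 + tau2 * np.outer(x, x)         # indicator = 1: coefficient ~ N(0, tau2)
log_bf = log_mvn0(y, cov1) - log_mvn0(y, cov0)
p_incl = 1.0 / (1.0 + np.exp(-log_bf))      # posterior inclusion prob., prior odds 1:1
print(p_incl)
```

A model-space sampler draws each indicator from exactly this kind of conditional probability; the state space extension replaces the marginal likelihoods with ones computed from the (auxiliary-mixture) state space model.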

2.
There is a common belief among researchers and regional policy makers that the current centralized system of Aeropuertos Españoles y Navegación Aérea (AENA) should be replaced by a more decentralized one in which airport managers have greater autonomy. The main objective of this article is to evaluate the efficiency of Spanish airports using Markov chain Monte Carlo (MCMC) simulation to estimate a stochastic frontier analysis (SFA) model. Our results show a significant level of inefficiency in airport operations. Additionally, we provide efficient marginal cost estimates for each airport, which also cast some doubt on current pricing practices.

3.
This paper analyzes the productivity of farms across 370 municipalities in the Center-West region of Brazil. A stochastic frontier model with a latent spatial structure is proposed to account for possible unknown geographical variation of the outputs. The paper compares versions of the model that include the latent spatial effect in the mean of output or as a variable that conditions the distribution of inefficiency, include or not observed municipal variables, and specify independent normal or conditional autoregressive priors for the spatial effects. The Bayesian paradigm is used to estimate the proposed models. As the resultant posterior distributions do not have a closed form, stochastic simulation techniques are used to obtain samples from them. Two model comparison criteria provide support for including the latent spatial effects, even after considering covariates at the municipal level. Models that ignore the latent spatial effects produce significantly different rankings of inefficiencies across agents.
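The composed-error idea behind a stochastic frontier can be sketched in a few lines: symmetric noise plus a one-sided inefficiency term produces a residual with negative mean and negative skew, which is what frontier models exploit. The scales below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
v = rng.normal(scale=0.3, size=n)           # symmetric measurement noise
u = np.abs(rng.normal(scale=0.5, size=n))   # one-sided (half-normal) inefficiency
e = v - u                                   # composed error of the frontier model

skew = np.mean((e - e.mean()) ** 3) / e.std() ** 3
print(e.mean(), skew)  # negative mean and negative skew signal inefficiency
```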

4.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution, while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented, along with theoretical and computational considerations for simulating from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.

5.
Graphical models provide a powerful and flexible approach to the analysis of complex problems in genetics. While task-specific software may be extremely efficient for any particular analysis, it is often difficult to adapt to new computational challenges. By viewing these genetic applications in a more general framework, many problems can be handled by essentially the same software. This is advantageous in an area where fast methodological development is essential. Once a method has been fully developed and tested, problem-specific software may then be required. The aim of this paper is to illustrate the potential of a graphical model approach to genetic analyses using a very simple and well-understood problem as an example.

6.
In many applications involving time-varying parameter VARs, it is desirable to restrict the VAR coefficients at each point in time to be non-explosive. This is an example of a problem where inequality restrictions are imposed on states in a state space model. In this paper, we describe how existing MCMC algorithms for imposing such inequality restrictions can work poorly (or not at all) and suggest alternative algorithms which exhibit better performance. Furthermore, we show that previous algorithms involve an approximation relating to a key prior integrating constant. Our algorithms are exact, not involving this approximation. In an application involving a commonly used U.S. data set, we present evidence that the algorithms proposed in this paper work well.
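A minimal sketch of the non-explosiveness restriction: a candidate draw of VAR coefficients is kept only if the companion matrix has spectral radius below one. The matrices below are toy examples, not draws from any posterior:

```python
import numpy as np

def companion(A_list):
    """Stack VAR(p) coefficient matrices into companion form."""
    p = len(A_list)
    k = A_list[0].shape[0]
    C = np.zeros((k * p, k * p))
    C[:k, :] = np.hstack(A_list)
    if p > 1:
        C[k:, :-k] = np.eye(k * (p - 1))
    return C

def is_stable(A_list):
    """A draw passes the inequality restriction iff the spectral radius < 1."""
    return bool(np.max(np.abs(np.linalg.eigvals(companion(A_list)))) < 1.0)

stable = is_stable([np.array([[0.5, 0.1], [0.0, 0.4]])])      # eigenvalues 0.5, 0.4
explosive = is_stable([np.array([[1.1, 0.0], [0.0, 0.2]])])   # eigenvalue 1.1
print(stable, explosive)  # True False
```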

7.
Bayesian modification indices are presented that provide information for the process of model evaluation and model modification. These indices can be used to investigate the improvement in a model if fixed parameters are re-specified as free parameters. They can be seen as a Bayesian analogue of the modification indices commonly used in a frequentist framework. The aim is to provide diagnostic information for multi-parameter models in which the number of possible model violations, and the related number of alternative models, is too large to render estimation of each alternative practical. As an example, the method is applied to an item response theory (IRT) model, namely the two-parameter model, to investigate differential item functioning and violations of the assumption of local independence.

8.
This paper is concerned with the Bayesian analysis of stochastic volatility (SV) models with leverage. Specifically, the paper shows how the often-used method of Kim et al. [1998. Stochastic volatility: likelihood inference and comparison with ARCH models. Review of Economic Studies 65, 361–393], developed for SV models without leverage, can be extended to models with leverage. The approach relies on the novel idea of approximating the joint distribution of the outcome and volatility innovations by a suitably constructed ten-component mixture of bivariate normal distributions. The resulting posterior distribution is summarized by MCMC methods, and the small approximation error from working with the mixture is corrected by a reweighting procedure. The overall procedure is fast and highly efficient. We illustrate the ideas on daily returns of the Tokyo Stock Price Index. Finally, extensions of the method are described for superposition models (where the log-volatility is a linear combination of heterogeneous and independent autoregressions) and heavy-tailed error distributions (Student-t and log-normal).
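The object such mixtures approximate can be checked numerically: for a standard normal innovation ε, log ε² follows a log-chi-square(1) distribution with mean −(γ + log 2) ≈ −1.2704 (γ the Euler–Mascheroni constant) and variance π²/2 ≈ 4.9348. This is a standard fact behind this family of samplers; the simulation below is only a sanity check, not the paper's ten-component construction:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(500_000)
z = np.log(eps ** 2)  # the "log chi-square(1)" error that the normal mixture approximates

print(z.mean(), z.var())  # roughly -1.2704 and pi^2 / 2 = 4.9348
```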

9.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
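At the level of point estimation, the working Laplace likelihood corresponds to least-absolute-deviations (median) fitting of the autoregression. A minimal sketch via iteratively reweighted least squares; the IRLS scheme, sample size, and true parameters are my own illustrative choices, not the paper's MCMC:

```python
import numpy as np

def lad_ar1(y, n_iter=200, eps=1e-6):
    """Fit y_t = a + b*y_{t-1} + e_t by iteratively reweighted least squares,
    which approximates the least-absolute-deviations (median regression) fit."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    t = y[1:]
    beta = np.linalg.lstsq(X, t, rcond=None)[0]          # OLS starting value
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(t - X @ beta), eps)  # L1 weights, clamped
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * t))
    return beta

rng = np.random.default_rng(2)
n = 2000
y = np.zeros(n)
for i in range(1, n):  # AR(1) with Laplace errors, matching the model structure
    y[i] = 0.3 + 0.6 * y[i - 1] + rng.laplace(scale=0.5)

a_hat, b_hat = lad_ar1(y)
print(a_hat, b_hat)  # close to the true values (0.3, 0.6)
```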

10.
This article reviews the application of some advanced Monte Carlo techniques in the context of multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations, which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider some Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies that facilitate the application of MLMC within these methods.
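The telescoping identity can be sketched on a toy problem: estimating E[X_T] for a geometric Brownian motion under Euler discretization, with coupled fine/coarse paths sharing the same Brownian increments. The level and sample-size schedule and the model parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0  # assumed GBM parameters

def euler_pair(level, n):
    """n coupled (fine, coarse) Euler terminal values; the coarse path reuses
    the fine path's Brownian increments, pairwise summed."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(scale=np.sqrt(dt), size=(n, nf))
    Xf = np.full(n, X0)
    for j in range(nf):
        Xf = Xf * (1 + mu * dt + sig * dW[:, j])
    if level == 0:
        return Xf, np.zeros(n)           # no coarser level to couple with
    Xc = np.full(n, X0)
    dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse increments from fine ones
    for j in range(nf // 2):
        Xc = Xc * (1 + mu * 2 * dt + sig * dWc[:, j])
    return Xf, Xc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}];
# correction terms have small variance, so fewer samples suffice at fine levels.
est = 0.0
for level, n in [(0, 100_000), (1, 25_000), (2, 6_000), (3, 1_500)]:
    fine, coarse = euler_pair(level, n)
    est += np.mean(fine) if level == 0 else np.mean(fine - coarse)
print(est)  # near the exact value exp(mu * T) ~ 1.0513
```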

11.
We use numerous high-frequency transaction data sets to evaluate the forecasting performances of several dynamic ordinal-response time series models with generalized autoregressive conditional heteroscedasticity (GARCH). The specifications account for three components: leverage effects, in-mean effects and moving average error terms. We estimate the model parameters by developing Markov chain Monte Carlo algorithms. Our empirical analysis shows that the proposed ordinal-response GARCH models achieve better point and density forecasts than standard benchmarks.

12.
Forecasting and turning point predictions in a Bayesian panel VAR model
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model, which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for hierarchical and for Minnesota-type priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
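The multistep point-forecast recursion can be sketched in the simplest VAR(1) case, where iterating y_{T+h} = c + A y_{T+h-1} reproduces the closed form A^h y_T + (I + A + ... + A^{h-1}) c. The numbers are toy values, not the paper's hierarchical panel setup:

```python
import numpy as np

def var_forecast(c, A, y_T, h):
    """Iterated h-step point forecast for a VAR(1)."""
    y = y_T.copy()
    for _ in range(h):
        y = c + A @ y
    return y

c = np.array([0.1, 0.0])
A = np.array([[0.5, 0.2], [0.1, 0.3]])
y_T = np.array([1.0, -0.5])

f3 = var_forecast(c, A, y_T, 3)
closed = np.linalg.matrix_power(A, 3) @ y_T + (np.eye(2) + A + A @ A) @ c
print(np.allclose(f3, closed))  # True
```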

13.
This paper analyzes the drivers of financial distress experienced by small Italian cooperative banks during the latest deep recession, focusing mainly on the importance of bank capital as a predictor of bankruptcy for Italian nonprofit banks. The analysis aims to build an early-warning model suitable for this type of bank. The results reveal non-monotonic effects of bank capital on the probability of failure. In contrast to distress models for for-profit banks, non-performing loans, profitability, liquidity, and management quality have negligible predictive value. The findings also show that unreserved impaired loans have an important impact on the probability of bank distress. Moreover, the loan-loss provision ratio on substandard loans acts as an effective buffer against bank distress. Overall, the results are robust in terms of both the methodology (i.e., frequentist and Bayesian approaches) and the sample used (i.e., cooperative banks in Italy and euro-area countries).

14.
We attempt to clarify a number of points regarding the use of spatial regression models for regional growth analysis. We show that, as in the case of non-spatial growth regressions, the effect of initial regional income levels wears off over time. Unlike the non-spatial case, long-run regional income levels depend on own-region as well as neighbouring-region characteristics, the spatial connectivity structure of the regions, and the strength of spatial dependence. Given this, the search for regional characteristics that exert important influences on income levels or growth rates should use spatial econometric methods that account for spatial dependence, own- and neighbouring-region characteristics, the choice of spatial regression specification, and the weight matrix. The framework adopted here illustrates a unified approach for dealing with these issues.
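The dependence of long-run outcomes on neighbouring regions can be made concrete through the spatial multiplier (I − ρW)⁻¹: for a row-stochastic W, the average total effect of a covariate equals β/(1 − ρ). A small sketch with an assumed 5-region chain and illustrative ρ and β:

```python
import numpy as np

# Row-normalized contiguity weights for 5 regions on a line (assumed example)
W = np.zeros((5, 5))
for i in range(5):
    for j in (i - 1, i + 1):
        if 0 <= j < 5:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)

rho, beta = 0.4, 2.0
M = np.linalg.inv(np.eye(5) - rho * W) * beta  # long-run impact matrix

direct = np.mean(np.diag(M))    # average own-region effect
total = np.mean(M.sum(axis=1))  # average own + neighbouring-region effect
print(direct, total)  # total equals beta / (1 - rho) for row-stochastic W
```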

15.
We develop in this paper a novel portfolio selection framework with a feature of double robustness in both return distribution modeling and portfolio optimization. While predicting future return distributions always represents the most compelling challenge in investment, any underlying distribution can always be well approximated by a mixture distribution, provided the component list of the mixture includes all possible distributions corresponding to the scenario analysis of potential market modes. Adopting a mixture distribution enables us to (1) reduce the problem of distribution prediction to a parameter estimation problem, in which the mixture weights are estimated under a Bayesian learning scheme and the corresponding credible regions of the mixture weights are obtained as well, and (2) harmonize information from different channels, such as historical data, market implied information, and investors' subjective views. We further formulate a robust mean-CVaR portfolio selection problem to deal with the inherent uncertainty in predicting future return distributions. By employing duality theory, we show that the robust portfolio selection problem via learning with a mixture model can be reformulated as a linear program or a second-order cone program, which can be solved effectively in polynomial time. We present the results of simulation analyses and preliminary empirical tests to illustrate the significance of the proposed approach and demonstrate its pros and cons.
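The CVaR objective in such formulations is usually handled through the Rockafellar–Uryasev representation, CVaR_α(L) = min_t { t + E[(L − t)⁺]/(1 − α) }, which is what makes the linear-programming reformulation possible. A deterministic toy check that this representation matches the plain tail average:

```python
import numpy as np

losses = np.arange(100, dtype=float)  # toy loss sample
alpha = 0.95

# Rockafellar-Uryasev form, minimized over the sample points (the objective is
# piecewise linear, so its minimum is attained at a sample point here)
cands = losses
cvar_ru = np.min(
    cands + np.maximum(losses[None, :] - cands[:, None], 0).mean(axis=1) / (1 - alpha)
)

# Direct average of the worst (1 - alpha) share of outcomes
k = int(round((1 - alpha) * len(losses)))
cvar_tail = np.mean(np.sort(losses)[-k:])
print(cvar_ru, cvar_tail)  # both ~ 97.0
```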

16.
We develop a minimum amount of theory of Markov chains at as low a level of abstraction as possible in order to prove two fundamental probability laws for standard Markov chain Monte Carlo algorithms:
1. The law of large numbers explains why the algorithm works: it states that the empirical means calculated from the samples converge towards their "true" expected values, viz. expectations with respect to the invariant distribution of the associated Markov chain (=the target distribution of the simulation).
2. The central limit theorem expresses the deviations of the empirical means from their expected values in terms of asymptotically normally distributed random variables. We also present a formula and an estimator for the associated variance.
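Both laws can be observed on a toy chain: a random-walk Metropolis sampler targeting N(0, 1), whose ergodic average illustrates the law of large numbers, with the CLT variance estimated by batch means. The step size and batch length are illustrative choices, not the book-keeping of the text:

```python
import numpy as np

rng = np.random.default_rng(4)

def rw_metropolis(logpdf, x0, n, step):
    """Random-walk Metropolis; returns the whole chain."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        prop = x[i - 1] + step * rng.standard_normal()
        if np.log(rng.random()) < logpdf(prop) - logpdf(x[i - 1]):
            x[i] = prop
        else:
            x[i] = x[i - 1]  # rejected: repeat the current state
    return x

chain = rw_metropolis(lambda v: -0.5 * v * v, 0.0, 50_000, 2.4)  # target N(0, 1)

# Law of large numbers: the ergodic average converges to E[X] = 0
print(chain.mean())

# CLT: batch-means estimator of the asymptotic variance sigma^2
b = 250
batches = chain[: (len(chain) // b) * b].reshape(-1, b).mean(axis=1)
asym_var = b * batches.var(ddof=1)
print(asym_var)  # > 1 because of positive autocorrelation in the chain
```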

17.
Pair trading is a statistical arbitrage strategy used on similar assets with dissimilar valuations. We utilize smooth transition heteroskedastic models with a second-order logistic function to generate trading entry and exit signals and suggest two pair trading strategies: the first uses the upper and lower threshold values in the proposed model as trading entry and exit signals, while the second instead takes one-step-ahead quantile forecasts obtained from the same model. We employ Bayesian Markov chain Monte Carlo sampling methods for updating the estimates and quantile forecasts. As an illustration, we conduct a simulation study and an empirical analysis of the daily returns of 36 stocks from U.S. stock markets. We use the minimum square distance method to select ten stock pairs, choose an additional five pairs consisting of two companies in the same industrial sector, and then consider pair trading profits for two out-of-sample periods in 2014 within a six-month time frame as well as for the entire year. The proposed strategies yield average annualized returns of at least 35.5% without a transaction cost and at least 18.4% with a transaction cost.
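The first strategy's threshold logic can be sketched as a simple state machine on a standardized spread. The entry/exit levels and the series below are illustrative, not the fitted thresholds of the smooth transition model:

```python
import numpy as np

def threshold_signals(z, enter=2.0, exit_=0.0):
    """Trading position from a standardized spread z: go short when z rises
    above `enter`, long when it falls below -`enter`, flat once z crosses
    back through `exit_`. Thresholds are illustrative defaults."""
    pos = np.zeros(len(z), dtype=int)
    for t in range(1, len(z)):
        if pos[t - 1] == 0:
            pos[t] = -1 if z[t] > enter else (1 if z[t] < -enter else 0)
        elif pos[t - 1] == -1:
            pos[t] = 0 if z[t] <= exit_ else -1
        else:
            pos[t] = 0 if z[t] >= exit_ else 1
    return pos

z = np.array([0.0, 0.5, 2.3, 1.2, -0.1, -2.6, -1.0, 0.2])
print(threshold_signals(z))  # [ 0  0 -1 -1  0  1  1  0]
```

The quantile-forecast variant would replace the fixed `enter`/`exit_` levels with one-step-ahead forecast quantiles updated by MCMC.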

18.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented straightforwardly using Gibbs sampling and that allow one to deal with model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in terms of both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.

19.
We develop methods for analysing the 'interaction' or dependence between points in a spatial point pattern when the pattern is spatially inhomogeneous. Completely non-parametric study of interactions is possible using an analogue of the K-function. Alternatively, one may assume a semi-parametric model in which a (parametrically specified) homogeneous Markov point process is subjected to (non-parametric) inhomogeneous independent thinning. The effectiveness of these approaches is tested on datasets representing the positions of trees in forests.

20.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
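Under a natural conjugate g-prior, model comparison reduces to a closed-form Bayes factor. One standard form from the broader g-prior literature (stated here purely as an illustration, with illustrative n, k, R², and g; not necessarily the exact parameterization of this paper) trades goodness of fit against a complexity penalty in g:

```python
import numpy as np

def log_bf_gprior(n, k, r2, g):
    """Log Bayes factor of a k-regressor model (plus intercept) against the
    intercept-only null under a Zellner g-prior; a standard closed form,
    used here for illustration."""
    return 0.5 * (n - 1 - k) * np.log(1 + g) - 0.5 * (n - 1) * np.log(1 + g * (1 - r2))

n, g = 100, 100.0
print(log_bf_gprior(n, 2, 0.0, g))  # negative: no fit, complexity penalized
print(log_bf_gprior(n, 2, 0.5, g))  # positive: strong fit overcomes the penalty
```

With R² = 0 the expression collapses to −(k/2)·log(1 + g), making explicit how larger g penalizes larger models.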
