Similar Literature
 19 similar documents found (search time: 15 ms)
1.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
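As a concrete illustration of the reciprocal-root parameterization (a generic sketch, not the authors' code), the following factorizes an AR(3) characteristic polynomial and evaluates the implied AR spectral density from its reciprocal roots; the coefficient values, noise variance and frequency grid are made up for the example.

```python
import numpy as np

# Illustrative AR(3) coefficients phi_1..phi_3
phi = np.array([0.9, -0.5, 0.2])

# Characteristic polynomial 1 - phi_1 z - ... - phi_p z^p; its reciprocal
# roots are the roots of z^p - phi_1 z^(p-1) - ... - phi_p.
recip_roots = np.roots(np.concatenate(([1.0], -phi)))
print("reciprocal roots:", recip_roots)      # |root| < 1 for all roots <=> stationarity
print("moduli:", np.abs(recip_roots))

# AR spectral density at frequencies omega in (0, pi), written as a product
# over the reciprocal roots r_j:  f(omega) = sigma^2/(2 pi) * prod_j |1 - r_j e^{-i omega}|^{-2}
sigma2 = 1.0
omega = np.linspace(0.01, np.pi, 500)
z = np.exp(-1j * omega)
f = sigma2 / (2 * np.pi) * np.prod(np.abs(1 - np.outer(z, recip_roots)) ** -2, axis=1)
print("frequency of spectral peak:", omega[np.argmax(f)])
```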

2.
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. It presents two alternative approaches that can be implemented using Gibbs sampling methods in a straightforward way and which allow one to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. A simulation study shows that the variable selection approaches tend to outperform existing Bayesian model averaging techniques in terms of both in-sample predictive performance and computational efficiency. The alternative approaches are compared in an empirical application using data on economic growth for European NUTS-2 regions.
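The samplers in the paper are specific to spatial autoregressive models; purely to illustrate what a Gibbs-type variable-selection step looks like, here is a George–McCulloch-style stochastic-search update of inclusion indicators in an ordinary (non-spatial) regression. The spike/slab scales `tau`, `c` and the prior inclusion probability `pi_incl` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def update_inclusion_indicators(beta, tau=0.05, c=10.0, pi_incl=0.5):
    """One Gibbs step for SSVS indicators gamma_j given current draws beta_j.

    Spike-and-slab prior: beta_j | gamma_j = 0 ~ N(0, tau^2)        (spike)
                          beta_j | gamma_j = 1 ~ N(0, (c*tau)^2)    (slab)
    """
    slab = norm.pdf(beta, scale=c * tau) * pi_incl
    spike = norm.pdf(beta, scale=tau) * (1.0 - pi_incl)
    prob_include = slab / (slab + spike)
    return rng.random(len(beta)) < prob_include

# Coefficients far from zero are retained with high probability
beta_draw = np.array([0.8, 0.02, -0.4, 0.001])
print(update_inclusion_indicators(beta_draw))
```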

3.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
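A minimal sketch of the core idea under stated assumptions: an AR(p) regression at the median with i.i.d. Laplace errors, sampled by random-walk Metropolis. The priors, step size, iteration count and toy data are placeholders rather than the paper's exact specification (which also averages over the AR order).

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_ar_logpost(theta, y, p, prior_sd=10.0):
    """Log posterior of an AR(p) model with Laplace (double-exponential) errors.

    theta = (intercept, phi_1..phi_p, log_scale); weak normal priors on all elements.
    """
    beta, log_b = theta[:-1], theta[-1]
    b = np.exp(log_b)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    resid = y[p:] - X @ beta
    loglik = -len(resid) * np.log(2 * b) - np.abs(resid).sum() / b
    logprior = -0.5 * np.sum(theta ** 2) / prior_sd ** 2
    return loglik + logprior

def rw_metropolis(logpost, theta0, n_iter=5000, step=0.05, **kw):
    draws, theta, lp = [], theta0.copy(), logpost(theta0, **kw)
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(len(theta))
        lp_prop = logpost(prop, **kw)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)

# Toy data and an AR(2) median model (illustrative only)
y = np.cumsum(rng.standard_normal(300)) * 0.1 + rng.laplace(size=300)
draws = rw_metropolis(laplace_ar_logpost, np.zeros(4), y=y, p=2)
print(draws[-1000:].mean(axis=0))   # posterior means of (intercept, phi_1, phi_2, log_scale)
```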

4.
Forecasting and turning point predictions in a Bayesian panel VAR model
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model, which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for hierarchical and for Minnesota-type priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
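For readers unfamiliar with Minnesota-type priors, the snippet below constructs one common textbook version of the prior standard deviations for VAR coefficients; the hyperparameters `lam`, `theta` and `lag_decay` and the scaling rule are a standard choice and are not taken from the paper, which works with hierarchical panel extensions.

```python
import numpy as np

def minnesota_prior_sd(sigma, n_lags, lam=0.2, theta=0.5, lag_decay=1.0):
    """Prior std. dev. for the coefficient on lag l of variable j in equation i.

    Own lags:   lam / l**lag_decay
    Other lags: lam * theta * sigma_i / (l**lag_decay * sigma_j)
    sigma: residual std. devs. from univariate AR fits, one per variable.
    """
    n = len(sigma)
    sd = np.empty((n, n, n_lags))
    for l in range(1, n_lags + 1):
        for i in range(n):
            for j in range(n):
                base = lam / l ** lag_decay
                sd[i, j, l - 1] = base if i == j else base * theta * sigma[i] / sigma[j]
    return sd

# Example with three series (illustrative residual scales); first-lag block shown
print(minnesota_prior_sd(np.array([1.0, 0.5, 2.0]), n_lags=2)[:, :, 0])
```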

5.
This article considers spatial autoregressive (SAR) models. We propose a Bayesian method to estimate the model parameters and compare it with the maximum likelihood (ML) method. The two approaches are evaluated in Monte Carlo studies, and we find the Bayesian estimators to be as efficient as the ML estimators.

6.
Trends and cycles in economic time series: A Bayesian approach
Trends and cyclical components in economic time series are modeled in a Bayesian framework. This enables prior notions about the duration of cycles to be used, while the generalized class of stochastic cycles employed allows the possibility of relatively smooth cycles being extracted. The posterior distributions of such underlying cycles can be very informative for policy makers, particularly with regard to the size and direction of the output gap and potential turning points. From the technical point of view a contribution is made in investigating the most appropriate prior distributions for the parameters in the cyclical components and in developing Markov chain Monte Carlo methods for both univariate and multivariate models. Applications to US macroeconomic series are presented.
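To fix ideas about the cyclical component, this simulates the standard first-order stochastic cycle used in structural time series models; the damping factor, frequency and innovation scale are arbitrary illustrative values, and the paper's generalized (smoother, higher-order) cycles are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_stochastic_cycle(n, rho=0.95, lam=2 * np.pi / 40, sigma_kappa=0.5):
    """Simulate psi_t where (psi_t, psi*_t)' follows a damped rotation plus noise:

        [psi_t ]         [ cos(lam)  sin(lam)] [psi_{t-1} ]   [kappa_t ]
        [psi*_t] = rho * [-sin(lam)  cos(lam)] [psi*_{t-1}] + [kappa*_t]
    """
    rotation = rho * np.array([[np.cos(lam), np.sin(lam)],
                               [-np.sin(lam), np.cos(lam)]])
    state = np.zeros(2)
    cycle = np.empty(n)
    for t in range(n):
        state = rotation @ state + sigma_kappa * rng.standard_normal(2)
        cycle[t] = state[0]
    return cycle

# A cycle with a period of roughly 40 observations (e.g. 10 years of quarterly data)
psi = simulate_stochastic_cycle(200)
print(psi[:5])
```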

7.
In this paper we study the Candy model, a marked point process introduced by Stoica et al. (2000). We prove Ruelle and local stability, investigate its Markov properties, and discuss how the model may be sampled. Finally, we consider estimation of the model parameters and present a simulation study.

8.
We attempt to clarify a number of points regarding use of spatial regression models for regional growth analysis. We show that as in the case of non-spatial growth regressions, the effect of initial regional income levels wears off over time. Unlike the non-spatial case, long-run regional income levels depend on: own region as well as neighbouring region characteristics, the spatial connectivity structure of the regions, and the strength of spatial dependence. Given this, the search for regional characteristics that exert important influences on income levels or growth rates should take place using spatial econometric methods that account for spatial dependence as well as own and neighbouring region characteristics, the type of spatial regression model specification, and weight matrix. The framework adopted here illustrates a unified approach for dealing with these issues.
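The dependence of long-run outcomes on own- and neighbouring-region characteristics can be made concrete with the spatial multiplier of a basic SAR model, y = ρWy + Xβ + ε, whose reduced form is y = (I − ρW)^{-1}(Xβ + ε). The sketch below computes LeSage-style average direct and indirect effects for a made-up weight matrix; it is a generic illustration, not the paper's empirical setup.

```python
import numpy as np

def sar_effects(W, rho, beta_k):
    """Average direct and indirect effects of covariate k in a SAR model.

    The full effects matrix is S_k = (I - rho*W)^(-1) * beta_k; the average
    diagonal element is the direct effect, and the average row sum of the
    off-diagonal elements is the (spillover) indirect effect.
    """
    n = W.shape[0]
    S = np.linalg.inv(np.eye(n) - rho * W) * beta_k
    direct = np.trace(S) / n
    total = S.sum() / n
    return direct, total - direct

# Row-standardized weights for 4 regions on a line (illustrative)
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
print(sar_effects(W, rho=0.6, beta_k=1.0))
```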

9.
A new version of the local scale model of Shephard (1994) is presented. Its features are identically distributed evolution equation disturbances, the incorporation of in-the-mean effects, and the incorporation of variance regressors. A Bayesian posterior simulator and a new simulation smoother are presented. The model is applied to publicly available daily exchange rate and asset return series, and is compared with t-GARCH and Lognormal stochastic volatility formulations using Bayes factors.

10.
Graphical models provide a powerful and flexible approach to the analysis of complex problems in genetics. While task-specific software may be extremely efficient for any particular analysis, it is often difficult to adapt to new computational challenges. By viewing these genetic applications in a more general framework, many problems can be handled by essentially the same software. This is advantageous in an area where fast methodological development is essential. Once a method has been fully developed and tested, problem-specific software may then be required. The aim of this paper is to illustrate the potential use of a graphical model approach to genetic analyses by taking a very simple and well-understood problem by way of example.

11.
In this paper we show how it is possible to develop a Bayesian framework for analyzing structural models for treatment response data without the joint distribution of the potential outcomes. That this is possible has not been noticed in the literature. We also discuss the computation of the model marginal likelihood and present recipes for finding relevant treatment effects, averaged over both parameters and covariates. As compared to an approach in which the counterfactuals are part of the prior-posterior analysis (as in the work to date), the approach we suggest is simpler in terms of the required prior inputs, computational burden and extensibility to more complex settings.

12.
13.
We describe a method for estimating the marginal likelihood, based on Chib (1995) and Chib and Jeliazkov (2001), when simulation from the posterior distribution of the model parameters is by the accept–reject Metropolis–Hastings (ARMH) algorithm. The method is developed for one-block and multiple-block ARMH algorithms and does not require the (typically) unknown normalizing constant of the proposal density. The problem of calculating the numerical standard error of the estimates is also considered and a procedure based on batch means is developed. Two examples, dealing with a multinomial logit model and a Gaussian regression model with non-conjugate priors, are provided to illustrate the efficiency and applicability of the method.
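For orientation (and without restating the ARMH-specific estimator developed in the paper), the method builds on Chib's (1995) basic marginal likelihood identity, evaluated at a high-density point θ*,

    \log m(y) = \log f(y \mid \theta^{*}) + \log \pi(\theta^{*}) - \log \pi(\theta^{*} \mid y),

together with the Chib–Jeliazkov (2001) estimate of the posterior ordinate for a single-block Metropolis–Hastings sampler with proposal density q and acceptance probability α,

    \hat{\pi}(\theta^{*} \mid y) \approx \frac{M^{-1} \sum_{g=1}^{M} \alpha(\theta^{(g)}, \theta^{*} \mid y)\, q(\theta^{(g)}, \theta^{*} \mid y)}{J^{-1} \sum_{j=1}^{J} \alpha(\theta^{*}, \theta^{(j)} \mid y)},

where the θ^{(g)} are posterior draws and the θ^{(j)} are draws from q(θ*, · | y); the ARMH case in the paper adapts this ordinate estimate to the accept–reject proposal.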

14.
We develop methods for analysing the 'interaction' or dependence between points in a spatial point pattern, when the pattern is spatially inhomogeneous. Completely non-parametric study of interactions is possible using an analogue of the K-function. Alternatively one may assume a semi-parametric model in which a (parametrically specified) homogeneous Markov point process is subjected to (non-parametric) inhomogeneous independent thinning. The effectiveness of these approaches is tested on datasets representing the positions of trees in forests.

15.
Effective linkage detection and gene mapping requires analysis of data jointly on members of extended pedigrees, jointly at multiple genetic markers. Exact likelihood computation is then often infeasible, but Markov chain Monte Carlo (MCMC) methods permit estimation of posterior probabilities of genome sharing among relatives, conditional upon marker data. In principle, MCMC also permits estimation of linkage analysis location score curves, but in practice effective MCMC samplers are hard to find. Although the whole-meiosis Gibbs sampler (M-sampler) performs well in some cases, for extended pedigrees and tightly linked markers better samplers are needed. However, using the M-sampler as a proposal distribution in a Metropolis-Hastings algorithm does allow genetic interference to be incorporated into the analysis.

16.
We describe a flexible geo-additive Bayesian survival model that controls, simultaneously, for spatial dependence and possible nonlinear or time-varying effects of other variables. Inference is fully Bayesian and is based on recently developed Markov chain Monte Carlo techniques. In illustrating the model we introduce a spatial dimension in modelling under-five mortality among Malawian children, using data from the Malawi Demographic and Health Survey of 2000. The results show that district-level socioeconomic characteristics are important determinants of childhood mortality. More importantly, a separate spatial process produces district clustering of childhood mortality, indicating the importance of spatial effects. The visual nature of the maps presented in this paper highlights relationships that would otherwise be overlooked in standard methods.

17.
Pair trading is a statistical arbitrage strategy used on similar assets with dissimilar valuations. We utilize smooth transition heteroskedastic models with a second-order logistic function to generate trading entry and exit signals and suggest two pair trading strategies: the first uses the upper and lower threshold values in the proposed model as trading entry and exit signals, while the second strategy instead takes one-step-ahead quantile forecasts obtained from the same model. We employ Bayesian Markov chain Monte Carlo sampling methods for updating the estimates and quantile forecasts. As an illustration, we conduct a simulation study and empirical analysis of the daily stock returns of 36 stocks from U.S. stock markets. We use the minimum square distance method to select ten stock pairs, choose an additional five pairs consisting of two companies in the same industrial sector, and then consider pair trading profits for two out-of-sample periods in 2014: a six-month window and the entire year. The proposed strategies yield average annualized returns of at least 35.5% without a transaction cost and at least 18.4% with a transaction cost.
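The second-order logistic transition function referred to above is commonly written G(z; γ, c1, c2) = [1 + exp{−γ(z − c1)(z − c2)}]^{-1}, which is near zero between the thresholds c1 < c2 and rises toward one outside them. The snippet below evaluates this function and turns a standardized spread into naive open/close signals from fixed thresholds; the threshold and smoothness values, and the simple zero-crossing exit rule, are illustrative stand-ins rather than the paper's estimated model.

```python
import numpy as np

def logistic2(z, gamma=5.0, c1=-1.0, c2=1.0):
    """Second-order logistic transition function: ~0 between c1 and c2, ~1 outside."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c1) * (z - c2)))

def pair_signals(spread, lower=-1.0, upper=1.0):
    """Naive threshold rules on a standardized spread: open a position when the
    spread leaves the [lower, upper] band, close it when the spread reverts through zero."""
    position, positions = 0, []
    for z in spread:
        if position == 0:
            if z > upper:
                position = -1      # spread too wide: short the first asset, long the second
            elif z < lower:
                position = 1       # spread too narrow: long the first asset, short the second
        elif (position == 1 and z >= 0) or (position == -1 and z <= 0):
            position = 0           # spread has reverted: close the pair
        positions.append(position)
    return np.array(positions)

rng = np.random.default_rng(3)
spread = np.cumsum(rng.standard_normal(250)) * 0.2   # toy spread series
print(logistic2(spread)[:5], pair_signals(spread)[:20])
```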

18.
We propose and examine a panel data model for isolating the effect of a treatment, taken once at baseline, from outcomes observed over subsequent time periods. In the model, the treatment intake and outcomes are assumed to be correlated, due to unobserved or unmeasured confounders. Intake is partly determined by a set of instrumental variables, and the confounding on unobservables is modeled in a flexible way, varying both by time and treatment state. Covariate effects are assumed to be subject-specific and potentially correlated with other covariates. Estimation and inference are Bayesian, implemented by tuned Markov chain Monte Carlo methods. Because our analysis is based on the framework developed by Chib [2004. Analysis of treatment response data without the joint distribution of counterfactuals. Journal of Econometrics, in press], the modeling and estimation do not involve either the unknowable joint distribution of the potential outcomes or the missing counterfactuals. The problem of model choice through marginal likelihoods and Bayes factors is also considered. The methods are illustrated in simulation experiments and in an application dealing with the effect of participation in high school athletics on future labor market earnings.

19.
This paper is concerned with the Bayesian estimation and comparison of flexible, high dimensional multivariate time series models with time-varying correlations. The model proposed and considered here combines features of the classical factor model with those of the heavy-tailed univariate stochastic volatility model. A unified analysis of the model, and its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm (which relies on MCMC methods) are: (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in terms of series and factors and simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data where models up to 50 dimensions and 688 parameters are fit and studied. The performance of our model, in relation to various multivariate GARCH models, is also evaluated using a real data set of weekly returns on a set of 10 international stock indices. We consider the performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns and the unconditional and conditional coverage of the 5% and 1% value-at-risk (VaR) measures of four pre-defined portfolios.
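As a reminder of how unconditional VaR coverage is usually backtested, the snippet below computes the standard Kupiec likelihood-ratio statistic from a series of VaR violations; it is a generic check of the kind referred to in the abstract, not code from the paper.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_uc_test(violations, p):
    """Kupiec unconditional coverage LR test.

    violations: boolean array, True when the realized loss exceeded the VaR.
    p: nominal violation probability (e.g. 0.05 or 0.01).
    """
    T, x = len(violations), int(np.sum(violations))
    pi_hat = x / T
    eps = 1e-12  # guard against log(0) when there are no (or only) violations
    loglik_null = (T - x) * np.log(1 - p) + x * np.log(p)
    loglik_alt = (T - x) * np.log(max(1 - pi_hat, eps)) + x * np.log(max(pi_hat, eps))
    lr = -2.0 * (loglik_null - loglik_alt)
    return lr, chi2.sf(lr, df=1)   # statistic and asymptotic p-value (chi-square, 1 df)

# Example: 9 violations in 250 days against a 5% VaR
viols = np.zeros(250, dtype=bool)
viols[:9] = True
print(kupiec_uc_test(viols, p=0.05))
```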
