Related Articles
10 related articles found.
1.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution, while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented, together with theoretical and computational issues for simulating from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.
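The parametric baseline that the paper generalizes is the standard stochastic volatility model, with an AR(1) log-volatility and Gaussian return innovations. A minimal simulation sketch (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate the basic SV model:
       h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t,  eta_t ~ N(0,1)
       r_t = exp(h_t / 2) * eps_t,                         eps_t ~ N(0,1)
    """
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    # draw h_0 from the stationary distribution of the AR(1) process
    h[0] = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2))
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    r = np.exp(h / 2) * rng.normal(size=T)   # returns with time-varying variance
    return r, h

r, h = simulate_sv(1000)
```

The semiparametric extension described above would replace the Gaussian draw for `eps_t` with a nonparametric (e.g. mixture) innovation distribution.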

2.
This paper is concerned with the Bayesian estimation and comparison of flexible, high-dimensional multivariate time series models with time-varying correlations. The model proposed and considered here combines features of the classical factor model with those of the heavy-tailed univariate stochastic volatility model. A unified analysis of the model, and its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm (which relies on MCMC methods) are: (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in the number of series and factors and is simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data, where models up to 50 dimensions and 688 parameters are fit and studied. The performance of our model, in relation to various multivariate GARCH models, is also evaluated using a real data set of weekly returns on a set of 10 international stock indices. We consider the performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns, and the unconditional and conditional coverage of the 5% and 1% value-at-risk (VaR) measures of four pre-defined portfolios.
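In a factor SV model of this kind, the conditional covariance matrix of the series is built from the loading matrix, the factor volatilities, and the idiosyncratic variances. A sketch of that construction, with all values invented for illustration:

```python
import numpy as np

# Hypothetical ingredients of a factor SV covariance at one time point:
#   Sigma_t = B diag(exp(h_t)) B' + diag(sigma2)
rng = np.random.default_rng(1)
p, k = 5, 2                              # 5 series, 2 factors (illustrative)
B = rng.normal(scale=0.5, size=(p, k))   # factor loading matrix
h_t = np.array([-0.5, 0.2])              # factor log-volatilities at time t
sigma2 = np.full(p, 0.1)                 # idiosyncratic variances

# implied conditional covariance of the p series at time t
Sigma_t = B @ np.diag(np.exp(h_t)) @ B.T + np.diag(sigma2)
```

Because the factor variances are positive and the idiosyncratic variances are strictly positive, `Sigma_t` is symmetric positive definite by construction, which is one attraction of the factor parameterization in high dimensions.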

3.
In this paper, we introduce a threshold stochastic volatility model with explanatory variables. A Bayesian approach is used to estimate the parameters of the proposed model via the Markov chain Monte Carlo (MCMC) algorithm; Gibbs sampling and Metropolis–Hastings sampling are used to draw the posterior samples of the parameters and the latent variables. In the simulation study, the accuracy of the MCMC algorithm, the sensitivity of the algorithm to model assumptions, and the robustness of the posterior distribution under different priors are considered. Simulation results indicate that our MCMC algorithm converges fast and that the posterior distribution is robust under different priors and model assumptions. A real data example is analyzed to explain the asymmetric behavior of stock markets.
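The Metropolis–Hastings step for the latent log-volatilities can be sketched for a basic (non-threshold) SV model: each h_t is updated from its full conditional, which combines the AR(1) smoothing prior from its two neighbours with the N(0, exp(h_t)) likelihood of the return. This is a generic single-site sampler, not the paper's algorithm; the threshold model adds regime-dependent parameters, and the boundary updates below are simplified:

```python
import numpy as np

def mh_update_h(h, t, r, mu, phi, sig, rng, step=0.3):
    """Random-walk MH update of one latent log-volatility h[t]."""
    if 0 < t < len(h) - 1:
        # conditional of h_t given both AR(1) neighbours (complete the square)
        m = mu + phi * ((h[t - 1] - mu) + (h[t + 1] - mu)) / (1 + phi**2)
        v = sig**2 / (1 + phi**2)
    else:
        # simplified boundary update: condition on the single neighbour
        nb = h[1] if t == 0 else h[t - 1]
        m, v = mu + phi * (nb - mu), sig**2

    def logpost(x):
        return (-0.5 * (x - m)**2 / v                      # AR(1) prior part
                - 0.5 * x - 0.5 * r[t]**2 * np.exp(-x))    # N(0, e^x) likelihood

    prop = h[t] + step * rng.normal()                      # random-walk proposal
    if np.log(rng.uniform()) < logpost(prop) - logpost(h[t]):
        h[t] = prop                                        # accept
    return h
```

Sweeping this update over t = 0, …, T−1 inside an outer Gibbs loop (which also draws mu, phi, sig) gives one cycle of the kind of MCMC scheme the abstract describes.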

4.
This paper analyzes the productivity of farms across 370 municipalities in the Center-West region of Brazil. A stochastic frontier model with a latent spatial structure is proposed to account for possible unknown geographical variation of the outputs. The paper compares versions of the model that include the latent spatial effect in the mean of output or as a variable that conditions the distribution of inefficiency, include or not observed municipal variables, and specify independent normal or conditional autoregressive priors for the spatial effects. The Bayesian paradigm is used to estimate the proposed models. As the resultant posterior distributions do not have a closed form, stochastic simulation techniques are used to obtain samples from them. Two model comparison criteria provide support for including the latent spatial effects, even after considering covariates at the municipal level. Models that ignore the latent spatial effects produce significantly different rankings of inefficiencies across agents.
Alexandra M. Schmidt. URL: www.dme.ufrj.br/~alex

5.
This paper proposes two types of stochastic correlation structures for Multivariate Stochastic Volatility (MSV) models, namely the constant correlation (CC) MSV and dynamic correlation (DC) MSV models, from which the stochastic covariance structures can easily be obtained. Both structures can be used for purposes of determining optimal portfolio and risk management strategies through the use of correlation matrices, and for calculating Value-at-Risk (VaR) forecasts and optimal capital charges under the Basel Accord through the use of covariance matrices. A technique is developed to estimate the DC MSV model using the Markov Chain Monte Carlo (MCMC) procedure, and simulated data show that the estimation method works well. Various multivariate conditional volatility and MSV models are compared via simulation, including an evaluation of alternative VaR estimators. The DC MSV model is also estimated using three sets of empirical data, namely Nikkei 225 Index, Hang Seng Index and Straits Times Index returns, and significant dynamic correlations are found. The Dynamic Conditional Correlation (DCC) model is also estimated, and is found to be far less sensitive to the covariation in the shocks to the indexes. The correlation process for the DCC model also appears to have a unit root, and hence constant conditional correlations in the long run. In contrast, the estimates arising from the DC MSV model indicate that the dynamic correlation process is stationary.
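The link from a covariance matrix to a VaR forecast is mechanical under a Gaussian assumption: portfolio variance is the quadratic form w'Σw, and VaR at level α is the corresponding normal quantile times the portfolio standard deviation. A sketch with entirely made-up numbers (not the indices or estimates from the paper):

```python
import numpy as np

def gaussian_var(weights, cov, z):
    """One-step Gaussian VaR (as a positive loss quantile): z * portfolio sd."""
    port_variance = weights @ cov @ weights
    return z * np.sqrt(port_variance)

# hypothetical conditional covariance matrix of three return series
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])          # illustrative portfolio weights

var_5 = gaussian_var(w, cov, 1.6449)   # 5% VaR (standard normal 95% quantile)
var_1 = gaussian_var(w, cov, 2.3263)   # 1% VaR (standard normal 99% quantile)
```

In the comparison the abstract describes, the covariance `cov` would be the model's forecast conditional covariance for the next period, and unconditional/conditional coverage tests would check how often realized losses exceed `var_5` and `var_1`.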

6.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.
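The latent root factorization itself is easy to illustrate: factor the AR characteristic polynomial 1 − φ₁z − … − φ_p z^p, take reciprocals of its roots, and read off stationarity (all reciprocal roots inside the unit circle) and any pseudo-cyclical period (from the argument of a complex pair). A sketch with illustrative AR(2) coefficients:

```python
import numpy as np

phi = np.array([1.5, -0.7])                 # illustrative AR(2) coefficients
char_poly = np.concatenate(([1.0], -phi))   # 1 - 1.5 z + 0.7 z^2 (ascending powers)
roots = np.roots(char_poly[::-1])           # np.roots wants highest degree first
recip = 1.0 / roots                          # reciprocal roots of the AR polynomial

stationary = bool(np.all(np.abs(recip) < 1))      # all inside the unit circle
period = 2 * np.pi / abs(np.angle(recip[0]))      # cycle length of the complex pair
```

For this coefficient pair the reciprocal roots are a complex conjugate pair of modulus √0.7 ≈ 0.84, so the process is stationary with a damped cycle; a modulus of exactly 1 would flag a sustained periodic (unit-root) component, as the abstract notes.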

7.
Graphical models provide a powerful and flexible approach to the analysis of complex problems in genetics. While task-specific software may be extremely efficient for any particular analysis, it is often difficult to adapt to new computational challenges. By viewing these genetic applications in a more general framework, many problems can be handled by essentially the same software. This is advantageous in an area where fast methodological development is essential. Once a method has been fully developed and tested, problem-specific software may then be required. The aim of this paper is to illustrate the potential use of a graphical model approach to genetic analyses by taking a very simple and well-understood problem by way of example.

8.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures that remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
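The local Monte Carlo (parametric bootstrap) test idea is generic: simulate the test statistic under the null at a fixed parameter value and rank the observed statistic among the simulated ones. A toy sketch with a placeholder statistic and null distribution (not the SV statistics from the paper):

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_null_stat, N=99, seed=0):
    """Monte Carlo p-value: rank of the observed statistic among N
    statistics simulated under the null (the +1's give an exact level
    when (N+1)*alpha is an integer)."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_null_stat(rng) for _ in range(N)])
    return (1 + np.sum(sims >= stat_obs)) / (N + 1)

# toy example: testing mean 0 with the |sample mean| statistic, N(0,1) null
data = np.array([0.3, -0.1, 0.5, 0.2, 0.4])
obs = abs(data.mean())
p = mc_pvalue(obs, lambda rng: abs(rng.normal(size=5).mean()), N=199)
```

The MMC procedure described above would instead maximize this p-value over the nuisance-parameter values consistent with the null, which is what keeps the level controlled when identification fails.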

9.
In state-space models, parameter learning is practically difficult and remains an open issue. This paper proposes an efficient simulation-based parameter learning method. First, the approach breaks up the interdependence of the hidden states and the static parameters by marginalizing out the states using a particle filter. Second, it applies a Bayesian resample-move approach to this marginalized system. The methodology is generic and needs little design effort. Unlike batch estimation methods, it provides the posterior quantities necessary for full sequential inference and recursive model monitoring. The algorithm is implemented both on simulated data in a linear Gaussian model, for illustration and comparison, and on real data in a Lévy jump stochastic volatility model and a structural credit risk model.
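The state-marginalization step relies on the particle filter's unbiased estimate of the marginal likelihood. A bootstrap particle filter for the linear Gaussian illustration model (x_t = φx_{t−1} + w_t, y_t = x_t + v_t) can be sketched as follows; the resample-move parameter-learning layer on top of it is omitted:

```python
import numpy as np

def particle_filter_loglik(y, phi, sw, sv, N=500, seed=0):
    """Bootstrap particle filter estimate of log p(y | phi, sw, sv)."""
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of the state
    x = rng.normal(0.0, sw / np.sqrt(1 - phi**2), N)
    loglik = 0.0
    for obs in y:
        x = phi * x + sw * rng.normal(size=N)            # propagate particles
        # Gaussian observation log-weights (log-sum-exp for stability)
        logw = -0.5 * ((obs - x) / sv)**2 - np.log(sv) - 0.5 * np.log(2 * np.pi)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                   # incremental likelihood
        x = rng.choice(x, size=N, p=w / w.sum())         # multinomial resampling
    return loglik

y = np.array([0.1, -0.2, 0.3, 0.0, 0.5])                 # toy observations
ll = particle_filter_loglik(y, phi=0.9, sw=0.5, sv=0.5)
```

Marginalizing the states this way is what lets the resample-move scheme work directly on the static parameters, as the abstract describes.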

10.
In many manufacturing and service industries, the quality department of the organization works continuously to ensure that the mean or location of the process is close to the target value. Understanding the process requires numerical statements about the processes under investigation, which is why the researcher needs to test hypotheses about the underlying physical phenomena. It is usually assumed that the collected data are well behaved; sometimes, however, the data contain outliers, and the presence of one or more outliers can seriously distort statistical inference. Since the sample mean is very sensitive to outliers, this research uses the smooth adaptive (SA) estimator to estimate the population mean. The SA estimator is used to construct testing procedures, called smooth adaptive (SA) tests, for testing various null hypotheses. A Monte Carlo study is used to simulate the probability of a Type I error and the power of the SA test; this is accomplished by constructing confidence intervals for the process mean using the SA estimator and bootstrap methods. The SA test is compared with other tests such as the normal test, the t test, and a nonparametric method, the Wilcoxon signed-rank test; cases with and without outliers are both considered. For right-skewed distributions, the SA test is the best choice. When the population is right-skewed with one outlier, the SA test controls the probability of a Type I error better than the other tests and is recommended.
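The bootstrap confidence-interval device used above is standard and easy to sketch. Here the SA estimator is replaced by the plain sample mean for brevity, and the data are an invented right-skewed sample with one outlier, echoing the scenario the abstract studies:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of the mean type."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])            # resample with replacement
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# hypothetical right-skewed sample with one outlier (6.0)
data = np.array([1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.4, 1.1, 0.9, 6.0])
lo, hi = bootstrap_ci(data)
```

A test of H0: mean = target then rejects when the target falls outside (lo, hi); repeating this over many simulated samples is how the Type I error probability and power figures mentioned above are obtained.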
