Similar Documents
20 similar documents found.
1.
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear, non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis–Hastings algorithm or in importance sampling. Our method provides a computationally more efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic intensity and stochastic volatility models based on Ornstein–Uhlenbeck processes. For our empirical study, we analyse the performance of our methods for corporate default panel data and stock index returns. Copyright © 2016 John Wiley & Sons, Ltd.
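For readers who want the mechanics, here is a minimal, self-contained sketch (in Python, with hypothetical names) of the independent Metropolis–Hastings step the abstract refers to; a generic Student-t stands in for the paper's tailored proposal density:

```python
import numpy as np
from scipy import stats

def independent_mh(log_target, proposal, n_iter, rng):
    """Independence Metropolis-Hastings: proposals do not depend on the
    current state, so the acceptance ratio involves the proposal density
    at both the current and the candidate point."""
    x = proposal.rvs(random_state=rng)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        cand = proposal.rvs(random_state=rng)
        # log acceptance ratio: [p(cand)/q(cand)] / [p(x)/q(x)]
        log_alpha = (log_target(cand) - proposal.logpdf(cand)
                     - log_target(x) + proposal.logpdf(x))
        if np.log(rng.uniform()) < log_alpha:
            x = cand
        draws[i] = x
    return draws

rng = np.random.default_rng(0)
log_target = stats.norm(loc=1.0, scale=0.5).logpdf   # toy target density
proposal = stats.t(df=5, loc=1.0, scale=1.0)         # heavy-tailed candidate
sample = independent_mh(log_target, proposal, 5000, rng)
print(sample.mean(), sample.std())
```

The efficiency of the chain hinges entirely on how closely the proposal tracks the target, which is the design problem the paper addresses.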

2.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be ‘close’ to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.
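A hedged illustration of why the candidate density matters: a generic self-normalized importance sampling estimator with a Student-t candidate standing in for the paper's neural-network density (log_target and all names are illustrative, not the paper's code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def log_target(theta):
    # stand-in for a (possibly non-elliptical) posterior kernel
    return -0.5 * theta**2 - 0.1 * theta**4

candidate = stats.t(df=4)        # heavy tails guard against exploding weights
draws = candidate.rvs(size=20000, random_state=rng)
log_w = log_target(draws) - candidate.logpdf(draws)
w = np.exp(log_w - log_w.max())  # stabilize before normalizing
w /= w.sum()

post_mean = np.sum(w * draws)    # self-normalized IS estimate of E[theta | data]
ess = 1.0 / np.sum(w**2)         # effective sample size: quality of the candidate
print(post_mean, ess)
```

The closer the candidate is to the target, the more uniform the weights and the larger the effective sample size.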

3.
We propose the construction of copulas through the inversion of nonlinear state space models. These copulas allow for new time series models that have the same serial dependence structure as a state space model, but with an arbitrary marginal distribution, and flexible density forecasts. We examine the time series properties of the copulas, outline serial dependence measures, and estimate the models using likelihood-based methods. Copulas constructed from three example state space models are considered: a stochastic volatility model with an unobserved component, a Markov switching autoregression, and a Gaussian linear unobserved component model. We show that all three inversion copulas with flexible margins improve the fit and density forecasts of quarterly U.S. broad inflation and electricity inflation.

4.
In this paper, a method is introduced for approximating the likelihood for the unknown parameters of a state space model. The approximation converges to the true likelihood as the simulation size goes to infinity. In addition, the approximating likelihood is continuous as a function of the unknown parameters under rather general conditions. The approach advocated is fast and robust, and it avoids many of the pitfalls associated with current techniques based upon importance sampling. We assess the performance of the method by considering a linear state space model, comparing the results with the Kalman filter, which delivers the true likelihood. We also apply the method to a non-Gaussian state space model, the stochastic volatility model, finding that the approach is efficient and effective. Applications to continuous time finance models and latent panel data models are considered. Two different multivariate approaches are proposed. The neoclassical growth model is considered as an application.
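A rough sketch of the linear-model check described above, under simplifying assumptions: a bootstrap particle filter (a simple stand-in, not the paper's continuous simulated-likelihood estimator) against the exact Kalman-filter log-likelihood for a local-level model:

```python
import numpy as np

rng = np.random.default_rng(2)
T, sig_u, sig_e = 200, 0.5, 1.0
x = np.cumsum(sig_u * rng.standard_normal(T))          # local-level state
y = x + sig_e * rng.standard_normal(T)

def kalman_loglik(y, sig_u, sig_e):
    a, P, ll = 0.0, sig_u**2, 0.0                      # predictive of x_1
    for yt in y:
        F = P + sig_e**2                               # prediction-error variance
        v = yt - a
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        K = P / F
        a, P = a + K * v, P * (1 - K) + sig_u**2       # update, then predict
    return ll

def pf_loglik(y, sig_u, sig_e, N=5000):
    xp, ll = np.zeros(N), 0.0
    for yt in y:
        xp = xp + sig_u * rng.standard_normal(N)       # propagate particles
        logw = -0.5 * (np.log(2*np.pi*sig_e**2) + (yt - xp)**2 / sig_e**2)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                     # likelihood increment
        xp = rng.choice(xp, size=N, p=w / w.sum())     # multinomial resampling
    return ll

print(kalman_loglik(y, sig_u, sig_e), pf_loglik(y, sig_u, sig_e))
```

Note that this plain bootstrap filter is discontinuous in the parameters because of resampling; removing that discontinuity is precisely one of the pitfalls the paper's method is designed to avoid.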

5.
We propose a new dynamic copula model in which the parameter characterizing dependence follows an autoregressive process. As this model class includes the Gaussian copula with stochastic correlation process, it can be viewed as a generalization of multivariate stochastic volatility models. Despite the complexity of the model, the decoupling of marginals and dependence parameters facilitates estimation. We propose estimation in two steps, where first the parameters of the marginal distributions are estimated, and then those of the copula. Parameters of the latent processes (volatilities and dependence) are estimated using efficient importance sampling. We discuss goodness-of-fit tests and ways to forecast the dependence parameter. For two bivariate stock index series, we show that the proposed model outperforms standard competing models. Copyright © 2010 John Wiley & Sons, Ltd.
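The two-step logic in miniature, assuming a static Gaussian copula rather than the paper's dynamic model: fit the marginals first, transform to uniforms, then estimate the dependence parameter (all names and the toy data are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# simulate two correlated series with Student-t marginals (toy data)
z = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=1000)
u_true = stats.norm.cdf(z)
x = stats.t.ppf(u_true[:, 0], df=5)
y = stats.t.ppf(u_true[:, 1], df=8)

# step 1: fit each marginal separately (here: Student-t)
df_x, loc_x, sc_x = stats.t.fit(x)
df_y, loc_y, sc_y = stats.t.fit(y)
u = stats.t.cdf(x, df_x, loc_x, sc_x)     # probability integral transforms
v = stats.t.cdf(y, df_y, loc_y, sc_y)

# step 2: map to normal scores; their correlation is (essentially) the
# ML estimate of the Gaussian-copula parameter given the fitted margins
a, b = stats.norm.ppf(u), stats.norm.ppf(v)
rho = np.corrcoef(a, b)[0, 1]
print(rho)                                # close to the true 0.6
```

In the paper, the static parameter rho is replaced by a latent autoregressive process, which is where efficient importance sampling enters.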

6.
This paper develops a maximum likelihood (ML) method to estimate partially observed diffusion models based on data sampled at discrete times. The method combines two techniques recently proposed in the literature in two separate steps. In the first step, the closed form approach of Aït-Sahalia (2008) is used to obtain a highly accurate approximation to the joint transition probability density of the latent and the observed states. In the second step, the efficient importance sampling technique of Richard and Zhang (2007) is used to integrate out the latent states, thereby yielding the likelihood function. Using both simulated and real data, we show that the proposed ML method works better than alternative methods. The new method does not require the underlying diffusion to have an affine structure and does not involve infill simulations. Therefore, the method has a wide range of applicability and its computational cost is moderate.

7.
In generalized autoregressive conditional heteroskedastic (GARCH) models, the standard identifiability assumption that the variance of the iid process is equal to 1 can be replaced by an alternative moment assumption. We show that, for estimating the original specification based on the standard identifiability assumption, efficiency gains can be expected from using a quasi-maximum likelihood (QML) estimator based on a non-Gaussian density and a reparameterization based on an alternative identifiability assumption. A test is proposed for determining whether a reparameterization is needed, that is, whether the more efficient QMLE is obtained with a non-Gaussian density.
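For reference, a minimal Gaussian QML criterion for a GARCH(1,1) under the standard identification Var(eta_t) = 1, the baseline against which the non-Gaussian QMLE and reparameterization are compared (a sketch with hypothetical names, not the paper's estimator):

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qml_nll(params, eps):
    """Gaussian quasi-negative-log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1},
    under the usual identification Var(eta_t) = 1."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    sigma2 = np.empty(len(eps))
    sigma2[0] = eps.var()                      # common initialization
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
    return 0.5 * np.sum(np.log(sigma2) + eps**2 / sigma2)

rng = np.random.default_rng(4)
# simulate a GARCH(1,1) path for illustration
T, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
eps, s2 = np.empty(T), omega / (1 - alpha - beta)
for t in range(T):
    eps[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * eps[t]**2 + beta * s2
fit = minimize(garch11_qml_nll, x0=[0.05, 0.05, 0.9], args=(eps,),
               method="Nelder-Mead")
print(fit.x)
```

The paper's point is that replacing the Gaussian density above with a non-Gaussian one, combined with a suitable reparameterization, can deliver a more efficient estimator of the same parameters.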

8.
In this paper we describe methods for predicting distributions of outcome gains in the framework of a latent variable selection model. We describe such procedures for Student-t selection models and a finite mixture of Gaussian selection models. Importantly, our algorithms for fitting these models are simple to implement in practice, and also permit learning to take place about the non-identified cross-regime correlation parameter. Using data from High School and Beyond, we apply our methods to determine the impact of dropping out of high school on a math test score taken in the senior year of high school. Our results show that selection bias is an important feature of this data, that our beliefs about this non-identified correlation are updated from the data, and that generalized models of selectivity offer an improvement over the ‘textbook’ Gaussian model. Further, our results indicate that, on average, dropping out of high school has a large negative impact on senior-year test scores. However, for those individuals who actually drop out of high school, the act of dropping out does not have a significantly negative impact on test scores. This suggests that policies aimed at keeping students in school may not be as beneficial as first thought, since those individuals who must be induced to stay in school are not the ones who benefit significantly (in terms of test scores) from staying in school. Copyright © 2004 John Wiley & Sons, Ltd.

9.
The predictive likelihood is useful for ranking models in forecast comparison exercises using Bayesian inference. We discuss how it can be estimated, by means of marginalization, for any subset of the observables in linear Gaussian state-space models. We compare macroeconomic density forecasts for the euro area of a DSGE model to those of a DSGE-VAR, a BVAR and a multivariate random walk over 1999:Q1–2011:Q4. While the BVAR generally provides superior forecasts, its performance deteriorates substantially with the onset of the Great Recession. This is particularly notable for longer-horizon real GDP forecasts, where the DSGE and DSGE-VAR models perform better. Copyright © 2016 John Wiley & Sons, Ltd.
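One simple variant of the marginalization idea, assuming a linear Gaussian state space model: the predictive log density of a chosen subset of observables is obtained by keeping the corresponding sub-block of each Gaussian one-step-ahead predictive distribution (illustrative code, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

def log_pred_lik_subset(y, Z, Tm, H, Q, idx):
    """Sum of one-step-ahead predictive log densities for the observable
    subset `idx` in y_t = Z a_t + e_t, a_{t+1} = Tm a_t + u_t.
    Marginalizing a joint Gaussian predictive density just means keeping
    the corresponding sub-vector/sub-block of its mean and covariance."""
    m = Tm.shape[0]
    a, P, ll = np.zeros(m), 10.0 * np.eye(m), 0.0
    for yt in y:
        mu = Z @ a
        S = Z @ P @ Z.T + H                          # joint predictive covariance
        mu_s, S_s = mu[idx], S[np.ix_(idx, idx)]     # marginalize to the subset
        v = yt[idx] - mu_s
        ll += -0.5 * (len(idx) * np.log(2 * np.pi)
                      + np.linalg.slogdet(S_s)[1]
                      + v @ np.linalg.solve(S_s, v))
        K = P @ Z.T @ np.linalg.inv(S)               # update on the full vector
        a, P = a + K @ (yt - mu), P - K @ Z @ P
        a, P = Tm @ a, Tm @ P @ Tm.T + Q             # predict
    return ll

# toy bivariate local-level model sharing one random-walk state
Z = np.array([[1.0], [1.0]]); Tm = np.array([[1.0]])
H = 0.5 * np.eye(2); Q = np.array([[0.1]])
state, ys = np.zeros(1), []
for _ in range(200):
    state = Tm @ state + rng.multivariate_normal(np.zeros(1), Q)
    ys.append(Z @ state + rng.multivariate_normal(np.zeros(2), H))
print(log_pred_lik_subset(np.array(ys), Z, Tm, H, Q, idx=[0]))
```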

10.
Despite certain advances in non-randomized response (NRR) techniques in the past six years, the existing non-randomized crosswise and triangular models have several limitations in practice. In this paper, I propose a new NRR model, called the parallel model, which has a wider application range. Asymptotic properties of the maximum likelihood estimator (and its modified version) for the proportion of interest are explored. Theoretical comparisons with the crosswise and triangular models show that the parallel model is more efficient than the two existing NRR models over most of the possible parameter range. Bayesian methods for analyzing survey data from the parallel model are developed. A case study on college students' premarital sexual behavior in Wuhan and a case study on plagiarism at the University of Hong Kong are conducted and used to illustrate the proposed methods. © 2014 The Authors. Statistica Neerlandica © 2014 VVS.
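NRR designs of this kind tie the observable 'yes' probability to the sensitive proportion pi through a known linear map Pr(yes) = a + b*pi, with design-specific constants a and b; a hedged sketch of the resulting closed-form MLE and a delta-method standard error (illustrative, not the paper's code; the constants below are made up):

```python
import numpy as np

def nrr_mle(n_yes, n, a, b):
    """MLE of a sensitive proportion pi when Pr(yes) = a + b*pi with
    known design constants a, b; truncated to [0, 1], with a
    delta-method standard error."""
    lam_hat = n_yes / n
    pi_hat = np.clip((lam_hat - a) / b, 0.0, 1.0)
    se = np.sqrt(lam_hat * (1 - lam_hat) / n) / abs(b)
    return pi_hat, se

# toy numbers: 320 'yes' answers out of 1000, a design with a=0.25, b=0.5
print(nrr_mle(320, 1000, a=0.25, b=0.5))
```

The efficiency comparisons in the paper come down to how the design constants a and b propagate sampling noise in lam_hat into pi_hat.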

11.
This paper introduces the notion of common non-causal features and proposes tools to detect them in multivariate time series models. We argue that the existence of co-movements might not be detected using the conventional stationary vector autoregressive (VAR) model, as the common dynamics are present in the non-causal (i.e. forward-looking) component of the series. We show that the presence of a reduced rank structure makes it possible to identify purely causal and non-causal VAR processes of order P>1 even in the Gaussian likelihood framework. Hence, usual test statistics and canonical correlation analysis can be applied, where either lags or leads are used as instruments to determine whether the common features are present in either the backward- or forward-looking dynamics of the series. The proposed definitions of co-movements are also valid for the mixed causal/non-causal VAR, with the exception that a non-Gaussian maximum likelihood estimator is necessary. This means, however, that one loses the benefits of the simple tools proposed. An empirical analysis of Brent and West Texas Intermediate oil prices illustrates the findings. No short-run co-movements are found in a conventional causal VAR, but they are detected when considering a purely non-causal VAR.

12.
We use panel probit models with unobserved heterogeneity, state dependence and serially correlated errors in order to analyse the determinants and the dynamics of current account reversals for a panel of developing and emerging countries. The likelihood-based inference of these models requires high-dimensional integration, for which we use efficient importance sampling. Our results suggest that the current account balance, terms of trade, foreign reserves and concessional debt are important determinants of current account reversals. Furthermore, we find strong evidence for serial dependence in the occurrence of reversals. While the likelihood criterion suggests that state dependence and serially correlated errors are essentially observationally equivalent, measures of predictive performance provide support for the hypothesis that the serial dependence is mainly due to serially correlated country-specific shocks related to local political or macroeconomic events.

13.
We develop a panel count model with a latent spatio-temporal heterogeneous state process for monthly severe crimes at the census-tract level in Pittsburgh, Pennsylvania. Our dataset combines Uniform Crime Reporting data with socio-economic data. The likelihood is estimated by efficient importance sampling techniques for high-dimensional spatial models. Estimation results confirm the broken-windows hypothesis whereby less severe crimes are leading indicators for severe crimes. In addition to ML parameter estimates, we compute several other statistics of interest for law enforcement such as spatio-temporal elasticities of severe crimes with respect to less severe crimes, out-of-sample forecasts, predictive distributions and validation test statistics. Copyright © 2016 John Wiley & Sons, Ltd.

14.
In this paper we discuss the analysis of data from population-based case-control studies when there is appreciable non-response. We develop a class of estimating equations that are relatively easy to implement. For some important special cases, we also provide efficient semi-parametric maximum-likelihood methods. We compare the methods in a simulation study based on data from the Women's Cardiovascular Health Study discussed in Arbogast et al. (Estimating incidence rates from population-based case-control studies in the presence of non-respondents, Biometrical Journal 44, 227–239, 2002).

15.
The class of p2 models is suitable for modeling binary relation data in social network analysis. A p2 model is essentially a regression model for bivariate binary responses, featuring within-dyad dependence and correlated crossed random effects to represent heterogeneity of actors. Despite some desirable properties, these models are used less frequently in empirical applications than other models for network data. A possible reason is the model's limited ability to account for (and explicitly model) structural dependence beyond the dyad, as can be done in exponential random graph models. Another reason, however, may lie in the computational difficulty of estimating such models with the methods proposed in the literature, such as joint maximization methods and Bayesian methods. The aim of this article is to investigate maximum likelihood estimation based on the Laplace approximation, which can be refined by importance sampling. Practical implementation of such methods can be performed in an efficient manner, and the article provides details on a software implementation using R. Numerical examples and simulation studies illustrate the methodology.
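The estimation idea in one dimension, as a hedged sketch: approximate the integral over a scalar random effect by the Laplace method, then refine it by importance sampling with the Laplace Gaussian as candidate (toy integrand; all names are illustrative):

```python
import numpy as np
from scipy import optimize, stats

def laplace_and_is(h, n_is=5000, seed=0):
    """Approximate the integral of exp(h(u)) du by (i) the Laplace method
    and (ii) importance sampling with the Laplace Gaussian
    N(u_hat, -1/h''(u_hat)) as candidate density."""
    rng = np.random.default_rng(seed)
    u_hat = optimize.minimize_scalar(lambda u: -h(u)).x   # mode of h
    eps = 1e-5                                            # numerical curvature
    h2 = (h(u_hat + eps) - 2 * h(u_hat) + h(u_hat - eps)) / eps**2
    sd = np.sqrt(-1.0 / h2)
    laplace = np.exp(h(u_hat)) * np.sqrt(2 * np.pi) * sd
    u = rng.normal(u_hat, sd, size=n_is)                  # IS refinement
    log_w = h(u) - stats.norm.logpdf(u, u_hat, sd)
    return laplace, np.exp(log_w).mean()

# toy non-Gaussian log-integrand standing in for a random-effect kernel
h = lambda u: -0.5 * u**2 - 0.1 * u**4 + 0.3 * u
print(laplace_and_is(h))
```

In a p2 model the integral is over crossed random effects rather than a scalar, but the Laplace-then-IS refinement follows the same pattern.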

16.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is cleverly updated when new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models like DCC or mixture GARCH models and a mixture instrumental variables model.
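MitISEM fits a full Student-t mixture by importance-weighted EM; as a much-simplified, single-component illustration of the adaptive idea, the following sketch updates a t candidate's location and scale from importance weights until the effective sample size stabilizes (illustrative only, not the MitISEM algorithm itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def log_target(x):
    # bimodal, skewed kernel: the kind of shape MitISEM is built for
    return np.log(0.7 * np.exp(-0.5 * (x + 1.5)**2)
                  + 0.3 * np.exp(-0.5 * ((x - 2.0) / 0.7)**2))

mu, scale, df = 0.0, 3.0, 5.0          # initial, deliberately wide candidate
for it in range(10):
    x = stats.t.rvs(df, loc=mu, scale=scale, size=5000, random_state=rng)
    log_w = log_target(x) - stats.t.logpdf(x, df, loc=mu, scale=scale)
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    mu = np.sum(w * x)                 # IS-weighted location update
    scale = np.sqrt(np.sum(w * (x - mu)**2))
    ess = 1.0 / np.sum(w**2)           # monitor candidate quality
print(mu, scale, ess)
```

A single t component can only widen to cover both modes; the point of fitting a mixture, as MitISEM does, is to place components on each mode and drive the effective sample size much higher.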

17.
In this paper, we consider the use of auxiliary data and paradata for dealing with non-response and measurement errors in household surveys. Three over-arching purposes are distinguished: response enhancement, statistical adjustment, and bias exploration. Attention is given to the varying focus at the different phases of statistical production, from collection and processing to analysis, and to how to select and utilize useful auxiliary data and paradata. Administrative register data provide the richest source of relevant auxiliary information, in addition to data collected in previous surveys and censuses. Because of their importance in dealing effectively with non-sampling errors, one should make every effort to increase their availability in the statistical system and, at the same time, to develop efficient statistical methods that capitalize on the combined data sources.

18.
Many new statistical models may enjoy better interpretability and numerical stability than traditional models in survival data analysis. Specifically, the threshold regression (TR) technique based on the inverse Gaussian distribution is a useful alternative to the Cox proportional hazards model for analysing lifetime data. In this article we consider a semi-parametric modelling approach for TR and provide implementation and theoretical details for model fitting and statistical inference. Extensive simulations are carried out to examine the finite sample performance of the parametric and non-parametric estimates. A real example is analysed to illustrate our methods, along with a careful diagnosis of model assumptions.
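The TR idea in miniature: latent health is a Wiener process starting at y0 > 0 with negative drift, and the event time is its first hit of zero, which follows an inverse Gaussian law; a hedged sketch of the resulting one-parameter log-likelihood (y0 and the process variance fixed for simplicity; all names hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def tr_nll(params, t, y0=1.0):
    """Negative log-likelihood of first-hitting times of zero for a Wiener
    process starting at y0 with drift mu (< 0 for certain hitting) and unit
    variance; the hitting time is inverse Gaussian with density
    y0 / sqrt(2 pi t^3) * exp(-(y0 + mu t)^2 / (2 t))."""
    mu = params[0]
    if mu >= 0:
        return np.inf
    ll = (np.log(y0) - 0.5 * np.log(2 * np.pi * t**3)
          - (y0 + mu * t)**2 / (2 * t)).sum()
    return -ll

rng = np.random.default_rng(6)
t = rng.wald(mean=2.0, scale=1.0, size=500)   # IG draws: y0=1, mu=-0.5, sigma=1
fit = minimize(tr_nll, x0=[-1.0], args=(t,), method="Nelder-Mead")
print(fit.x)                                   # should be near -0.5
```

In a regression version of TR, both the drift and the starting level are linked to covariates; the semi-parametric approach in the article replaces part of that link with non-parametric components.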

19.
20.
This paper is concerned with statistical inference for seemingly unrelated varying coefficient partially linear models. By combining the local polynomial and profile least squares techniques, and estimating the contemporaneous correlation, we propose a class of weighted profile least squares estimators (WPLSEs) for the parametric components. It is shown that the WPLSEs achieve the semiparametric efficiency bound and are asymptotically normal. For the non-parametric components, by applying the undersmoothing technique, and taking the contemporaneous correlation into account, we propose an efficient local polynomial estimation. The resulting estimators are shown to have mean-squared errors smaller than those of estimators that neglect the contemporaneous correlation. In addition, a class of variable selection procedures is developed for simultaneously selecting significant variables and estimating unknown parameters, based on the non-concave penalized and weighted profile least squares techniques. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures perform as efficiently as if one knew the true submodels. The proposed methods are evaluated using extensive simulation studies and applied to a set of real data.
