Similar Literature
20 similar documents found.
1.
This paper develops estimators for dynamic microeconomic models with serially correlated unobserved state variables using sequential Monte Carlo methods to estimate the parameters and the distribution of the unobservables. If persistent unobservables are ignored, the estimates can be subject to a dynamic form of sample selection bias. We focus on single-agent dynamic discrete-choice models and dynamic games of incomplete information. We propose a full-solution maximum likelihood procedure and a two-step method and use them to estimate an extended version of the capital replacement model of Rust with the original data and in a Monte Carlo study. Copyright © 2015 John Wiley & Sons, Ltd.

2.
This paper develops a pure simulation-based approach for computing maximum likelihood estimates in latent state variable models using Markov chain Monte Carlo (MCMC) methods. Our MCMC algorithm simultaneously evaluates and optimizes the likelihood function without resorting to gradient methods. The approach relies on data augmentation, with insights similar to simulated annealing and evolutionary Monte Carlo algorithms. We prove a limit theorem in the degree of data augmentation and use this to provide standard errors and convergence diagnostics. The resulting estimator inherits the asymptotic sampling properties of maximum likelihood. We demonstrate the approach on two latent state models central to financial econometrics: a stochastic volatility model and a multivariate jump-diffusion model. We find that convergence to the MLE is fast, requiring only a small degree of augmentation.

3.
This paper develops a new method for the analysis of stochastic volatility (SV) models. Since volatility is a latent variable in SV models, it is difficult to evaluate the exact likelihood. In this paper, a non-linear filter which yields the exact likelihood of SV models is employed. Solving a series of integrals in this filter by piecewise linear approximations with randomly chosen nodes produces the likelihood, which is maximized to obtain estimates of the SV parameters. A smoothing algorithm for volatility estimation is also constructed. Monte Carlo experiments show that the method performs well with respect to both parameter estimates and volatility estimates. We illustrate the method by analysing daily stock returns on the Tokyo Stock Exchange. Since the method can be applied to more general models, the SV model is extended to accommodate several characteristics of daily stock returns, and this more general model is also estimated. Copyright © 1999 John Wiley & Sons, Ltd.
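The filter described above integrates the latent volatility out of the likelihood recursively. As a loose illustration of the same idea, the sketch below evaluates the SV likelihood with a bootstrap particle filter rather than the paper's piecewise-linear, node-based filter; the function name, the parameterization of the volatility process, and all default values are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def sv_particle_loglik(y, mu, phi, sigma_eta, n_particles=1000, seed=0):
    """Bootstrap particle-filter estimate of the SV log-likelihood.
    Assumed model (|phi| < 1):
        h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t,  eta_t ~ N(0, 1)
        y_t = exp(h_t / 2) * eps_t,                          eps_t ~ N(0, 1)
    """
    rng = np.random.default_rng(seed)
    # initialize particles from the stationary distribution of h_t
    h = mu + np.sqrt(sigma_eta**2 / (1 - phi**2)) * rng.standard_normal(n_particles)
    loglik = 0.0
    for yt in y:
        # propagate particles through the state equation
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        # observation density N(0, exp(h)) evaluated at y_t
        w = np.exp(-0.5 * (np.log(2 * np.pi) + h + yt**2 * np.exp(-h)))
        loglik += np.log(w.mean())
        # multinomial resampling to limit weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        h = h[idx]
    return loglik

# Illustrative call on a vector of demeaned returns:
# ll = sv_particle_loglik(returns, mu=-9.0, phi=0.97, sigma_eta=0.15)
```

Maximizing this simulated log-likelihood over (mu, phi, sigma_eta) mimics the paper's strategy of maximizing a filter-based likelihood approximation.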

4.
A simulation-based non-linear filter is developed for prediction and smoothing in non-linear and/or non-normal structural time-series models. Recursive algorithms for the weighting functions are derived by applying Monte Carlo integration. Through Monte Carlo experiments, it is shown that (1) for a small number of random draws (or nodes) our simulation-based density estimator using Monte Carlo integration (SDE) performs better than Kitagawa's numerical integration procedure (KNI), and (2) SDE and KNI give less biased parameter estimates than the extended Kalman filter (EKF). Finally, estimation of per capita final consumption data is presented as an application of the non-linear filtering problem.

5.
This paper introduces a quasi maximum likelihood approach based on the central difference Kalman filter to estimate non-linear dynamic stochastic general equilibrium (DSGE) models with potentially non-Gaussian shocks. We argue that this estimator can be expected to be consistent and asymptotically normal for DSGE models solved up to third order. These properties are verified in a Monte Carlo study for a DSGE model solved to second and third order with structural shocks that are Gaussian, Laplace distributed, or display stochastic volatility. Copyright © 2012 John Wiley & Sons, Ltd.

6.
This paper concerns estimating parameters in a high-dimensional dynamic factor model by the method of maximum likelihood. To accommodate missing data in the analysis, we propose a new model representation for the dynamic factor model. It allows the Kalman filter and related smoothing methods to evaluate the likelihood function and to produce optimal factor estimates in a computationally efficient way when missing data is present. The implementation details of our methods for signal extraction and maximum likelihood estimation are discussed. The computational gains of the new devices are presented based on simulated data sets with varying numbers of missing entries.
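The paper's point is that a suitable state-space representation lets the Kalman filter evaluate the likelihood efficiently when some series are unobserved at some dates. The generic sketch below (not the authors' particular representation) handles missing entries by simply dropping the corresponding rows of the measurement equation at each time step; the matrix names and the diffuse-style initialization are assumptions.

```python
import numpy as np

def kf_loglik_missing(Y, Z, T, H, Q):
    """Kalman filter log-likelihood for a dynamic factor model
        y_t = Z f_t + e_t,      e_t ~ N(0, H)
        f_t = T f_{t-1} + u_t,  u_t ~ N(0, Q)
    Missing entries in Y (coded as NaN) are handled by keeping only
    the observed rows of the measurement equation at each t.
    """
    n, k = Y.shape[0], T.shape[0]
    a = np.zeros(k)                       # initial state mean
    P = np.eye(k) * 1e4                   # large prior variance (approx. diffuse)
    loglik = 0.0
    for t in range(n):
        # prediction step
        a = T @ a
        P = T @ P @ T.T + Q
        obs = ~np.isnan(Y[t])             # which series are observed at time t
        if obs.any():
            Zt = Z[obs]                   # measurement matrix for observed rows
            v = Y[t, obs] - Zt @ a        # one-step-ahead prediction error
            F = Zt @ P @ Zt.T + H[np.ix_(obs, obs)]
            Finv = np.linalg.inv(F)
            K = P @ Zt.T @ Finv           # Kalman gain
            a = a + K @ v
            P = P - K @ Zt @ P
            loglik += -0.5 * (obs.sum() * np.log(2 * np.pi)
                              + np.linalg.slogdet(F)[1] + v @ Finv @ v)
    return loglik
```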

7.
This paper provides an approach to estimation and inference for nonlinear conditional mean panel data models, in the presence of cross-sectional dependence. We modify Pesaran's (Econometrica, 2006, 74(4), 967–1012) common correlated effects correction to filter out the interactive unobserved multifactor structure. The estimation can be carried out using nonlinear least squares, by augmenting the set of explanatory variables with cross-sectional averages of both linear and nonlinear terms. We propose pooled and mean group estimators, derive their asymptotic distributions, and show the consistency and asymptotic normality of the coefficients of the model. The features of the proposed estimators are investigated through extensive Monte Carlo experiments. We also present two empirical exercises. The first explores the nonlinear relationship between banks' capital ratios and riskiness. The second estimates the nonlinear effect of national savings on national investment in OECD countries depending on countries' openness.
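For intuition on the common correlated effects correction, the sketch below implements the standard linear pooled CCE estimator of Pesaran (2006), where the unobserved factors are proxied by cross-sectional averages of the dependent variable and the regressors. The paper's contribution, the extension of this idea to nonlinear conditional mean models via nonlinear least squares, is not reproduced here; the array shapes and function name are assumptions.

```python
import numpy as np

def ccep(y, X):
    """Linear pooled CCE (CCEP) estimator sketch.
    y : (N, T) array of outcomes, X : (N, T, K) array of regressors.
    The unobserved factors are approximated by the cross-sectional
    averages of y and X, which are projected out of each unit's data.
    """
    N, T, K = X.shape
    # augmentation matrix: intercept plus cross-sectional averages
    H = np.column_stack([np.ones(T), y.mean(axis=0), X.mean(axis=0)])
    M = np.eye(T) - H @ np.linalg.pinv(H)   # projects off the averages
    A = np.zeros((K, K))
    b = np.zeros(K)
    for i in range(N):
        Xi, yi = X[i], y[i]
        A += Xi.T @ M @ Xi                  # pooled cross-product terms
        b += Xi.T @ M @ yi
    return np.linalg.solve(A, b)            # pooled slope estimate
```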

8.
Dynamic model averaging (DMA) has become a very useful tool for dealing with two important aspects of time-series analysis, namely, parameter instability and model uncertainty. An important component of DMA is the Kalman filter. It is used to recover the latent time-varying regression coefficients of the predictive regression of interest and to produce the model predictive likelihood, which is needed to construct the probability of each model in the model set. To apply the Kalman filter, one must write the model of interest in linear state-space form. In this study, we demonstrate that the state-space representation has implications for out-of-sample prediction performance and the degree of shrinkage. Using Monte Carlo simulations as well as financial data at different sampling frequencies, we document that the way in which the current literature tends to formulate the candidate time-varying parameter predictive regression in linear state-space form ignores empirical features that are often present in the data at hand, namely, predictor persistence and predictor endogeneity. We suggest a straightforward way to account for these features in the DMA setting. Results using the widely applied Goyal and Welch (2008) dataset document that modifying the DMA framework as we suggest has a bearing on equity premium point prediction performance from a statistical as well as an economic viewpoint.
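As a reference point, here is a minimal sketch of the "textbook" linear state-space formulation the abstract refers to: a random-walk time-varying parameter regression filtered with the Kalman filter, returning the one-step-ahead predictive log densities that DMA uses to weight models. The variance inputs V and W and the initialization are assumptions; the modification for predictor persistence and endogeneity proposed in the paper is not shown.

```python
import numpy as np
from scipy.stats import norm

def tvp_predictive_loglik(y, X, V, W):
    """Kalman filter for the standard TVP predictive regression
        y_t    = x_t' beta_t + e_t,    e_t ~ N(0, V)
        beta_t = beta_{t-1} + u_t,     u_t ~ N(0, W)
    Returns the one-step-ahead predictive log densities used to build
    DMA model probabilities.
    """
    n, k = X.shape
    b = np.zeros(k)                 # prior mean of the coefficients
    P = np.eye(k) * 10.0            # prior variance (assumed)
    pred_logdens = np.empty(n)
    for t in range(n):
        P = P + W                   # time update of the random-walk coefficients
        f = X[t] @ b                # one-step-ahead prediction of y_t
        Q = X[t] @ P @ X[t] + V
        pred_logdens[t] = norm.logpdf(y[t], loc=f, scale=np.sqrt(Q))
        K = P @ X[t] / Q            # Kalman gain
        b = b + K * (y[t] - f)      # measurement update
        P = P - np.outer(K, X[t]) @ P
    return pred_logdens
```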

9.
Sequential Data Assimilation Techniques in Oceanography
We review recent developments of sequential data assimilation techniques used in oceanography to integrate spatio-temporal observations into numerical models describing physical and ecological dynamics. Theoretical aspects, from the simple case of linear dynamics to the general case of nonlinear dynamics, are described from a geostatistical point of view. Current methods derived from the Kalman filter are presented from the least complex to the most general, and perspectives for nonlinear estimation by sequential importance resampling filters are discussed. Furthermore, an extension of the ensemble Kalman filter to transformed Gaussian variables is presented and illustrated using a simplified ecological model. The described methods are designed for predicting over geographical regions at a high spatial resolution, under the practical constraint of keeping computing time sufficiently low to obtain the prediction before the fact. Therefore, the paper focuses on widely used and computationally efficient methods.
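For concreteness, the sketch below shows a single analysis step of a stochastic (perturbed-observation) ensemble Kalman filter with a linear observation operator, the building block that the review extends to transformed Gaussian variables; the anamorphosis transformation itself is not implemented, and all names and array shapes are assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng=None):
    """One analysis step of a stochastic (perturbed-observation) EnKF.
    ensemble : (n_members, n_state) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    if rng is None:
        rng = np.random.default_rng()
    X = ensemble
    n_members = X.shape[0]
    A = X - X.mean(axis=0)                  # ensemble anomalies
    Pf = A.T @ A / (n_members - 1)          # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)         # Kalman gain built from the ensemble
    # perturb the observations so the analysis ensemble keeps the right spread
    Y = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_members)
    return X + (Y - X @ H.T) @ K.T          # analysis (updated) ensemble
```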

10.
This paper presents estimation methods for dynamic nonlinear models with correlated random effects (CRE) when panels are unbalanced. Unbalancedness is often encountered in applied work, and ignoring it in dynamic nonlinear models produces inconsistent estimates even if the unbalancedness process is completely at random. We show that selecting a balanced panel from the sample can produce efficiency losses or even inconsistent estimates of the average marginal effects. We allow the process that determines the unbalancedness structure of the data to be correlated with the permanent unobserved heterogeneity. We discuss how to address the estimation by maximizing the likelihood function for the whole sample, and we also propose a minimum distance approach, which is computationally simpler and asymptotically equivalent to maximum likelihood estimation. Our Monte Carlo experiments and empirical illustration show that the issue is relevant. Our proposed solutions perform better, in terms of both bias and RMSE, than approaches that ignore the unbalancedness or that balance the sample.

11.
We propose and study the finite-sample properties of a modified version of the self-perturbed Kalman filter of Park and Jun (Electronics Letters 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by an estimate of the measurement error variance. This avoids the calibration of a design parameter, as the perturbation term is scaled by the amount of uncertainty in the data. Monte Carlo simulations show that this perturbation method tracks the parameter dynamics well compared with other online algorithms and with classical and Bayesian methods. The standardized self-perturbed Kalman filter is used to forecast the equity premium on the S&P 500 index under several model specifications and to determine the extent to which realized variance can be used to predict excess returns. Copyright © 2016 John Wiley & Sons, Ltd.
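The exact recursion of Park and Jun (1992) and its standardized version are not reproduced here. The sketch below only illustrates the idea described in the abstract: re-inflating the state covariance of an online time-varying parameter regression by a perturbation term scaled by a running estimate of the measurement error variance. The tuning constant lam, the EWMA noise-variance update, and the initialization are illustrative assumptions.

```python
import numpy as np

def self_perturbed_kf(y, X, lam=0.01):
    """Illustrative online TVP regression with a self-perturbed covariance
    update. The perturbation added to P is scaled by a running estimate of
    the measurement error variance (sigma2), so no design parameter has to
    be calibrated to the scale of the data.
    """
    n, k = X.shape
    b = np.zeros(k)
    P = np.eye(k)
    sigma2 = np.var(y[: min(20, n)])        # crude initial noise estimate
    betas = np.empty((n, k))
    for t in range(n):
        x = X[t]
        e = y[t] - x @ b                    # one-step-ahead prediction error
        S = x @ P @ x + sigma2
        K = P @ x / S                       # Kalman gain
        b = b + K * e                       # coefficient update
        P = P - np.outer(K, x) @ P
        # self-perturbation: re-inflate P in proportion to the noise estimate
        P = P + lam * sigma2 * np.eye(k)
        # recursive (EWMA) update of the measurement error variance estimate
        sigma2 = 0.99 * sigma2 + 0.01 * e**2
        betas[t] = b
    return betas
```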

12.
We develop a sequential Monte Carlo (SMC) algorithm for estimating Bayesian dynamic stochastic general equilibrium (DSGE) models, in which a particle approximation to the posterior is built iteratively by tempering the likelihood. Using two empirical illustrations, the Smets and Wouters model and a larger news shock model, we show that the SMC algorithm is better suited to multimodal and irregular posterior distributions than the widely used random walk Metropolis–Hastings algorithm. We find that a more diffuse prior for the Smets and Wouters model improves its marginal data density, and that a slight modification of the prior for the news shock model leads to drastic changes in the posterior inference about the importance of news shocks for fluctuations in hours worked. Unlike standard Markov chain Monte Carlo (MCMC) techniques, the SMC algorithm is well suited to parallel computing. Copyright © 2014 John Wiley & Sons, Ltd.

13.
Many cases of strategic interaction between agents involve a continuous set of choices, so it is natural to model these problems as continuous-space games. Consequently, the population of agents playing the game is represented by a density function defined over the continuous set of strategy choices. Simulating evolutionary dynamics on continuous strategy spaces is a challenging problem: the classic approach of discretizing the strategy space is ineffective for multidimensional strategy spaces. We present a principled approach to the simulation of adaptive dynamics in continuous-space games using sequential Monte Carlo methods. Sequential Monte Carlo methods use a set of weighted random samples, also known as particles, to represent density functions over multidimensional spaces, and they provide computationally efficient ways of computing the evolution of probability density functions. We employ resampling and smoothing steps to prevent the particle degeneracy problem associated with particle estimates. The resulting algorithm can be interpreted as an agent-based simulation with elements of natural selection, regression to the mean, and mutation. We illustrate the performance of the proposed simulation technique using two examples: a continuous version of the repeated prisoner's dilemma game and the evolution of bidding functions in first-price sealed-bid auctions.
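A minimal sketch of the particle-based simulation idea, under assumptions not taken from the paper: a one-dimensional strategy space on [0, 1], softmax fitness weights, multinomial resampling for selection, and a Gaussian mutation/smoothing step to counter particle degeneracy. The payoff function in the usage line is purely illustrative.

```python
import numpy as np

def smc_adaptive_dynamics(payoff, n_particles=2000, n_steps=200,
                          mutate_sd=0.02, rng=None):
    """Particle simulation of adaptive dynamics on a 1-D continuous
    strategy space. `payoff(s, pop)` must return the payoff of each
    strategy in `s` when matched against the current population sample
    `pop`. Selection = reweighting and resampling by (softmax) fitness;
    mutation = small Gaussian perturbation that also smooths the particle
    approximation of the population density.
    """
    if rng is None:
        rng = np.random.default_rng()
    particles = rng.uniform(0.0, 1.0, n_particles)   # initial population
    for _ in range(n_steps):
        fit = payoff(particles, particles)
        w = np.exp(fit - fit.max())                  # positive (softmax) weights
        w /= w.sum()
        # selection: resample strategies in proportion to fitness
        particles = rng.choice(particles, n_particles, p=w)
        # mutation / smoothing step against particle degeneracy
        particles = np.clip(
            particles + mutate_sd * rng.standard_normal(n_particles), 0.0, 1.0)
    return particles

# Illustrative payoff: best reply depends on the population mean.
pop = smc_adaptive_dynamics(lambda s, pop: -(s - 0.5 * pop.mean())**2)
```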

14.
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used instead of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes in which the likelihood is estimated in an unbiased manner.
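A skeleton of the particle marginal Metropolis–Hastings scheme the article builds on: an unbiased particle-filter likelihood estimate is plugged into an otherwise standard random-walk Metropolis–Hastings chain, and the stored estimate is reused until the next acceptance. The function signature and the random-walk proposal are assumptions; the article's guidance on the number of particles and the fully adapted auxiliary particle filter are not implemented.

```python
import numpy as np

def pmmh(loglik_hat, log_prior, theta0, n_iter=5000, step_sd=0.1, rng=None):
    """Particle marginal Metropolis-Hastings sketch.
    `loglik_hat(theta)` must return the log of an unbiased particle-filter
    estimate of the likelihood; keeping the stored estimate fixed between
    acceptances is what makes the chain target the exact posterior.
    """
    if rng is None:
        rng = np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    ll = loglik_hat(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step_sd * rng.standard_normal(theta.size)
        ll_prop = loglik_hat(prop)          # fresh likelihood estimate at the proposal
        log_alpha = (ll_prop + log_prior(prop)) - (ll + log_prior(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop       # accept and keep the new estimate
        draws[i] = theta
    return draws
```

For instance, `loglik_hat` could wrap a bootstrap or auxiliary particle filter for the state space model at hand; the noisier the estimate, the stickier the chain, which is exactly why the choice of the number of particles matters.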

15.
This paper discusses the estimation of a class of nonlinear state space models including nonlinear panel data models with autoregressive error components. A health economics example illustrates the usefulness of such models. For the approximation of the likelihood function, nonlinear filtering algorithms developed in the time-series literature are considered. Because of the relatively simple structure of these models, a straightforward algorithm based on sequential Gaussian quadrature is suggested. It performs very well both in the empirical application and a Monte Carlo study for ordered logit and binary probit models with an AR(1) error component. Copyright © 2008 John Wiley & Sons, Ltd.

16.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix and adjusting standard error estimates and the model chi-square statistic, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general, unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and in the evaluation criteria, especially under small-sample and skewed-variable conditions, were discussed.

17.
This note develops the Bayesian estimation of the parameters of Solow's distributed lag model with implicit autocorrelation of disturbances in its autoregressive form. The estimation technique extends Chetty's method for independent disturbances. The results of some Monte Carlo experiments are given comparing point estimates from the posterior distributions with the maximum likelihood estimates. The characteristics of the Bayesian and maximum likelihood estimates are very similar.

18.
Nonlinear taxes create econometric difficulties when estimating labor supply functions. One estimation method that tackles these problems accounts for the complete form of the budget constraint and uses the maximum likelihood method to estimate parameters. Another method linearizes budget constraints and uses instrumental variables techniques. Using Monte Carlo simulations, I investigate the small-sample properties of these estimation methods and how they are affected by measurement errors in independent variables. No estimator is uniquely best. Hence, in actual estimation, the choice of estimator should depend on the sample size and the type of measurement errors in the data. Complementing actual estimates with a Monte Carlo study of the estimator used, given the type of measurement errors that characterize the data, would often help in interpreting the estimates. This paper shows how such a study can be performed.

19.
We show that using ordinary least squares to explore relationships involving firm-level stock returns as the dependent variable, in the face of structured dependence between individual firms, leads to an endogeneity problem. This in turn leads to biased and inconsistent least-squares estimates. A maximum likelihood estimation procedure that produces consistent estimates in these situations is illustrated. This is done using methods developed to deal with spatial dependence between regional data observations, which can be applied to situations involving firm-level observations that exhibit a structure of dependence. In addition, we show how to correctly interpret maximum likelihood parameter estimates from these models in the context of firm-level dependence, and we provide a Monte Carlo as well as an applied illustration of the magnitude of bias that can arise.
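For reference, here is a minimal sketch of maximum likelihood estimation of a spatial lag (SAR) model, the standard way the spatial econometrics literature handles the endogeneity of the spatially lagged dependent variable that OLS ignores; it concentrates the log-likelihood in the dependence parameter rho. A row-standardized weight matrix W is assumed, the dense log-determinant makes this suitable only for modest sample sizes, and this is a generic illustration rather than the authors' exact estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sar_mle(y, X, W):
    """Concentrated-likelihood ML for the spatial lag (SAR) model
        y = rho * W y + X beta + eps,   eps ~ N(0, sigma2 I).
    For a given rho, beta and sigma2 have closed-form estimates, so the
    log-likelihood is maximized over rho alone.
    """
    n = len(y)
    Wy = W @ y

    def neg_conc_loglik(rho):
        yr = y - rho * Wy                               # spatially filtered outcome
        beta = np.linalg.lstsq(X, yr, rcond=None)[0]    # OLS of yr on X
        e = yr - X @ beta
        sigma2 = e @ e / n
        _, logdet = np.linalg.slogdet(np.eye(n) - rho * W)  # Jacobian term
        return -(logdet - 0.5 * n * np.log(sigma2))

    res = minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99), method="bounded")
    rho = res.x
    beta = np.linalg.lstsq(X, y - rho * Wy, rcond=None)[0]
    return rho, beta
```

In the firm-level application described above, W would encode the assumed structure of dependence between firms (for example, industry or ownership links) instead of geographic neighbourhood.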

20.